Program Evaluation in Language Education
Richard Kiely and Pauline Rea-Dickins
Research and Practice in Applied Linguistics

General Editors: Christopher N. Candlin, Macquarie University, Australia and the Open University, UK, and David R. Hall, Macquarie University, Australia.

All books in this series are written by leading researchers and teachers in Applied Linguistics, with broad international experience. They are designed for the MA or PhD student in Applied Linguistics, TESOL or similar subject areas and for the language professional keen to extend their research experience.

Titles include:
Richard Kiely and Pauline Rea-Dickins  PROGRAM EVALUATION IN LANGUAGE EDUCATION
Cyril J. Weir  LANGUAGE TESTING AND VALIDATION

Forthcoming titles:
Martin Bygate and Virginia Samuda  TASKS IN LANGUAGE LEARNING
Francesca Bargiela, Catherine Nickerson and Brigitte Planken  BUSINESS DISCOURSE
Sandra Gollin and David R. Hall  LANGUAGE FOR SPECIFIC PURPOSES
Sandra Hale  COMMUNITY INTERPRETING
Geoff Hall  LITERATURE IN LANGUAGE EDUCATION
Marilyn Martin-Jones  BILINGUALISM
Martha Pennington  PRONUNCIATION
Devon Woods and Emese Bukor  INSTRUCTIONAL STRATEGIES IN LANGUAGE EDUCATION
Tony Wright  LANGUAGE EDUCATION AND CLASSROOM MANAGEMENT
Program Evaluation in Language Education Richard Kiely and Pauline Rea-Dickins Graduate School of Education, University of Bristol
© Richard Kiely and Pauline Rea-Dickins 2005

All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No paragraph of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London W1T 4LP. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The authors have asserted their rights to be identified as the authors of this work in accordance with the Copyright, Designs and Patents Act 1988.

First published 2005 by PALGRAVE MACMILLAN, Houndmills, Basingstoke, Hampshire RG21 6XS and 175 Fifth Avenue, New York, N.Y. 10010. Companies and representatives throughout the world.

PALGRAVE MACMILLAN is the global academic imprint of the Palgrave Macmillan division of St. Martin’s Press, LLC and of Palgrave Macmillan Ltd. Macmillan® is a registered trademark in the United States, United Kingdom and other countries. Palgrave is a registered trademark in the European Union and other countries.

ISBN-13: 978–1–4039–4570–9 hardback
ISBN-10: 1–4039–4570–5 hardback
ISBN-13: 978–1–4039–4571–6 paperback
ISBN-10: 1–4039–4571–3 paperback

This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources.

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Kiely, Richard, 1955–
Program evaluation in language education / Richard Kiely and Pauline Rea-Dickins.
p. cm. — (Research and practice in applied linguistics)
Includes bibliographical references and index.
ISBN 1–4039–4570–5 (cloth) — ISBN 1–4039–4571–3 (pbk.)
1. Language and languages—Study and teaching—Evaluation. 2. Language and languages—Ability testing. 3. English language—Study and teaching—Foreign speakers—Evaluation. I. Rea-Dickins, Pauline. II. Title. III. Series.
P53.63.K54 2005
418′.0076—dc22
2005043358

10 9 8 7 6 5 4 3 2 1
14 13 12 11 10 09 08 07 06 05
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham and Eastbourne
For our mothers Dolly and Dorothy and Linda, Ellen, Laura, James, Joe and Dan
Contents

General Editors’ Preface
Acknowledgements

Part 1 Departure Points

Introduction

1 Themes and Challenges
1.1 Introduction
1.2 Three features of evaluation
1.3 Five challenges for evaluation
1.4 Summary

2 Historical Perspectives: Focus on Design and Method
2.1 Introduction
2.2 Early judgements
2.3 Early evaluations
2.4 Evaluation as measurement and comparison
2.5 Evaluation as a focus on worth and development
2.6 Evaluation standards
2.7 New theoretical perspectives
2.8 Summary

3 Historical Perspectives: Focus on Context and Use
3.1 Introduction
3.2 Use of evaluation
3.3 Constructivism in evaluation
3.4 Realism in evaluation
3.5 Political dimensions of evaluation
3.6 Compliance with mandates
3.7 Summary

4 Historical Perspectives: Language Program Evaluation and Applied Linguistics
4.1 Introduction
4.2 Evaluation in language education and Applied Linguistics
4.3 Language program evaluations
4.4 Language evaluation outside language teaching programmes
4.5 Summary

Part 2 Cases and Issues

Introduction

5 Evaluating Teachers’ English Language Competence
5.1 Introduction
5.2 Context
5.3 Scope and aims of the evaluation
5.4 Evaluation planning
5.5 Articulating constructs and procedures
5.6 Evaluation implementation and implications
5.7 Summary

6 Evaluating a Language through Science Program
6.1 Introduction
6.2 Context
6.3 Aims and scope of the evaluation
6.4 Evaluation design
6.5 Some findings
6.6 Implications for evaluation
6.7 Summary

7 Evaluating the Contribution of the Native Speaker Teacher
7.1 Introduction
7.2 Context
7.3 Aims and scope of the evaluation
7.4 Evaluation design
7.5 Data collection
7.6 Some findings
7.7 Implications for evaluation
7.8 Afterword – 15 years on
7.9 Summary

8 Evaluating Foreign Language Teaching in Primary Schools
8.1 Introduction
8.2 Context
8.3 Aims and scope of the evaluation
8.4 Design of the evaluation
8.5 Data collection
8.6 Implications for evaluation
8.7 Summary

9 Evaluating Program Quality in Higher Education
9.1 Introduction
9.2 Context
9.3 Aims and scope of the evaluation
9.4 Design of the evaluation
9.5 Implications for evaluation
9.6 Summary

10 Evaluating the Student Experience in Higher Education
10.1 Introduction
10.2 Context
10.3 Aims and scope of the evaluation
10.4 Data collection
10.5 Implications for evaluation
10.6 Summary

11 Evaluating Assessment Standards and Frameworks
11.1 Introduction
11.2 Evaluation study 1: Evaluation of national and state frameworks for the assessment of learners of English as an additional language
11.3 Evaluation study 2: The Centre for Canadian Language Benchmarks
11.4 Implications for evaluation practice
11.5 Summary

12 Stakeholding in Evaluation
12.1 Introduction
12.2 Understanding stakeholding
12.3 Modes of stakeholder involvement
12.4 Stakeholding and the locus of control
12.5 Summary
Appendix 12.1 Student action points
Appendix 12.2 Focus group instruments

Part 3 Evaluation Practice and Research

Introduction

13 Large-scale Evaluations
13.1 Introduction
13.2 Understanding large-scale evaluations
13.3 How will you validate the evaluation issues and questions?
13.4 Which evaluation procedures will you use, and why, and how will you analyse your data?
13.5 Why and how will you develop evaluation skills?
13.6 What are the constraints that you will be working with in your evaluation?
13.7 Developing ethical mindfulness
13.8 Summary

14 Teacher-led Evaluations
14.1 Introduction
14.2 The scope of teacher-led evaluations
14.3 Evaluation projects
14.4 Sample projects
14.5 Summary

15 Management-led Evaluation Projects
15.1 Introduction
15.2 The scope of management-led evaluations
15.3 Evaluation projects
15.4 Sample projects
15.5 Summary

Part 4 Resources

16 Resources for Language Program Evaluation
16.1 Introduction
16.2 Books
16.3 Journals
16.4 Professional associations
16.5 Ethical guides and best practice codes
16.6 Email lists and bulletin boards
16.7 Additional internet resources

Postscript
Bibliography
Index
General Editors’ Preface

Research and Practice in Applied Linguistics is an international book series from Palgrave Macmillan which brings together leading researchers and teachers in Applied Linguistics to provide readers with the knowledge and tools they need to undertake their own practice-related research. Books in the series are designed for students and researchers in Applied Linguistics, TESOL, Language Education and related subject areas, and for language professionals keen to extend their research experience.

Every book in this innovative series is designed to be user-friendly, with clear illustrations and accessible style. The quotations and definitions of key concepts that punctuate the main text are intended to ensure that many, often competing, voices are heard. Each book presents a concise historical and conceptual overview of its chosen field, identifying many lines of enquiry and findings, but also gaps and disagreements. It provides readers with an overall framework for further examination of how research and practice inform each other, and how practitioners can develop their own problem-based research.

The focus throughout is on exploring the relationship between research and practice in Applied Linguistics. How far can research provide answers to the questions and issues that arise in practice? Can research questions that arise and are examined in very specific circumstances be informed by, and inform, the global body of research and practice? What different kinds of information can be obtained from different research methodologies? How should we make a selection between the options available, and how far are different methods compatible with each other? How can the results of research be turned into practical action?

The books in this series identify some of the key researchable areas in the field and provide workable examples of research projects, backed up by details of appropriate research tools and resources. Case studies and exemplars of research and practice are drawn on throughout the books. References to key institutions, individual research lists, journals and professional organizations provide starting points for gathering information and embarking on research. The books also include annotated lists of key works in the field for further study.

The overall objective of the series is to illustrate the message that in Applied Linguistics there can be no good professional practice that isn’t based on good research, and there can be no good research that isn’t informed by practice.

Christopher N. Candlin
Macquarie University, Sydney and Open University, UK

David R. Hall
Macquarie University, Sydney
Acknowledgements

As a number of the case studies that feature in Part 2 of this book have been worked on by several individuals, we would like to acknowledge fully their contributions to the evaluations concerned. These include the contributions of John Clegg, Dr Miyoko Kobayashi and Ping-Huang Sheu, who worked on the evaluations reported in chapters 5 and 6; and Marianne Cutler, who was the coordinator of the Science Across Europe programme (chapter 5) at that time. We thank Professor John Harris for making available the evaluation of the Primary Modern Language Project in Ireland.

In chapter 11 we report on the evaluation of assessment frameworks. Part of this chapter is based on the work of Hugh South, Constant Leung, Catriona Scott and Sibel Erduran (Pauline Rea-Dickins was also a member of this team); equally, our gratitude and acknowledgements go to Professor Alister Cumming, who kindly made available the documentation on the evaluation of the Centre for Canadian Language Benchmarks, conducted by Professor Cumming and his colleagues.

Dermot Murphy, formerly of King’s College London, has been influential in discussions on participatory approaches to evaluation and the role of stakeholders in program evaluation. We have drawn on his writings and fully acknowledge the influence of his work and ideas in chapter 12.

Throughout the book we have referred to a number of external evaluations conducted by the authors, and those who have contributed to these are numerous: Hilda Banda, Faustina Caley, Z. Condo, Mary Conway, Rose Clark, Professor Alan Davies, Sibel Erduran, Dr A. N. Idama, Dr Kia Karavas-Doukas, Alan Fortune, Lesley Hayman, Dr Patricia Hedge, Cecelia Jacobs, Dr Keith Johnson, J. Kamugisha, Dr M. Kapoli, Muriel Kirton, C. Kivanda, Esther Mabuza, Patricia Mathe, Pretty Masuku, Mary Mgaya, N. Mtana, A. S. Mwaimu, M. Mwandezi, Thembe Nkumalo, Charles Nuttall, Lulu Nxumalo, Denise O’Leary, John O’Regan, Mike Potter, Margaret Reid, Val Roper, Dr Casmir Rubagumya, Goba Shabangu, John Shillaw, Gladys Tang, Julian Tetlow, Sue Timmis. Special thanks also go to Guoxing Yu for his contributions to Part 4. We apologise most sincerely to any of our collaborators whom we have inadvertently omitted from this list.

Finally, but certainly not least, we are also immensely grateful for the timely and incisive comments from the series editor, Professor Christopher Candlin, which, although they gave us more to think about and problematise, as well as extra work at the time, have undoubtedly steered us towards the development of a better volume. Ultimately, though, any weaknesses or infelicities are ours.

Richard Kiely and Pauline Rea-Dickins
Part 1 Departure Points
Introduction
In Part 1 we set out the background to current understandings of language program evaluation. We explore three fields: the nature of social programs in general and the purposes and tasks involved in evaluating them; educational programs and the traditions of evaluation for the purposes of policy development, management of change and inspection; and the specific field of language program evaluation where practice has been shaped by developments in language learning theory, language teaching methodology and, more recently, perspectives on language use in programs where the objectives do not relate to second or foreign language teaching and learning.

In chapter 1 we outline five challenges for evaluation. These relate to the purposes of evaluation; participants and stakeholders; evaluation criteria; data; and the role of evaluation in program management. The challenge in each case is to understand and develop the construct so that both evaluation process and product have an improving effect.

In chapters 2–4 we explore some developments in evaluation theory over the last several decades. The task is to focus on those developments that have proved enduring and that still inform, or have the potential to inform, language program evaluation. In chapter 2, we look at the tradition of evaluation as measurement and comparison. The Tylerian model, based on measurable learning outcomes, and the classic experimental design are still key reference points in evaluation design, even if they no longer have the dominance they once did. In chapter 3 we examine a range of perspectives proposed as alternatives to the measurement of outcomes. From the vantage point of pedagogic practice within programs, the work of both Stenhouse and Stake is valuable in linking evaluation to learning and to the work of teachers. More explicitly philosophical perspectives are provided by realists such as Pawson and Tilley and constructivists such as Lincoln and Guba. The increasing focus on the use of evaluation for increased program effectiveness derives from the work of Checkland (soft systems methodology) and Patton (utilisation-focused evaluation). One form of evaluation which we see as a developing trend reflecting wider discourse in the management of service provision is enquiry to monitor compliance with mandates.

In chapter 4 we examine the specific context of language program evaluation. We identify two features that have shaped practice in this field: the influence of language learning theory; and the design and implementation of evaluation in educational aid project contexts. Recent developments in this area, such as the Project Development Support Scheme (PRODESS) and Mackay’s quality management orientation to evaluation, foreshadow key themes developed further in the case studies in Part 2.
1 Themes and Challenges
1.1 Introduction

Evaluation has many meanings in language programs. It is part of the novice teacher’s checklist to guide the development of initial lesson plans and teaching practice, a process of determining learning achievements or student satisfaction, and a dimension of the analysis of data in a formal evaluation or research study. It refers to judgements about students by teachers and by external assessors; the performance of teachers by their students, program managers and institutions; and programs, departments and institutions by internal assessors, external monitors and inspectors. Evaluation is about the relationships between different program components, the procedures and epistemologies developed by the people involved in programs, and the processes and outcomes which are used to show the value of a program – accountability – and enhance this value – development.

This chapter provides an overview of this territory. It identifies themes and notions examined in their historical context in Part 1, in the case studies in Part 2, and in the ways forward for language program evaluation in Part 3. In this chapter we outline three characteristics of language program evaluation as a field of study (section 1.2). Then we set out five challenges for evaluation – themes which together constitute a framework for developing the theory and practice in relation to aspects of language programs informed by Applied Linguistics on the one hand, and by the fields of education and management on the other.
1.2 Three features of evaluation

The study and practice of evaluation has developed in diverse ways over recent decades. These developments are driven by issues from within evaluation and aspects of the wider socio-political context. Three features of evaluation theory and practice illustrate the complexity of these developments and the difficulties inherent in the task of mapping achievements and directions.
First, there is the question of definition; evaluation is a form of enquiry, ranging from research to systematic approaches to decision-making. Our account of the history of evaluation over the past several decades in chapter 2 illustrates a progression from reliance on stripped-down statistical representations of a program to inclusive, multi-perspective approaches. The common thread – the making of judgements in a shared context – gives a problematically wide-ranging basket of activities. Thus, in the context of an innovative language program, evaluation might include periodic reviews of the budget, staff appraisal and decisions relating to professional development, iterated classroom observation for professional development of teachers or for quality assurance purposes, narratives of experience from participants, as well as a one-off study to inform on the success of the innovation.

Second, there are two perspectives on evaluation research. It is viewed, on the one hand, as a type of study which has both research functions – rolling back the frontiers of knowledge – and evaluation functions – providing information for judgements or decision-making; and, on the other, as research into the processes of evaluation. The former perspective has been significant in language program evaluations, as evidenced by edited collections such as those by Alderson and Beretta (1992), Rea-Dickins and Lwaitama (1995) and Rea-Dickins and Germaine (1998). In the latter perspective evaluation research can be seen as analogous to the research which has for decades underpinned the validity and reliability of language testing processes. (For a recent account of the processes and issues here, see Weir 2004 in this series.) There is no doubt that evaluation processes require such epistemological and methodological underpinning – Lowe (1995) and Dornyei (2003), for example, examine in detail the issues involved in questionnaire design and completion; while Alderson and Beretta (1992) and Saville and Hawkey (2004) note that validation procedures in test design have tended to be much more extensive than in the design of other evaluation instruments. Without an understanding of the data types which are most appropriate for the different uses of evaluation, there may be a tendency to inefficient scrutiny of all practices, documents and perspectives, with constant doubts regarding the extent to which they actually evidence the success or otherwise of the program in question.

Third, many accounts of evaluation do not reach the public domain. For a range of reasons, some proper, others less so, evaluation processes and findings remain either insufficiently documented or unpublished. One outcome of this feature of evaluation is the difficulty of mapping theory and practice when some of the terrain is obscured from view. Evaluations of social programs are for the most part funded from the public purse. In addition, they involve aspects of people’s lives in a way that profiles legal and ethical issues. Thus, there are contending forces for transparency and confidentiality, which means that the issue of publishing and not publishing evaluations is difficult. The case studies and discussion in Part 2 illustrate in particular evaluation contexts both potential conflicts in, and principled resolutions to, managing accountability and anonymity in evaluation practice. In Part 3 we revisit these issues in the context of guidelines for practice in, and research into, evaluation processes.

Together, these features present difficulties, but also opportunities. In this book we bring together perspectives from published evaluations, unpublished but researched evaluations, and from the wider discourses of evaluation in particular fields. The case studies in Part 2 address issues of evaluation purpose and design; the role of evaluation in program decision-making and policy development; the roles of stakeholders in evaluations, and of evaluation in the lives of stakeholders; evaluation and learning in language programs; and evaluation as a procedure for quality management in programs, departments and institutions. In each case study, we pay particular attention to the construct of evaluation – what the data represent and how they correspond to the stated objectives on the one hand, and the wider purposes of the program on the other. In Part 3 we explore options for future development in terms of research into evaluation policy positions within programs, frameworks and guidelines for practice and methodological orientations. In addition, we examine research possibilities into the cross-cutting issues of stakeholder evaluation, ethicality and fairness, and ‘learning to do’ evaluations.

This broad-based perspective on language program evaluation as the examination of situated language programs can complement, on the one hand, the more theoretical orientations to understanding language learning in instructed settings in Applied Linguistics and, on the other, the local development of teaching skills, learning materials and other program components in schools, universities and ministries of education world-wide. To develop this analysis of the potential of evaluation, we set out five challenges. These are reflections of the features outlined above in two ways: first, they have proved enduring issues in the development and practice of evaluation in recent decades (we explore the issues here more fully in the following chapters and in Part 2); and second, they represent areas for evaluation theory and for evaluators in different program settings to engage with.
1.3 Five challenges for evaluation

There are five challenges which we see as characterising the theoretical orientation and practice of evaluation. The challenge in each case is to understand and communicate the issues involved in the following dimensions of evaluation:

1. The purpose of evaluation in its social and political context.
2. The informants who people programs and evaluations.
3. The criteria which generate evaluation frameworks, instruments and ultimately judgements.
4. The data which validate these approaches and instruments, and complete the construction of judgements.
5. The use of evaluation findings in managing social programmes.
Evaluation purpose: The challenge of evidence-based public policy development

The ideas which shape public and social policy in this period of late or postmodernity represent a shift away from ideology-driven programs derived from philosophical positions and grand theories. New perspectives on the social aspects of our human nature, such as evolutionary psychology, activity theory and game theory, give a view of individual and social behaviour which is infinitely complex. This complexity combines with reassessments of the success or appropriateness of the social projects of high modernity to generate a need to move beyond debates focused on nature/nurture, social/individual and public/private in defining and developing the role of the state, and of public programs in the lives of citizens. These debates have been characterised in Western democracies over the last decade by new syntheses of Right and Left in public sector programs relating to health care, social welfare and education, such as the Third Way (Giddens 1998). In education in particular, the task has shifted from universal provision to effectiveness for particular groups in particular settings (in England and Wales this educational debate has been characterised by the issue of ‘bog-standard comprehensives’ – should there be one national approach to the structure, resourcing and curriculum of secondary schools, or should there be a diversity of approaches as determined by local stakeholders and factors?). In Applied Linguistics, understanding second language acquisition and the teaching strategies which best facilitate this are engaging with diverse social and personal factors rather than focusing on universals of cognition: Cook (2000), Block (2003), Lantolf (2000) and Kramsch (2002), for example, explore those social and cultural dimensions of language learning which generate new perspectives on the roles of context and identity in language learning.

The focus on what works in such policy development implies a strong role for evaluation. Patton (1997: 192–4) lists 58 types of evaluation, all of which involve understanding the impact of these programs on the problems to be resolved or the situation to be improved. The unifying theme in these different purposes is their shared platform of evidence, and the ways in which it can serve to inform on programs and policies. The verbs – appraise, assess, audit, examine, monitor, review, etc. – all suggest judgements based on empirical scrutiny of the program in operation.
Concept 1.1 Evidence-based practice
Evidence-based practice has developed as an approach to understanding medical treatments and health care and, to a lesser extent, learning and schooling processes. It is seen as appropriate where program processes and outcomes are complex, variable and apparently determined by contextual and cultural factors, that is, cannot be predicted by universal theories. The focus of knowledge-building is what works, rather than why, the local rather than the universal, the practical rather than the theoretical. The focus of professional education is based on engineering rather than enlightenment principles, and training works with techniques and procedures which have been demonstrated as successful, rather than theories and ideas operationalised by practitioners. Elliott (2001) identifies three assumptions which present a challenge to evidence-based practice in education: (a) teaching and learning activities need to be justified as effective and efficient; (b) means–ends relationships need to be empirically established; (c) the means need to be specified as quality standards against which the performance of teachers can be judged. Each of these points can be seen as tasks for evaluation, whether in the realm of large-scale experimental studies, or in the area of inspection and quality management. Pawson (2002), examining the links between evidence-based policy development and realist evaluation (see chapter 3 below), proposes a ‘middle level of abstraction . . . The precise combinations will always be context dependent but research should be able to identify a range of positive, middle-range configurations’ (2002: 356).
In education in both the US and the UK, the role of evidence-based approaches is still in formation. In the US, the principles of fourth-generation evaluation (Guba and Lincoln 1989; Lincoln 2001) emphasise process factors such as negotiation and dialogue. Findings and conclusions are local and relative, so wide dissemination and use of successful practices may not be appropriate. Whereas Guba and Lincoln focus on sense-making in relation to values and experience within the program, more critical views of evaluation examine the wider context as well: ‘critical evaluation . . . considers the conditions of social regulation, unequal distribution, and power’ (Popkewitz 1990: 48). The debate on research into evidence-based teaching in the UK in the late 1990s illustrates an approach to educational enquiry and evaluation practice where the purpose is to identify successful teaching techniques and disseminate them for wider application. Hargreaves (1996; 1997) calls for more educational research which has greater relevance for and impact on educational practice than is currently the case.
Quote 1.1 Hargreaves on teachers and doctors
Teachers and doctors are pragmatic individuals. They are primarily interested in what works, in what circumstances and, only secondarily, in why it works. There are immense difficulties in disentangling what works from what does not, and even more in giving a satisfactory explanation in some scientific sense. Doctors can produce a scientific explanation for what they do in far more areas of their professional activities than teachers can. . . . Much medical decision-making proceeds on the basis of experience of what works rather than knowledge of precisely how and why it works. Indeed some medical specialities, such as anaesthetics, would be severely retarded if practice were disallowed in the absence of scientific explanation. Many clinical practices have developed on the basis of trial and error – a very old friend of human learning. . . . Much, possibly most, of what teachers do in classrooms cannot at present be given a firm explanatory grounding in social science, but they do not remain inactive until one is found. They ‘tinker’, engaging in trial and error learning, just as doctors do. (1997: 410–11; emphasis in original)
Hargreaves cites Haynes et al.’s account of how evidence-based approaches have developed clinical practice in medicine in a manner which establishes links with evaluation practice. The purpose of the enquiry is to guide policy and improve practice, as determined by effectiveness measures, rather than the conventional remit of educational research which is to explain, and indirectly inform, policy and practice. The evidence-based approach has three components: clinical expertise, patient preferences and evidence from research. These are analogous in language program evaluation to expertise from language learning and teaching theory, student satisfaction and learning outcomes as determined by tests. Haynes et al.’s three steps in implementing this approach also relate to evaluation: getting the evidence straight, developing policy from this evidence and applying the policy in the right place and time (Hargreaves 1997: 414). In language programs, these steps represent a major challenge. Beretta (1992a), in a survey of evaluations of teaching methods, noted the problem with the evidence: the construct of method was often vague and very often assumed that teachers worked exclusively with a prepared plan for lessons. Evaluations, then, are located at the intersection of professional practice, policy and management, and research into learning and instructional processes. Each of these domains presents its own inherent purposes for the evaluation. Each program presents its own epistemological mélange, which needs to be understood both at the evaluation design stage and during its implementation. The case studies in Part 2 address the challenge here: a particular focus in the analysis of evaluations is the nature of the construct in each case. This construct analysis approach is further examined in Part 3 as a strategy for enhancing evaluation quality generally, and specifying purposes in particular.
Stakeholders: The challenge of quality assurance and enhancement

The challenge of evidence-based public policy development introduced above is grounded in notions of quality: What is good? How can social programs achieve this goodness? How can we guarantee it in social programs and institutions? We note that where the answers to these questions are grounded in the experience of users and citizens rather than in ideology, the role of these users, as both communities and individuals, assumes a high profile in definitions of quality. Quality in educational provision is less characterised by the degree to which program design conforms to a theory or ideology, or to measurement of outcomes on an absolute scale, and more by a complex synthesis of process factors and relativised, value-added dimensions.
Quote 1.2 Thomas on the quality of English language teaching programs
Quality for the professional teacher means being committed to different interpretations of quality, not only to improvement, but to standards, fitness for purpose and fitness of purpose, too. This holistic view of quality needs to be embedded in any educational institution. In seeking to manage and enhance quality in our practice as teachers, educators and managers, we must commit ourselves to an ethos in the institution which encourages everyone to reflect on themselves in the context of the institution, the sector in which they work, and in the broader economic and political context. It is no longer possible to decide, as universities in Britain could do in the past, to teach what we want. To be fully professional, we must account for all of what we do, and do it with full awareness of the context. (2003: 240)
The evaluations – assessments, audits, inspections – which generate these judgements suggest a strong role for users and program participants: they are stakeholders whose experience of the program is the key to unlock the ‘black box’ of quality. In Part 2 evaluation case studies examine in detail the nature of this strong role. In the British context the efforts to develop educational processes in both schools and universities in the past decade represent operationalisations of notions of quality for evaluation purposes. In the US a similar focus on quality of experience has driven evaluation and management of university education. In contexts from Australia and the European Union, evaluation approaches have moved from value for money to best practice models, as public bodies seek to find the most effective ways of providing services ranging from garbage collection and recycling to social care for the elderly. The challenge of evaluation processes engaging with notions of quality is to capture, in a credible manner, the drivers of quality and the factors which mediate them.
Concept 1.2 Stakeholders
Weiss (1986) identifies two categories of stakeholders: (1) members of groups affected by the program, and therefore any evaluation of it; and (2) members of groups who make decisions about the future of the program. In category (2) we have policy-makers and program managers, whose stakes are likely to be in the area of resourcing and strategic management. In category (1) we have practitioners, such as teachers, and clients, such as students. Their stakes relate to modes of participation and investment. Evaluation, as a process of determining the worth of a program, has to engage with the different perspectives here. Understanding the stakes involves working not only with the program aims and objectives, but also with emerging constructs and dynamic relationships as the program is implemented. Chapter 12 below explores the notion of stakeholding further.
Crabbe (2003), exploring ways in which quality of process might be benchmarked as quality of outcomes has been in the past, identifies three domains of quality in English language programmes:

• Theoretical enquiry: learning opportunities.
• Cultural enquiry: values and roles.
• Management enquiry: operationalising and achieving quality.

This framework, which provides ‘a proactive basis for evaluation by stating the salient features of program quality from the beginning’ (Crabbe 2003: 31), becomes more complex when people and their stakes are added. Opportunities are dynamic notions: they have to be perceived as such and taken. Values are also dimensions of the programs to which the participants contribute. Operationalising and achieving quality involves engagement with participants’ and other stakeholders’ needs and expectations as the program unfolds. The notion of stakeholding in programs brings together the notions of program quality and learning. The challenge for evaluation is to understand and articulate the stakes, so that quality is about the factors which affect the taking of opportunities as well as about the provision of these.

Evaluation criteria: The challenge of linking values to benchmarks, indicators and theoretical constructs

Every evaluation is based on a framework which determines both the strategy for gathering data and the extrapolation of judgements from these. In some evaluation processes this framework is implicit or hidden, but increasingly the exigencies of openness and accountability in such processes require that criteria be clearly stated. These frameworks seek to incorporate a range of stakeholder perspectives, while at the same time reflecting the program theory – the set of axioms and assumptions on which the program is based.
There are three approaches which evaluations can take to specifying criteria for making judgements of worth about language programs:

1. Theory-based criteria derived from understandings of language learning processes. Language programs typically are based on theories from Applied Linguistics, e.g. language learning opportunities (Crabbe 2003), constructs of language form, language use or language learning processes. Examples include the Bangalore Communicational Teaching Project Evaluation (Beretta and Davies 1985; see section 2.4, p. 23 below), which sought to determine the superiority of a task-based approach to teaching English, and the evaluation of the French course book, Tour de France (Parkinson 1983), which examined the communicative qualities of learning materials in use.

2. Policy-based criteria established through professional considerations. Here evaluation of language programs is based on education policy, articulated as program quality indicators, which can include program design and resource factors, staff qualifications or process factors such as instructional strategies and learning materials. These quality indicators constitute a mandate, and the process of evaluation is a compliance check. The process here can involve ticking boxes where a minimal threshold level of compliance is evident, or a more enquiring process by practitioners into what compliance involves for activity within the program. Blue and Grundy (1996) describe such a process in the context of an English for Academic Purposes program as part of the preparation for accreditation by the British Association of Lecturers in English for Academic Purposes (BALEAP).

3. Constructivist or ethnographic approaches which seek to determine criteria through internal program sense-making. Ethnographic evaluation (Fetterman 1988) and evaluations within the constructivist tradition (Guba and Lincoln 1989) see the role of evaluation as describing the emic, the internal value system of the programme. Chapter 3 explores how evaluations over the past two decades have interpreted in a range of ways the sense-making requirement, both in terms of how effective programs are, and how they can be improved. Chapters 9 and 10 present such ethnographic studies of evaluation, exploring the social, dynamic learning context of an English for Academic Purposes (EAP) program, and the role of evaluation in its development.

While each of these perspectives provides opportunities for accessing program and evaluation constructs, they can also represent limitations. Theory-based approaches can marginalise the more dynamic, responsive dimensions of program implementation. Mandate compliance can equally limit the program construct and creativity in learning materials, classroom activities and assessments, as well as exclude appreciation of professional, context-sensitive judgements and interpretations (see section 3.6, p. 50 below). More interpretive approaches can limit links to wider discourses of quality and program development, as well as critical engagement with fidelity issues in program implementation.

The key issue here is the nature of the evaluation construct according to which criteria are elaborated, data collected and judgements made. The construct interfaces with stakeholders and their often contending interests, the cultural, professional and policy contexts within which the program and evaluation are located on the one hand, and the purposes and roles of evaluation in relation to implementation and management aspects of the program on the other. The case studies in Part 2 represent the array of evaluation constructs which can prevail. The discussion in each case examines the constructs which frame and guide the evaluation, so that both stated and less explicit criteria are set out. The challenge for evaluation practice is to identify criteria such that findings and judgements are grounded in both the experience of stakeholders, and the rationale for the programme.

Evaluation data: the challenge of reconstructing program experience

It is in relation to the role of data that evaluation shares most with conventional research. The purpose of data is to represent the experience of a given program in terms of behaviours and attitudes, such that judgements regarding its worth can be made. A number of factors have increased significantly the data which may be available to an evaluator: commitments to transparency of educational and other public programs mean that, more than ever, an extensive range of documents can be accessed. In the educational field these are likely to include program rationales and descriptions, schemes of work, classroom materials, students’ work, feedback on the program from students and teachers, etc. In addition, developments in information and communication technology mean that data on the educational process are easy to secure, from classroom recordings and interviews to emails documenting intra-program discussion of issues and decisions. Fourth-generation approaches, such as Guba and Lincoln (1989) and Lincoln (2001), would see the sense-making at the centre of the evaluation enterprise as involving all of these. The selection of informants – whether randomised and in accordance with a preordained design, or inclusive and self-selecting – remains a key issue in constructing data sets.

The preceding challenges, which call for broad-based engagement with criteria and stakeholders, serve to augment this richness of data. In practice, the challenge is the necessary limitation of scope here – to take Occam’s razor to the full set of program representations available; to identify which behaviours and attitudes need to be documented for a valid, credible set of judgements to be made. The case studies in Part 2 examine a range of approaches to data-gathering, including classroom process data, the increasing use of groups to capture key themes in the range of individual program experiences, and the use of email questionnaires in large, multi-site evaluations.
Evaluation use: the challenge of developmental evaluation

The purpose of evaluation, to complete a metaphorical loop in this opening chapter, is to have some practical effects on a given program. The impact here represents a key difference between research and evaluation: while they share goals in relation to knowledge-building and explanation, evaluation is generally considered to involve more immediate and practical use of findings. This notion of effect is usually related to some developmental impact on the program’s activities. However, it is equally important in what are considered accountability evaluations, such as audits, inspections and periodic reviews: these at the very least have the function of validating a given set of policies. There are significant strands in the literature on evaluation which document political undermining of evaluation rationality (e.g. Schick 1971; Mitchell 1992), negative attitudes towards evaluation and evaluators which generate resistance to and marginalise the findings of evaluations (e.g. Gitlin and Smyth 1989; Taut and Brauns 2003), and calls for greater use of evaluation for program improvement, leadership development and innovation management (for example, Rea-Dickins and Germaine 1992; 1998; Rea-Dickins 1994; Patton 1995). These strands of discussion suggest that, along with developments in theory and practice, there is a need across the wider context of social programs for greater understanding of and commitment to the role of evaluation in their implementation and development.

The challenge of use, then, is the challenge of new connections, and new applications of evaluation processes and their findings. We need to consider, in addition to the conventional links between evaluation and recommendations for program development, links between evaluation and management. In particular, it becomes important to identify how the processes of evaluation support and develop leadership and innovation strategies in program contexts. Second, there are links between evaluation and research – for example, how do the findings of evaluation inform the theoretical construct of a given program? Evaluations typically address situated problems, such as human behaviour in car parks (Pawson and Tilley 1997), or language learners’ attitudes to self-access centres (Reid 1995). Behaviour in these situations can be understood only in its social and organisational context, and improvements and extensions to programs only effected within the ecology and affordances of such contexts. It is thus the task of evaluation to develop situated theories which inform the development of given programs and establish a platform for the design and implementation of similar programs in related contexts.

In exploring further these features and challenges to evaluation, this book presents readers with opportunities to learn about evaluation in Applied Linguistics and language programs – to critically appraise evaluations within their cultural, ontological and methodological contexts – and to learn how to carry out evaluations: the processes of design, implementation and use.
Evaluation practice in turn becomes a form of enquiry – a context for theory-building – so that we (collected language program professionals) better understand what makes for good programs in different contexts. The chapters in Part 1 explore the historical context of educational and language program evaluation in relation to these broad themes. The case studies in Part 2 provide opportunities to examine these in evaluation contexts, and in Parts 3 (guidelines for evaluation practice and research) and 4 (resources) we discuss their implications for the development of evaluation theory and practice.
1.4 Summary

The task in this chapter has been to map the territory. We have set out three features of language program evaluation, illustrating the diverse activities in the field, the nature of evaluation research, and issues of access to evaluation reports. We have set out five challenges which illustrate both the complexities and opportunities in the evaluation task. These challenges relate to clarifying evaluation purpose, engagement with stakeholders, establishing criteria for judgements, identification of appropriate evaluation data and ensuring evaluation use. These challenges represent themes explored further in the case studies in Part 2, guidelines for evaluation practice and research in Parts 3 and 4, and, we hope, platforms for language program practitioners and evaluators everywhere to engage in critical debate.
2 Historical Perspectives: Focus on Design and Method
2.1 Introduction

In chapter 1 we outlined five challenges for the development of evaluation theory and practice in language education. These challenges derive from both current perspectives on evaluation – the need to understand language education processes in program contexts, the demands of effective program management in areas such as quality enhancement, accountability and transparency, and the involvement of stakeholders – and also from the tradition of theory-building and practice in evaluation over recent decades. This chapter begins our account of these traditions by outlining the early phases of evaluation theory and practice in social programs generally and in education in particular. The focus is on those conceptual strands that have endured across time and inform and have the potential to develop current theory and practice.

In this chapter we provide a chronological overview of a period when the focus in educational evaluation was on research design, data collection methods and clear evaluation outcomes for decision-making and other judgements. In the following chapters of Part 1 we continue the historical perspective. In chapter 3, we examine through a thematic lens enduring conceptual developments that are both part of the evolution of evaluation over recent decades, and that still, in the context of changing perceptions of and roles for evaluation, inform innovative practice. In chapter 4 we focus on the strands in the story of evaluation which are particular to language education and Applied Linguistics more generally. We explore the role of evaluation in language learning theory, language education policy and professional practice in teaching and training contexts.

This brief history is intended to complement rather than replace other surveys. We deal with the chronology, as do Guba and Lincoln (1989) and Pawson and Tilley (1997), but with less emphasis on generations or pendulum swings: we see the history of the development of ideas as a cumulative layering, where particular approaches which may become discredited and unfashionable still endure, informing mainstream practice while not being part of the cutting edge. We consider trends in educational evaluation in both the United States and Britain, as does Norris (1990), but relate these to other policy areas, and specifically to program evaluation in language education and Applied Linguistics. We examine the practice of evaluation through landmark evaluations of language programs, as do Rea-Dickins and Germaine (1992) and Weir and Roberts (1994), but also relate these to wider discourses in evaluations in education generally and across the social sciences.
2.2 Early judgements

As Gitlin and Smyth (1989) illustrate in their account of teacher evaluation in the United States, Great Britain and Australia during the nineteenth century, concerns for evaluation were evident from the start of public education systems. In these industrialising countries, measuring effectiveness accompanied the funding of schooling and education from the public purse. Evaluation processes were initially established through inspection procedures. From the outset, two approaches were evident. First, evaluation of the curriculum was implemented through scrutiny of the competence and behaviour of the teacher. This had a strong ideological dimension: in the US and Britain teachers were expected to exemplify a respect for authority on the one hand, and a religious and moral lifestyle in the prevailing Christian ethos on the other. This ‘control’ of the teacher type was for the purpose of determining the quality of the curriculum: it was assumed that a teacher whose lifestyle exemplified idealistic, Christian principles would construct an appropriate learning experience in the classroom. Second, there was a concern for effectiveness, especially towards the end of the nineteenth century. The indicators here were retention rates, i.e. the number of children attending school, and learning outcomes, i.e. test results. Gitlin and Smyth argue that these values underpinning early evaluation processes in education have given direction and shape to evaluation theory and practice during the twentieth century. The enduring legacy of these processes is a discourse of evaluation, which is about oversight and control as essential elements of an effective schooling and learning system.
Quote 2.1 Gitlin and Smyth on evaluation, teachers and technical rationality

The most common forms of teacher evaluation have been shaped by assumptions about purpose – the need for social control, the reliance on technical rationality and restricted notions of teaching – all of which depend upon narrow authoritarian views. These assumptions both reflect and serve conservative interests by allowing questions about the rightness of educational aims to be obscured, while giving legitimacy to the forms themselves by relying on the often unquestioned faith in science, narrowly understood. (1989: 25–6)
Two dimensions of this early phase of evaluation reflect what are in many ways opposing concerns: subjective values at the heart of the curriculum on the one hand, and scientific, objective measures on the other. The possibility of a good learning experience derives in large part from the personal qualities and values of teachers and other practitioners. The means of determining such an experience, however, focus on depersonalised and decontextualised outcome measures. These subjective and objective dimensions represent key strands in the development of evaluation as described in the following chapters. The purpose of evaluation – to assist program management so that quality processes are assured and high standards of learning are achieved – and the dominant method – external inspection – are still central features of evaluation practice in education generally (Norris 1998) and in language programs (Thomas 2003). They have, however, been augmented and enhanced by a range of philosophical and technological developments, as outlined below.
2.3 Early evaluations The twentieth century saw extensive development in publicly funded and managed social programs in the fields of education and schooling; health care; social welfare; and policing and penal services. These areas share two key factors: first, they are the areas of state provision which directly affect individuals as users and also as providers of services. Second, they are the major areas of spending from the public purse, and funded programs are expected to contribute to the public good. Together, these factors have promoted program evaluation. They are linked to two axioms of the modern democratic state: the requirement to treat citizens with propriety and fairness; and the requirement to spend public funds wisely and account transparently for this spending. In education, these requirements generate an architecture of interests, rights and responsibilities in the management of curricula, institutions and stakeholders which make evaluation an essential but complex activity (Norris 1990; Kushner 1996). In addition to these philosophical and political factors, the role and practice of evaluation have been shaped by the increasing complexity of the management task within organisations through the twentieth century. As work in general moved from a craft environment to a more industrial production line, so did the need for systematic, reliable information on the various human contributions to the production task (Simon 1976; 1977). The processes of generating and using this information are normally viewed as part of the field of management, but in many ways underpin the development of the notion of program and program evaluation through the twentieth century. The classical approach to management, to getting a complex task done, framed what has become an enduring notion of program. Theorists such as Taylor (1947), Fayol (1952) and Urwick (1952) set out principles for organising work which programmed a series of tasks rationally and established
structures for co-ordination and control. These programs constructed tasks undertaken in a manner which could be studied, measured and ultimately compared and improved. The development and evaluation of programs within this framework took place in the fields of industry and engineering (Taylor’s approach was developed in a steel mill) and was extended to social, publicly funded programs as the principal means of managing these programs during the second half of the twentieth century. In the US, the Great Society reforms of the 1960s generated both schooling and educational programs and a requirement for the systematic evaluation of these (Beretta 1992a). The scientific management view of programs – a behavioural description of the task, a systematic approach to measuring these behaviours, and established targets to achieve – proved a convenient and enduring starting point. A seminal application of these principles of program design and evaluation in the field of education was developed by Ralph Tyler in the United States in the post-1945 period. He set out a framework for evaluation which focused on program design: the key to understanding how successful a program had been was the detailed specification of purposes, processes and, especially, the outcome measures which informed on these.
Concept 2.1
Tyler’s framework for curriculum evaluation
What educational purposes should the school seek to attain? What educational experiences can be provided that are likely to attain these purposes? How can these educational experiences be effectively organised? How can we determine whether these purposes are being attained? (1950: 48)
This framework illustrates the centrality of evaluation in the management of curricular processes: the answer to the final question requires a measure, and objectives must be framed in the language of such measures. The operationalisation of the Tylerian framework had two major features: on the one hand, the specification of objectives in terms of learning outcomes and the measurement of these; on the other, the specification of teacher behaviours and their measurement. In addition to what we identify as the ‘scientific management’ of the curriculum, evaluation was also seen as having a central role in the management and development of curriculum innovations. Hilda Taba’s observation on the role of evaluation as an essential empirical perspective in the development of educational programs, set out in Quote 2.2, illustrates this. Evaluation cannot be a mere fidelity check on the implementation of a detailed specification: it must also document the unspecified and unanticipated impacts of the program on teaching and learning. The
dynamic notion of curriculum and program implicit in this view has proved a platform for re-conceptualising evaluation – for example, it has informed the evaluation discourse in development contexts in the 1990s (Nuttall 1991; McKay and Treffgarne 1999).
Quote 2.2
Taba on evaluation
Many curriculum innovations are introduced on little more than hunches. . . . These matters cannot be settled by philosophical arguments alone. One needs to determine what changes these innovations actually produce and what effects they have on the total pattern of educational outcomes. Innovations introduced for a certain limited purpose too often produce other undesirable results. For example, a school which was greatly concerned with the development of scientific objectivity and critical thinking had stressed the reliable and dependable materials of unquestionable objectivity. After administering a battery of tests on thinking, the staff discovered to its amazement that the students were highly gullible. They had a tendency to accept as true almost anything in print because they had no opportunity to compare poor and good sources. An exclusive diet of excellent and dependable ideas cultivated an unquestioning attitude. Evaluation thus serves not only to check the hypothesis on which the curriculum is based but also to uncover the broader effects of a program which may serve its central purpose well, but may, at the same time, produce undesirable by-products. (1962: 314–15)
Taba’s perspective illustrates four conceptual dimensions of curriculum and program evaluation in the US in this period:
1. The value of particular educational strategies cannot be determined by theory; they need to be empirically validated through evaluation procedures.
2. Evaluation proceeds by identifying behaviours which represent the curriculum strategy, measuring them and developing comparisons with a parallel context where the strategy is not used.
3. The evaluation process has to take a broad, open view, documenting curriculum processes so that unanticipated data can be interpreted and understood.
4. This evaluation process has the potential to improve the program at the level of understanding the curriculum theory, as well as identifying successful and unsuccessful aspects of the practices which make up the program.
Points 1 and 2 reflect the demands of external perspectives on educational processes, relating to both managerial accountabilities and theoretical understandings. Points 3 and 4 focus on internal views of the program, engaging with the complexities of practice and the evolutionary nature of programs. The notion of evaluation examining both the predicted and the
unexpected has also proved an enduring one. Taba is highlighting different divides in orientations and interests here:
Managerial accountability versus curriculum theory and practice
External evaluation versus internal evaluation
Predicted outcomes versus total pattern of outcomes
Reconciling these divides became a major task for evaluation theorists and practitioners in evaluation during the following decades, in language education as in general educational programs and innovations. Whilst the development of educational evaluation in the US was largely shaped by the engineering model illustrated above in the work of Tyler and Taba, the situation in Britain and elsewhere in Europe was characterised by a greater emphasis on psychometric research (Norris 1990). This focused on narrow learning questions on the one hand, and local school inspection regimes on the other. Understanding learning was the task of academia and specialised research institutes, accounting for the effectiveness of schooling processes the task of government school inspectors, and managing and improving these processes, the task of local education authorities. The division of labour here rendered the task of program evaluation particularly complex. The fate of an early evaluation – the Nuffield Foundation’s Primary French evaluation – illustrates this. The Primary French curriculum evaluation in Britain, a longitudinal (1964–74) comparative study, investigated issues of learning, teaching resources and wider policy (Burstall et al. 1974). The aim of the evaluation was to determine if starting French as a foreign language in British primary schools would contribute to more successful learning in the secondary school. The purpose, therefore, was to inform on the use of educational resources and improve the curricular policies which shaped primary schooling. The study concluded that there were only limited benefits to Primary French, and the policies promoting foreign languages in the primary curriculum were duly changed: foreign language learning started in secondary schools. Although this might be considered appropriate attention to, and use of, evaluation findings, many stakeholders (Bennet 1975; Buckby 1976), including educationists and evaluators of a liberal, progressive educational philosophy, considered the change a regressive one: the stark answers to the evaluation questions posed had a negative impact on the development of the program. Buckby (1976) describes how the range of interests and stakeholders involved in this program and its evaluation contributed to a lack of ownership and engagement with findings: early negative findings sapped energy and resources from the program, and allowed the scepticism towards learning foreign languages in the wider society to prevail over more progressive voices. Norris (1990: 32–3) concludes that a strong role for this type of evaluation was
seen as ‘threatening teacher autonomy and eroding the responsibility of local government’. Program development was furthered by local curriculum reform projects rather than national evaluations, and the key information for decision-making related to curriculum process rather than outcomes (Norris 1990: 38). This example profiles the perennial conflict in evaluation between ideological commitment and empirical evidence on the one hand, and between specific evaluation findings and the range of factors which inform policy decision-making on the other. Chapter 7 describes a language education project evaluation in Hong Kong a decade later, where teacher and media opposition to the program and the strategy of innovation management similarly shaped outcomes. Chapter 8 describes a recent evaluation of primary-level foreign language teaching in Ireland, where, in a different policy development environment, the evaluation had a different policymaking impact. The following sections explore the developing role of empiricism in understanding educational processes and the emerging sense of the complexity of educational decision-making. These concerns focused attention on evaluation methodology. Influenced, on the one hand, by research-oriented accounts of cognitions and learning, and on the other, by policy issues related to social justice and educational effectiveness, the quantitative experimental paradigm emerged as the dominant methodology.
2.4 Evaluation as measurement and comparison
Stufflebeam, Foley, Gephart, Hammond, Merriman and Provus defined educational evaluation as ‘the process of delineating, obtaining and providing useful information for judging decision alternatives’ (1971: 43). The twin dimensions of this definition illustrate (1) the nature of the data involved – quantified measures of program or curricular outcomes; and (2) the way these are used, for the purpose of comparing two different policy options or pedagogical strategies. Beretta (1992a) ascribes the dominance of these features of program evaluation to the wider socio-political situation in the United States in the 1960s. The need to compete technologically with the USSR (which was perceived as more advanced in the flagship arena of space exploration), and the desire to reduce social disadvantage, together generated a substantial increase in innovative educational programs. The transparency requirements of these public programs made evaluation an integral part of both their design and implementation.
Concept 2.2
Experimental design
An experimental evaluation tests the effectiveness of a particular strategy or intervention by comparing two groups: an experimental group which experience the strategy or intervention; and a control group which have the normal educational experience.
The intervention is described as an independent variable whose impact can be measured, for example, by test results or questionnaire findings. The evaluation seeks to establish an effect which is statistically significant, that is, the difference observed between the findings of the experimental group and the control group is greater than that which might occur naturally. Key quality criteria of experimental evaluations are internal validity – the extent to which the effect measured is caused by the intervention – and external validity – the extent to which the findings can be generalised to the wider population. Detailed accounts of the experimental approach can be found in research and evaluation methods handbooks, such as Cohen, Manion and Morrison (2001) and Lynch (1996). Critical reviews of a reliance on this form of evaluation are set out in Guba and Lincoln (1989) and Pawson and Tilley (1997). Chapter 7 below describes a language program evaluation which had a substantial experimental dimension.
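To make the comparison logic summarised in Concept 2.2 concrete, the sketch below shows how a contrast between experimental and control groups might be computed. It is a minimal illustration only, written in Python and assuming the scipy library is available; the test scores are fabricated and are not drawn from any study discussed in this chapter.

```python
# Illustrative sketch: comparing an experimental and a control group.
# All scores are invented; scipy is assumed to be installed.
from scipy import stats

# Hypothetical end-of-course test scores
experimental = [68, 72, 75, 70, 80, 77, 74, 69, 73, 78]  # received the intervention
control = [65, 70, 66, 71, 68, 72, 67, 69, 70, 66]       # normal educational experience

# Welch's t-test: is the observed difference greater than might occur naturally?
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)

mean_diff = sum(experimental) / len(experimental) - sum(control) / len(control)
print(f"mean difference: {mean_diff:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The difference could plausibly have occurred by chance.")
```

Such a calculation addresses significance only in the narrow statistical sense; as the rest of this chapter shows, the harder questions concern whether the treatment was delivered as specified and whether the findings travel to other contexts.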
We identify three factors which have contributed to the dominance of the experimental, outcomes approach:
1. As a method of enquiry, it has been tried and tested in a range of scientific research contexts. It is still the mainstay of medical research and the evaluation of new interventions and therapies where it has external validity; the findings can safely be generalised to contexts and populations other than those involved in the studies.
2. It had the potential to produce a clear, objective comparison supported by quantitative data, which have the credibility and ethicality to sustain unpopular judgements and decisions, i.e. to go against the grain of popular beliefs and expectations.
3. It focused program implementation on fidelity to the original blueprint. Teachers and other practitioners would not deviate from or alter the pedagogical strategies of the program, and curriculum theorists and policy-makers could be confident that the findings had internal validity, i.e. that they were measures of the pedagogical strategy in question.
The application of this approach to evaluation is illustrated in early studies of foreign language teaching in the United States. The evaluations of both the Colorado Project (Scherer and Wertheimer 1964) and the Pennsylvania Project (Smith 1970) sought to show ‘scientifically’ the merits of audio-lingual and cognitive code methods in foreign language classrooms. The evaluation some 20 years later of the Bangalore Communicational Teaching Project (CTP) also represents an example of a study designed to make a definitive judgement between alternatives: Prabhu’s task-based ‘communicational method’ and
the ‘Indian version of the structural method’ (Beretta and Davies 1985: 122). The Bangalore Project and its evaluation had a wide resonance in ELT and Applied Linguistics. It had relevance for the development of second language acquisition theory as well as for Communicative Language Teaching pedagogy. It was sponsored by the British Council, and visited by influential curriculum theorists (for example, Chris Brumfit and Keith Johnson). In addition to the published evaluation (Beretta and Davies 1985) and commentaries on it (Beretta 1989b; 1990; 1992a; 1992b), a number of publications explored its implications for ELT theory and practice and second language acquisition research (Brumfit 1984; Greenwood 1985; Beretta 1986a; 1986b; 1986c; 1987; 1989a; 1992b). It is the implications for evaluation theory and practice which are of particular interest here. While the purpose underpinning the comparison was to facilitate decision alternatives with regard to teaching methods, the results ‘constitute a “probe” of the central CTP hypothesis, but not “proof” ’ (Beretta and Davies 1985: 126). They give three reasons for this less than definitive conclusion:
1. Full experimental control was not possible because of variations in the ways teachers understood and implemented Prabhu’s language learning tasks.
2. There was potential for bias in the test construction, i.e. the tests might have favoured one or other group of students.
3. The experimental group of students had been exposed to the CTP treatment for only three years.
These factors represent recurrent features of curriculum innovations and evaluations carried out in real classrooms and schools rather than in laboratory settings, that is, where synthetic teaching contexts are constructed for the evaluation of pedagogical strategies. In the latter it may be possible to achieve a level of control of variables which can ensure agreement on findings. In real classrooms, however, the natural variability in the interactions which constitute teaching means that questions about the similarity of treatments are likely to arise. Experimental evaluations also assume a correspondence between teaching method and learning opportunity. Beretta (1992a) describes a series of such teaching method evaluations in the US which were inconclusive, because this correspondence was insufficiently theorised, that is, it was unclear how language learning should result from specific pedagogical action. Experimental evaluations such as the Bangalore study often address separate theory and policy questions. The theoretical dimension relates to a psycho-linguistically-based SLA research agenda, while the policy questions relate to how language teaching should be conducted, how materials and tasks should be designed, and how teachers should be trained. The former, firmly located in the psychometric tradition, requires data which inform on
cognitive processes, while the latter, the more socially mediated context of teaching, is unlikely to be represented with validity by such narrowly constructed data. Beretta, one of the CTP evaluators, coined the phrase ‘program-fairness’ to describe this intersection of abstract cognitive processes and real-world schools and classrooms. His assessment of the Colorado project (Scherer and Wertheimer 1964) describes the more general challenge of experimental control in real classrooms.
Quote 2.3
Beretta on program-fairness
Scherer and Wertheimer (1964) felt able to predict a ‘rigidly controlled large-scale experiment which would yield clear-cut data’. This was clearly an untenable aim for a study which was to compare audiolingual teaching with cognitive-code, in which the treatments could only be vaguely described and were extremely vaguely monitored, in which neither student nor teacher variables could be controlled, in which all manner of real-world accidents could occur (and did), and in which the testing, in my view at least, could never be programme-fair. (1992a: 8)
In many ways the notion of program-fairness corresponds to internal validity – the accuracy of the representation of the pedagogy or innovation provided by the data. In addition, it suggests that real-world classrooms are more than a set of variables that can be controlled, and that evaluation needs to engage with their uniqueness. This view of the complexity of classroom and teaching processes makes total reliance on measurement and comparison evaluations extremely difficult. However, as we shall see in the case studies in Part 2, the experimental approach is still used, usually as part of a more comprehensive strategy to understand a program or innovation. One reaction to the perceived shortcomings of experimental approaches was an approach to evaluation which works with single programs or cases, and focuses on their worth and development.
2.5 Evaluation as a focus on worth and development
The 1970s saw the development of a range of approaches to evaluation as the purpose moved from comparison and decision-making between alternatives, as in the Colorado, Pennsylvania and CTP evaluations, to a focus on determination of worth and program development. This change in purpose is illustrated by the characterisations of evaluation in Worthen and Sanders (1973) and Popham (1975), which focused on worth, an inclusive term which sought to capture both specific, predicted outcomes and unanticipated impact on the educational context.
Quote 2.4 Worthen and Sanders, and Popham, on focus on worth in evaluation Evaluation is the determination of the worth of a thing. It includes obtaining information for use in judging the worth of a programme, product, procedure, or object, or the potential utility of alternative approaches designed to attain specific objectives. (1973: 19) Systematic educational evaluation consists of a formal assessment of the worth of educational phenomena. (1975: 28)
In the American educational context, a range of evaluation strategies was elaborated in order to provide an external perspective on program worth. These include measurement of static characteristics (Worthen and Sanders 1973), such as the resources (both human and material) available to a program which might indicate its quality. Brown (1989), writing from a language program perspective, links this approach to procedures for institutional accreditation and the inspection and audit traditions (see section 3.6, p. 50 below). In the British context, a range of agencies carry out such reviews of educational institutions of different types (Norris 1998). The case studies in chapters 9 and 10 discuss evaluations designed in part to interface with the inspection regime for programs in Higher Education in England. In English Language Teaching, the British Council schemes for accrediting language teaching institutions routinely use such data. Pennington and Young (1989) describe how a similar approach has been used for faculty evaluation in the US. Where the focus is on the static, what is missed is the dynamic, the learning in an educational program. A number of process-oriented approaches were developed in order to understand (and focus the attention of program participants on) aspects of programs other than their static characteristics. These focus on the dynamic implementation phase of programs, providing a description of the program as well as judgements of worth. Scriven (1967) set out a procedure for goal-free evaluation, which looked broadly at a program as an evolving social construct rather than as a set of Tylerian objectives. Stake developed a countenance model which constituted a comprehensive approach to understanding educational programs. It begins with a rationale, then focuses on descriptive operations (intents and observations), and ends with judgemental operations (standards and judgements) at three different levels: antecedents (prior conditions), transactions (interactions between participants) and outcomes (as in traditional goals but also broader in the sense of transfer to real life). (Brown 1989: 226–7)
The three levels represented in the first column in Concept Box 2.3 are 1) the starting point, 2) program process and 3) outcomes. Together, these provide a means for assessing the added value of a program. The focus in columns 2 and 3 provides a distinction between the description process (data-gathering and analysis) and the interpretation process (drawing conclusions which inform policy and recommendations for the future of the program). In providing an evaluation framework which attends to baseline assessment, added value, interactions within the program (including the role of the evaluator as advocate for the program) and wider impact, Stake established the agenda for evaluation development over the next three decades.
Concept 2.3
Countenance evaluation
Column 1 – Levels; Column 2 – Descriptive operations (intents and observations); Column 3 – Judgemental operations (standards and judgements)
Level 1: Antecedents (prior conditions)
Level 2: Transactions (interactions between participants)
Level 3: Outcomes (test results and wider impact)
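One way to read the countenance framework is as a simple data structure in which, for each level, descriptive operations (intents and observations) sit alongside judgemental operations (standards and judgements). The sketch below is purely illustrative: the keys follow the labels in Concept 2.3, but every entry is invented for a hypothetical language program and is not taken from Stake or Brown.

```python
# Illustrative sketch: Stake's countenance matrix recorded as a nested dict.
# Keys follow Concept 2.3; all example entries are hypothetical.
countenance_matrix = {
    "antecedents": {   # Level 1: prior conditions
        "intents": "learners expected to arrive with basic literacy",
        "observations": "wide variation in entry literacy recorded at baseline",
        "standards": "entry profile assumed in the program design",
        "judgements": "baseline conditions weaker than the design assumed",
    },
    "transactions": {  # Level 2: interactions between participants
        "intents": "task-based group work planned for every lesson",
        "observations": "group work seen in roughly half of observed lessons",
        "standards": "fidelity to the planned pedagogy",
        "judgements": "partial implementation, shaped by class size",
    },
    "outcomes": {      # Level 3: test results and wider impact
        "intents": "improved end-of-year test scores",
        "observations": "modest score gains; greater learner confidence reported",
        "standards": "target gains set by the program sponsors",
        "judgements": "added value evident, though below the stated target",
    },
}

for level, cells in countenance_matrix.items():
    print(level.upper())
    for operation, entry in cells.items():
        print(f"  {operation:<13} {entry}")
```

Laying the matrix out in this way makes the distinction drawn above between description (column 2) and judgement (column 3) visible for each level of the program.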
A key assumption in Stake’s approach is that real-world education contexts, particularly those where interventionist programs are considered appropriate, are not ideal teaching and learning contexts. The evaluation task is to understand the contribution a program can make in the context of environmental, social and professional constraints, when a predicted outcomes approach would miss important benefits. Stake (1995) illustrates such a feature in a case study of an inner city school in New York. Language program evaluations, such as those described in chapters 5 and 7, are frequently initiated in difficult circumstances. Accounts of language program innovations such as Kennedy’s (1988) and Holliday’s (1992) describe how, when the context of innovation was insufficiently understood, the task of baseline assessment became, quite late in the day, part of the evaluation. In the 1970s, a number of other evaluation technologies were developed, addressing in different ways the issues Stake identified. The CIPP (Context, Input, Process and Product) approach presented by Stufflebeam et al. (1971) sought to document fully the implementation aspects of the program, maintaining
an external perspective which would inform policy development and decision-making relevant to program contexts beyond that evaluated. The framework specified four areas for scrutiny: program objectives, resources, implementation and outcomes. The focus on program implementation and internal perspectives was emphasised in the adversary approach (Beretta 1992a; Wolf 1995) where evaluators and participants assemble data to argue for their view of the value of a program and its development. Eisner (1977) elaborated the educational connoisseurship approach, which sought to determine the value of a program through a rich impressionistic narrative rather than through sets of quantitative data which measured attainment of behavioural objectives. This approach is evident in evaluations of overseas projects (Alderson 1992; and see section 4.2, p. 56 below), and also in ethnographic approaches to evaluation (Fetterman 1988; Kiely 2000). The CSE (Center for the Study of Evaluation at the University of California, Los Angeles; see Part 4) set out a similar, five-element approach, one element of which was program improvement (Brown 1989). Provus’s discrepancy model took a similar view, but focused on discrepancies between objectives and outcomes, and the measures appropriate for reconciling these – either program improvement or adjustment to standards (Provus 1971). These innovations in evaluation in the 1970s represent in many ways a search for a coherent alternative to the experimental approach in the evaluation of educational and other social programs. The range of named or branded approaches elaborated represents in part a real diversity of ideas in a context of increasing evaluation activity carried out by creative and committed evaluators, and in part a shared search for a coherent perspective on education programs which relied less on principles of industrial engineering and classical management, and more on their key social and interpersonal characteristics. In this enterprise, the development of program evaluation was interfacing with an enduring approach in management – soft systems methodology (Checkland and Scholes 1999). Soft systems methodology (SSM) is a management approach to problem-solving, management of change and development in organisations which are characterised by human activity rather than industrial processes. The acronym CATWOE illustrates the perspective on situated human activity developed by Checkland and Scholes (1999), and captures something of the evaluation task described above: the key to understanding program worth, or ‘root definition’, in SSM requires an articulated and shared understanding of what the program or organisation is for. Particularly interesting in the CATWOE approach are the three groups of people – clients (C), actors (A) and owners (O) – a feature which resonates with current interest in stakeholding in evaluation (see chapter 12). Many of the evaluations in Part 2 illustrate both the importance and challenge of understanding evaluation contexts as soft systems, where the key drivers of activity are interaction and engagement, rather than structures and planned operations.
Concept 2.4
Soft Systems Methodology
The CATWOE approach to analysis requires ongoing engagement with six aspects of the organisation’s activity. The process of mapping the dynamic links between these aspects of the situation contributes to a rich picture and root definition. This process, like an evaluation, contributes to enhanced understanding of the complexity of the task in hand, greater effectiveness in operations, and increased capacity to deal with change and new tasks.
Clients – users, purchasers, and beneficiaries of the service or program.
Actors – providers of services, whose tasks include interpretation and regulation.
Transformations – processes which relate to the purpose of the program or organisation, similar to the transactions in Stake’s Countenance Model, and representing learning and teaching in educational programs.
Weltanschauung – world view, the wider discourses which inform or determine the transformational processes which drive human activity.
Owners – the third stakeholder group, often not directly involved in transformations, but with ultimate control.
Environment – the context of operations, which is a network of factors, some of which are supportive of transformations, but many of which will represent constraints in some way.
A CATWOE analysis leads to a conceptual model of the organisation’s activity. This model provides a basis for monitoring and assessing the three Es: Efficacy, Efficiency and Effectiveness. The case studies in Part 2, particularly in chapters 9 and 10 which explore the role of evaluation in quality management processes in language programs, reflect the relevance of SSM to understanding the complexities of such programs.
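The CATWOE elements can likewise be treated as a small record to be drafted and revisited as an evaluation proceeds. The sketch below is illustrative only: the class name and all example values are invented for a hypothetical language program and are not taken from Checkland and Scholes.

```python
# Illustrative sketch: the six CATWOE elements as a simple record.
# All example values are hypothetical.
from dataclasses import dataclass

@dataclass
class CatwoeAnalysis:
    clients: str          # C - users, purchasers and beneficiaries
    actors: str           # A - providers who interpret and regulate the program
    transformations: str  # T - the learning and teaching the program exists to bring about
    weltanschauung: str   # W - the world view that makes the transformation meaningful
    owners: str           # O - the stakeholder group with ultimate control
    environment: str      # E - supports and constraints in the context of operations

example = CatwoeAnalysis(
    clients="secondary school learners of English and their parents",
    actors="class teachers, teacher trainers and materials writers",
    transformations="learners with limited English become confident users of the language",
    weltanschauung="English proficiency widens educational and employment opportunity",
    owners="the ministry of education and the funding agency",
    environment="large classes, examination pressure, limited teaching resources",
)

# A draft root definition can be assembled from the record:
print(f"Root definition: a system, owned by {example.owners}, in which "
      f"{example.actors} work so that {example.transformations}, "
      f"because {example.weltanschauung}.")
```

Completing and updating such a record over the life of a program is one way of keeping the rich picture and root definition, and hence the three Es, open to scrutiny.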
By the end of the 1970s the creative thinking in program management and evaluation, and the increased awareness of the complexity of such programs, created a new requirement: a means of determining evaluation standards. The same thinking that promoted the evaluation of publicly funded innovative social and educational programs led to a sense that innovative evaluations should face similar scrutiny. In the US the Joint Committee on Standards for Educational Evaluation drew together the key, shared elements of the preceding period.
2.6 Evaluation standards
The development of evaluation theory and practice in the US in the 1960s and 1970s illustrates the search for a technology to provide reliable information on the effectiveness and efficiency of innovative social programs. The various models show different emphases in terms of evaluation purpose, the roles of program participants in the evaluation, the methods used to collect and analyse data, and strategies to enhance dissemination and use of findings. Evaluations had in effect become programs too, and there was a need for an approach or procedure to determine their worth. Commissioners of evaluations
and other stakeholders needed assurance that an evaluation was designed and implemented to a required standard. The evaluation community established a joint committee based at the Center for the Study of Evaluation at the University of California, Los Angeles to develop a way of evaluating evaluations (see Part 4). The various innovative strands in evaluation were drawn together in the form of standards: 30 criteria in four categories, stated in order of priority as utility, feasibility, propriety and accuracy (Stufflebeam and Webster 1980; Joint Committee on Standards for Educational Evaluation 1981; Weir and Roberts 1994). The Committee reviewed the standards in 1994, but made only minor changes. Patton (1997) comments on the significance of this orientation in the development of evaluation.
Quote 2.5
Patton on evaluation standards
Taking the standards seriously has meant looking at the world quite differently. Unlike the traditionally aloof stance of basic researchers, evaluators are challenged to take responsibility for use. No more can we play the game of ‘blame the resistant decision maker’. Implementation of a utility-focussed, feasibility-conscious, propriety-oriented, and accuracy-based evaluation requires situational responsiveness, methodological flexibility, multiple evaluator roles, political sophistication, and substantial doses of creativity, all elements of utilization-focussed evaluation. (1997: 17; bold in original)
The focus of the Joint Committee and key members like Patton on utility and responsibility for use established what is still an issue for evaluation theory and practice (see Part 3). A less radical statement of evaluation standards was set out by Harlen and Elliott as criteria for evaluations in the British educational context. These focused on the decision-making purpose and the methodological dimensions of the evaluation (Harlen and Elliott 1982; Weir and Roberts 1994). The focus on decision-making reflects a link to policy in program and innovation management rather than ‘serving the practical information needs of intended users’ (Patton 1997: 18). The development of standards for evaluations was generally intended for external evaluations, commissioned and designed to inform on the worth of innovative programs. As discussed in chapter 4, these developments proved influential for a range of evaluations in language education – for example, the development of the notion of participatory evaluation in the Brazilian ESP project (Celani et al. 1988; Alderson and Scott 1992), and the evaluation activity carried out collaboratively in the British Council-sponsored Project Development Support Scheme (PRODESS) in Eastern and Central Europe in the 1990s (Kiely and Reid 1996). Despite these instances of participation and collaboration in evaluation practice, much evaluation activity did not connect effectively with domains
of practice. Too often evaluation was seen as monitoring and accountability in a way which lost the benefits for practice. Two developments in educational evaluation in Britain proved more radical and more fundamental than the processes which led to the elaboration of agreed standards. These are Parlett and Hamilton’s (1972) illuminative evaluation for programs, a response to the dominant psychometric research approach described in section 2.3, p. 19 above, and Stenhouse’s (1975) perspective on curriculum evaluation, developed within what Norris (1990) describes as the Curriculum Reform approach (see also section 2.3, p. 19).
2.7 New theoretical perspectives
In 1972 Parlett and Hamilton set out their approach to the evaluation of innovative educational programs based on the anthropological research paradigm rather than the conventional experimental and psychometric traditions. They set out five major critiques of the dominant ‘agricultural-botany evaluation’ paradigm: an evaluation strategy which ‘utilises a hypothetico-deductive methodology derived from the experimental and mental-testing traditions in psychology’:
1. The nature of educational programs means that a randomised, representative sample of key parameters is not feasible, particularly at an early stage of the innovative program when evaluation is required.
2. Before and after research designs are premised on the notion that there is no evolution of circumstances or thinking within the program.
3. The requirement to use quantified, objective research methods leads to the neglect of data considered subjective, anecdotal, or impressionistic which might have the potential to provide explanations for findings.
4. There is no possibility to consider local factors: atypical results and deviant cases are marginalised and lost to discussion.
5. The focus on objective truth means that the concerns of participants in the program are not addressed by the evaluation. (Parlett and Hamilton 1972: 59–60)
Quote 2.6
Parlett and Hamilton on illuminative evaluation
Illuminative evaluation takes account of the wider contexts in which educational programs function. Its primary concern is with description and interpretation rather than measurement and prediction. It stands unambiguously within the alternative anthropological paradigm. The aims of the illuminative evaluation are to study the innovatory program: how it operates; how it is influenced by the various school situations in which it is applied; what those directly concerned regard as its advantages and
disadvantages; and how students’ intellectual tasks and academic experiences are most affected. It aims to discover and document what it is like to be participating in the scheme, whether as teacher or pupil; and, in addition, to discern and discuss the innovation’s most significant features, recurring concomitants and critical processes. (1972: 60–1)
Illuminative evaluation takes two features as axiomatic: a focus on the implemented curriculum, or instructional system, rather than the planned curriculum; and a focus on the learning milieu which represents a network or nexus of cultural, social, institutional and psychological variables. The notion of illumination has proved a particularly generative one in language program evaluation. It provides for engagement with the complexities of learning, an important theoretical hinterland for language education programs. Evaluations such as the CTP evaluation (see section 2.4, p. 23 above), the foreign language learning programs reported in Mitchell (1989) and many of the studies reported in Alderson and Beretta (1992) and Rea-Dickins and Germaine (1998) sought to inform on the language learning research agenda as well as determine the worth of specific programs. Illuminative evaluation also engaged directly with notions of agency and mediation in social programs, thus incorporating affective, political, cultural and historical dimensions of program contexts in evaluations. The emphasis on the context here corresponds to the work of Stake, who notes: ‘all evaluation studies are case studies’ (1995: 95). He notes that wider learning from evaluations is not generalisations, the principal goal in the agricultural-botany paradigm, but particularisations: ways of understanding what the various planned elements of a program mean for the operation of the system and the lives of those participating in it. The perspectives of Stake and Parlett and Hamilton on understanding educational programs are evident in the fine grain of curricular development described by Stenhouse (1975). Lawrence Stenhouse, in the context of elaborating an innovative approach to the curriculum in general education, specified a new role for evaluation. His starting point was two critiques of the developments in evaluation theory and practice, particularly first- and second-generation evaluations (see section 2.3, p. 19 above), in the US. First, he considered that the focus on external evaluations was inappropriate for the more decentralised (at that time!) British schools context; and second, the evaluations of specialist, expert evaluators did not meet the needs of teachers developing the curriculum within schools and classrooms (Stenhouse 1975: 99). Evaluation use for curriculum betterment, in Stenhouse’s view, required the process of evaluation to be integrated into the pedagogic processes, thus building a shared understanding of methods of enquiry and strategies of teaching and learning. Stenhouse saw a need to merge evaluation and development of the curriculum. In the context of developing effective learning materials and pedagogic
practices, and using them in ways that maximise their potential, the task of evaluation merges with that of teaching, thus establishing the context where improvements can be implemented.
Quote 2.7
Stenhouse on evaluation and curriculum research
We know enough now to shun the offer of ready solutions. Curriculum research must be concerned with the painstaking examination of possibilities and problems. Evaluation should, as it were, lead development and be integrated with it. Then, the conceptual distinction between development and evaluation is destroyed and the two merge as research. Curriculum research must itself be illuminative rather than recommendatory as in the earlier tradition of curriculum development. (1975: 122)
For Stenhouse ‘an adequate evaluation’ should illuminate the practice of the curriculum in much the way that Parlett and Hamilton recommend. Its primary function is to shape practice in the teaching/evaluation context, rather than generate findings which can be recommended and disseminated to others. It is thus a framework for both action research for pedagogic development and innovation management, and for evaluation for quality management within programs, as explored in chapters 9 and 10. Legutke and Thomas (1991), developing a principle of Communicative Language Teaching (CLT) (set out in Breen and Candlin 1980), provide an account of implementing the foreign language curriculum in this way. Working with principles of task-based learning within a CLT framework, and of autonomy in humanistic approaches to learning, they specify a strong role for evaluation. It is an explicit part of the decision-making and participation in task projects and simulations that make up the opportunities for learning in this context.
Quote 2.8 Legutke and Thomas on Evaluation in Communicative Language Teaching Engaging learners in communicative encounters, especially if their aim is to explore emotional content and experiences, can become too bound up in itself unless this activity also reaches an evaluation stage. Trying to understand what has happened while undertaking a particular task, why it was suggested by the teacher, and contributing actively to the evaluation of learning arrangements, sequences, resources and input materials by means of reflection and meta-communicative discourse – all these are considered indispensable learner activities in ELT. (1991: 65)
The focus here is on the role of evaluation in learning for the students. For Stenhouse, evaluation is also about learning, especially to benefit the teacher in devising effective strategies for improving the curriculum as it is implemented. He lists five criteria which frame this type of evaluation: not so much benchmarks to be determined empirically as considerations sensitive to philosophical aspects of learning and the particularities of context.
Concept 2.5
Evaluation for curriculum betterment
Stenhouse’s experience of curriculum development and evaluation within the Humanities Project generated five criteria which the process of evaluation should reflect:
1. Evaluation should constitute a philosophical critique, disclosing the meaning of the curriculum, rather than assessing its worth. The data for the critique are from observation in classrooms which are responding to the curriculum.
2. Evaluation should identify the potential of the curriculum or educational practice in relation to its purpose and actual context.
3. Evaluation should identify interesting problems: a requirement of improvement is understanding and finding strategies to deal with barriers to learning which are persistent or recurring.
4. Evaluation should address local conditions: improvement is possible only if the potential of innovative practices (2 above) works to resolve the interesting problems (3 above).
5. Evaluation should elucidate: it should inform on the extent to which the curriculum throws light upon the problems of change in education, and to which it contributes to a theory of innovation in a particular school, or more generally. (Stenhouse 1975: 118–20)
Stenhouse’s work has proved influential and enduring across British education generally and language education in particular. Elliott (2001) relates his approach to current thinking on evidence-based approaches to the evaluation and development of teaching within the National Curriculum; Crookes (1993) and Wallace (1998) relate aspects of his approach to the development of Action Research in second or foreign language education. Rea-Dickins notes:
If evaluation in ELT is to be effective, we will see a stronger integration of evaluation within practice, as part of an individual’s professionalism and an increase in collaborative activity where teachers (and other relevant practitioners) are actively engaged in the monitoring process. (1994: 84)
The contributions of Parlett and Hamilton and Stenhouse and his colleagues in many ways rewired evaluation theory. The ways in which evaluation processes
interfaced with practice on the one hand, and educational research on the other, posited radically different roles for all participants in the curriculum. It is no surprise that such innovative thinking has taken some time to work its way into learning theory, policy-making and classroom practice. We can now see this thinking find its place: the principles of CLT, sociocultural accounts of learning, the role of Information and Communication Technology in the curriculum, and the implications of all of these for individualised learning and developing autonomy mean that in language education, as well as in mainstream programs, the virtuous cycle of practice, evaluation and illumination is considered a key approach to development of the curriculum.
2.8 Summary
In this chapter we have described the principal strands in the early development of evaluation theory and practice. This phase started with the development of empirical evaluation studies which emphasised fidelity to program design and external validity. A range of further developments placed the focus on more comprehensive accounts of program experience, and the use of evaluation to understand and develop programs. The writers on evaluation considered in this chapter set a challenging and enduring agenda: the case studies in Part 2 illustrate the current relevance of the themes raised. Chapter 3 explores further the democratisation of evaluation practice and its integration with research and professional action. The challenge of realising such a comprehensive and inclusive notion of program evaluation provided the impetus for new theoretical orientations such as constructivism and realism, and drew attention to the policy and political factors which shape programs.
3 Historical Perspectives: Focus on Context and Use
3.1 Introduction
In this chapter we examine important notions in the theoretical hinterland of evaluation practice. These notions do not so much generate evaluation designs and methods as inform our understanding of evaluation and its relationships to other areas of social science and educational action. They reflect, on the one hand, a need for innovative thinking in dealing with enduring social and educational problems, and, on the other, the expectation in postindustrial societies that such problems can be eliminated and our social condition infinitely improved. We look at the challenge of evaluation use in educational programs, the contribution of constructivism and realism to this goal, and the political and managerial dimensions of the betterment task.
3.2 Use of evaluation
It is a truism that the findings of an evaluation should be used: there is no justification for constructing information on program design or implementation, or the underlying policy (often using resources of the program itself) if the process and findings are not used. However, experience suggests that use is a problematic issue: Cronbach et al. observe that:
Whereas persons who commission evaluations complain that the messages from the evaluations are not useful, evaluators complain that the messages are not used. (Cronbach et al. 1980: 47)
This view resulted in the Joint Committee on Standards for Educational Evaluation prioritising ‘utility’ in their guidelines for evaluation (see section 2.6, p. 30 above). A member of this Committee, Michael Quinn Patton, has since developed a range of strategies to enhance usefulness and actual use of evaluations. Patton (1997) outlines a seven-stage process for the design and implementation of evaluations which are utilisation-focused. In some ways,
Patton’s approach is an implementation strategy for the radical rethinking of the 1970s in the context of complex program management: many of the key principles focus on understanding the program and facilitating its development through engagement with and of the people involved. Although the framework for a utilisation-focused evaluation is set out in the form of a flowchart, Patton warns against assuming that the logic of linearity always prevails; the stages are overlapping and iterated as required. A significant problem for utilisation-focused evaluation, identified by Patton, is changes in program personnel: lack of continuity here often compromises the commitment to use that was initially negotiated and considered the source of energy and direction for the subsequent stages. Evaluations located in language education projects described in Alderson and Beretta (1992) and Rea-Dickins and Lwaitama (1995) also note this problem of personnel and continuity.
Concept 3.1
Utilisation-focused evaluation
1. Stakeholder analysis – identify interests and commitments of potential program stakeholders.
2. Users – determine primary intended users.
3. Commitment to use – the evaluators and intended users identify, prioritise and commit to intended uses of the evaluation. This stage identifies the most appropriate focus of the evaluation, e.g. outcomes, implementation or program theory, agrees research questions, and explores the consequences of the evaluation using fabricated potential findings.
4. Data – the evaluators and program practitioners decide on the data which need to be collected and the methods, instruments and procedures for collecting these.
5. Findings and recommendations – the evaluators and intended users together interpret findings and generate recommendations.
6. Dissemination – the evaluators and intended users decide on the dissemination of the report, considering both the needs of intended users (planned utilisation) and more general dissemination for broad public accountability (hoped for and unintended uses).
7. Evaluate the evaluation. (Patton 1997: 376–81)
Pawson (2002) sees the challenge of evaluation use as an important stimulus for the development of evidence-based approaches in the design of social programs. He describes the phenomenon of designing new programs on the basis of perceived need and policy trends rather than on the experience and evaluation of implementing similar programs. While in some ways it may seem reasonable to set aside programs and evaluations which are not seen as successes when taking on a new program design, it is also likely to be a lost opportunity: the analysis of need, and the design of appropriate interventions,
carried out systematically, may simply follow in the footsteps of preceding programs, and suffer the same problems and pitfalls.
Quote 3.1
Pawson on the use of evaluation findings
The topic here is ‘learning from the past’ and the contribution that empirical research can make. Whether policy makers actually learn from or simply repeat past mistakes is a moot point. . . . Few major public policy initiatives are mounted these days without a sustained attempt to evaluate them. Rival policy ideas are thus run through endless trials with plenty of error and it is often difficult to know which of them have withstood the test of time. It is arguable, therefore, that the prime function of evaluation research should be to take the longer view. By building a systematic evidence base that captures the ebb and flow of program ideas we might be able to adjudicate between contending policy claims and so capture a progressive understanding of ‘what works’. (2002: 157–8)
Evaluation use has been a persistent problem in the language program context. Bowers (1983) and Mackay (1994) note how programs which are part of development aid projects to improve language teaching or teacher education follow the design and implementation narrative outlined by Pawson above. Programs are informed by notions of classroom practice which do not take into account such contextual factors as teachers’ skills, schooling cultures and assessment formats. Bowers and Mackay call the evaluation of such programs War Stories and Romances: retellings without development of the same stories, though without happy endings! There are many reasons for the failure to build a systematic evidence base from these evaluations: the repeated use of a program design framework (such as logframes; see section 4.3, p. 64 below); the limited dissemination of evaluations; and the changes in policy-making personnel. The features of practice may inadvertently limit opportunities to evaluate the evaluations, an activity essential to evaluation use, as set out by Patton (1997), and noted by Rea-Dickins and Germaine (1992) as a key dimension of good language program evaluations. In addition to these institutional constraints, there is an element of resistance to evaluation at both personal and institutional levels. Taut and Brauns (2003) examine the phenomenon of stakeholder resistance to evaluation. While they see resistance as an ontological phenomenon, a naturally occurring reflex as the individual encounters change and seeks to maintain the stability of the status quo, they explore how a psychological understanding of resistance might be used to minimise its impact. Even though the initial response to participation in evaluation and using evaluation findings may be negative, evaluators can emphasise learning potential and reduce perceived threatening aspects. Strategies recommended include fostering positive attitudes, analysing personal cost-benefit, attending to
control and power issues and providing esteem-building performance feedback to individuals. Patton (1995) proposes assessing ‘evaluation-readiness’ as a way of understanding the capacity for evaluation use, a strategy we explore further in Part 3.
Quote 3.2
Taut and Brauns on resistance to evaluation
Anticipated negative consequences of evaluation include:
• negative judgement which affects self-esteem, social comparison, treatment by superiors;
• loss of control by experiencing restriction of perceived freedom;
• loss of control because of insecurity concerning consequences of the evaluation;
• loss of power following a renegotiation of resources (for example, status and decision-making power);
• loss of rewarding tasks and other reinforcing situations (for example, freedom to structure the work day individually); and
• direct costs of the evaluation (for example, time and effort).
(2003: 252–3)
3.3 Constructivism in evaluation
As illustrated in chapter 2 above, the dominance of experimental quantitative designs in evaluation and their perceived shortcomings generated a range of alternative approaches. One of these is based on a constructivist perspective on experience, which seeks to understand the success (or other outcomes) of innovations or programs in terms of subjective experience rather than objective outcomes. Labelled Fourth Generation Evaluation by its principal proponents, Guba and Lincoln (1989), it is rooted in the qualitative, interpretive research paradigm on the one hand, and in a postmodernist perspective on hierarchy and power distribution in social organisations on the other. The ontological contribution of constructivism is relativism: each individual’s experience, and the way each interprets and makes sense of that experience, are different, and the task of evaluation is to understand these experiences and interpretations without seeking a single, universal, objective truth. The evaluation process involves discovery and assimilation stages. Discovery is elucidation using data on the various constructs of the program. These may include formal proposals and findings of previous evaluations and reviews, as well as the stories of program participants. The assimilation phase is the sense-making task of the evaluator. The various constructs are drawn together to show the range of meanings of the program. These are related critically to program purposes and used to generate modifications which may be appropriate for program development.
Concept 3.2
Four generations of evaluation
Guba and Lincoln’s (1989) account of the development of evaluation uses the generation metaphor to represent four phases, which share family features and likenesses, but differ as a result of adaptations to their times, and as a result of accumulated developments. The key characteristic of the first two generations is a reliance on preordained categories, while the third and fourth generations work with actual program implementation and stakeholder experience. The account is largely the story of American social program evaluation; in Britain and other parts of the world, and in the particular context of language programs, the first and second generations have been concurrent, and have until recently continued to play a leading role in educational evaluation. The generations can be summarised as follows:
First generation: the measurement of outcomes, especially test results (see section 2.4, p. 23 above);
Second generation: the specification and measurement of processes as well as outcomes (see section 2.3, p. 19 above);
Third generation: descriptive approaches which focus on contributions to decision-making, policy development and program improvement (see sections 2.5, p. 26 and 2.6, p. 30 above);
Fourth generation: descriptive approaches, based on individual constructions of program experience.
There have been a number of critiques of constructivist approaches. An important view is that of Kushner (1996) and Stake (1995), who note that in preserving the subjective accounts of program experience, constructivist evaluators may limit the development of a consensual view which can contribute to program and wider policy development. Kushner is concerned about ‘the unpredictable paradox’ of constructivism in evaluation:
Absolute relativism creates an unpredictable paradox – on the one hand it promises to free the individual from official versions of their lives; on the other, it grants license to arbitrary powers to dismiss complaint and dissent. (Kushner 1996: 196)
This concern relates to power distribution among program stakeholders: while appearing to redistribute power, when stake and influence come closer together, the effect may actually be to limit the contribution of the ‘freed’ individual. He proposes instead adherence to Stake’s view of case studies in evaluation. In this approach, each account is taken as an interpretation of the realities of the program. The task of the evaluator is to use triangulation, a process of ‘balancing one account with others, measuring the accuracy of accounts by comparing various versions, testing the limits of consensus’ (Kushner 1996: 196). The contribution of the constructivist evaluators, especially Guba and Lincoln, has been not to provide a complete method, an alternative to either
experimental studies or mixed method approaches. Rather, it has been to focus attention on the complexities which develop as planned programs are implemented, and on the diverse experiences and interpretations of program participants. The former is especially relevant to language program evaluations, where it has facilitated a move from reliance on formal specifications of programs to understand operation and impact, to engagement with emergent realities as programs are implemented (Germaine and Rea-Dickins 1998; Pennington 1998; Tribble 2000). Pennington’s account of evaluation focuses on dynamic – activities and interactions – rather than static aspects of programs, and the purpose seems firmly grounded in a Stenhousian notion of program development. The central role for ‘people’ here illustrates the contribution of constructivism: understanding their particular experience of a program is a substantial element of, but not the whole of, the task of evaluation.
Quote 3.3 Pennington on dynamic aspects of language program evaluation

Language program evaluation is then less a set of figures or documents than it is a set of activities. These activities involve people and their interaction in gaining increased understandings which allow them to function more effectively in their work environment. At the same time as these activities make it possible for people to adapt to their environment, they also open up the way for changing the environment, so that it better suits their needs and purposes. Thus, evaluation at its most basic level is the process of interaction that dynamically relates to people, processes and things that make up a language program in a process of mutual enlightenment, adaptation, and betterment. (1998: 205)
Constructivism has also had a wider impact on educational programs in two ways. First, it has generated rich and widely used theories of learning generally and in the specific context of language learning. The emphasis here is on language learning as a socio-cultural learning process, rather than as a cognitive one. The work of Jim Lantolf and associates (Lantolf 2000; Lantolf and Poehner 2004), developing concepts initially outlined by Vygotsky (1978, 1987), is particularly important here. A related impact is the use of constructivism to understand language syllabus and program design. Kramsch (2002) and van Lier (2004) explore how teaching can develop using concepts such as ecology, affordances and flow to understand conceptually what good classroom processes involve. Ecological approaches and affordances refer to the potential of the curricular context, while flow (Csikszentmihalyi 1997) captures the rhythms of effort and achievement which characterise program implementation. In Part 3 we explore ways in which data that inform on these aspects of programs can be developed as part of innovative program evaluation.
Second, attention to the subjective experience in constructivist evaluation has broadened the agenda. Writing in 2001, Lincoln identifies five forces which are explicitly engaged in Fourth Generation approaches, and which she sees as particularly relevant to understanding social programs in the twenty-first century. These embrace key themes in critical theory and Applied Linguistics: they are particularly relevant, epistemologically and methodologically, to the design and implementation of language policies and programs and to their evaluation. Such programs are inherently multicultural and intercultural, incorporating views of language, learning, teaching, schooling processes, and the purposes and potential of technology informed by a range of cultural perspectives and traditions. Evaluations which are concerned with the success, or otherwise, of specific initiatives within programs, or more broadly with sense-making and explanation, are likely to find in the constructivist contribution generally and these five themes in particular some elucidation of the way forward.
Concept 3.3 Constructivist evaluation: the wider agenda

Five forces which shape social programs and provide ways to make sense of their operation:
Postmodernism – moving from an ideology of hyper-rationalised empiricism and a social engineering which proceeds on the same trajectory as technological engineering, to an acknowledgement of the different ways in which life, within specific programs and more generally, becomes meaningful.
The interpretive turn – accepting that stakeholders’ perceptions of program entities and activities are as valid as ‘objective’ accounts of the features and purposes of these.
The role of identity politics – addressing historical power imbalances and social disadvantage, both in social programs and in the wider social contexts which frame such programs.
Globalisation and corporatism – preserving the local in an increasingly globalised world, and addressing the evaluation needs of programs in cross-national environments and the policy frameworks and cultures in which these are implemented.
Postcolonial critiques – providing the methodological tools to break with conventional Western science and permit localised and indigenous knowledge to emerge as meaningful forms of the discourses around social programs.
(Lincoln 2001)
We see constructivism, therefore, as contributing to evaluation in two ways. It can mean a strong commitment to the subjective, bringing to evaluations a methodological orientation similar to narrative accounts (Bell 2002) and biography (Clandinin and Connelly 2000). Alternatively, it can mean a reduced emphasis on the program design, and in a range of ways, acknowledgement that what must be understood is the program as it is implemented, and the
network of factors that shapes this. This weaker view is evidenced in different ways in the case studies in Part 2. Section 3.4 examines the ways in which another set of social theories has shaped evaluation. Realism, however, focuses not so much on the experience of the individual within programs as on the patterns of reasoning and volition within programs which individuals share.
3.4 Realism in evaluation

An innovative perspective on program evaluation in the 1990s came from the application of realist principles. Pawson and Tilley (1997) set out an approach to evaluation which involves identifying the elements of a program to be evaluated (mechanisms), and tracing the effects of these (outcomes) in specific social and cultural conditions (contexts). The purpose of the evaluation is primarily to explain why certain mechanisms generate certain outcomes in certain contexts. The theoretical account of the program produced by the evaluation is then available for improving that program, and also for use in the design of other programs.
Quote 3.4 Pawson and Tilley on realist evaluation and program theory
Policy-makers try to engineer episodes of social change, and the success (or otherwise) of these initiatives depends upon the extent to which the program theory has been able to predict and control the interpretative spiral of ideas and social conditions. Just as a theory of physical change precedes the natural science experiment, the careful enunciation of program theory is the prerequisite to sound evaluation. One can use broadly the same formal and general conceptual matrix with which to express those program theories, namely: outcome = mechanism + context. In other words, programs work (have successful outcomes) only insofar as they introduce the appropriate ideas and opportunities (mechanisms) to groups in the appropriate social and cultural conditions (contexts). All else in realist evaluation follows from such explanatory propositions. (1997: 56–7)
Realistic evaluation is concerned with the specification of causal factors. However, while the notion of ‘explanatory propositions’ resonates in some ways with the positivist experimental tradition in evaluations (see section 2.3, p. 19 above), for Pawson and Tilley the proposition is not a generalised theory; rather it is a program theory, located in the nexus of views, values and volitions that constitute a program in action. We can thus see how, in language program evaluations, a realist approach has the potential to explain the success or failure of elements such as instructional strategies or learning materials. The evaluation is unlikely to prove a general theory (as was the goal, for example, of the Bangalore CTP evaluation). The findings generate a situated theory, relevant to other situations where similar features prevail. Accumulated
findings, where systematically constructed and aggregated (see Quote 3.4 above), can be used to devise increasingly appropriate program designs. Because of the way they attend to context, realist evaluations can be considered inherently ‘program-fair’ (see Quote 3.4 above): the pattern of actual attitudes and activities within the program determines the lines of enquiry which shape the data collection, and the hypotheses tested in the analysis of these data. Section 4.3, p. 59 and chapter 7 present instances of language programs where the evaluation design and focus were generated by language education theories which formed part of the rationale for innovative practice, rather than the actual curricular and social realities as the program was implemented. A group of realist evaluation theorists in the US developed realist approaches to understanding and evaluating social programmes in a slightly different way from Pawson and Tilley. Their concern was also to identify and measure causal factors, but rather than work with Mechanisms, Contexts and Outcomes, they proposed a more focused analysis of the relationships between them. The approach is analogous to emerging theory in the physical sciences: what Mark, Henry and Julnes (2000) describe as a focus on the molar – the relationships between elements – rather than the molecular – the particular qualities of elements. They posit five relationship factors which mediate the treatment effect, or impact, of the program.
Concept 3.4 Relationships examined in an emergent realist evaluation

1. Cause – what are the reasons for the program experience (innovation, intervention or treatment)?
2. Recipients – what are the characteristics of the people benefiting from the program?
3. Setting – what is the social context of the program?
4. Time – how long is the program, or what period is there for the treatment, innovation or program to take effect?
5. Outcome variable – what is the program for?
(Mark, Henry and Julnes 2000)
The purpose of a realist evaluation is to determine 1) whether the program theory prevails – that is, if the program has the impact intended; and 2) which of the relationship factors listed above has a particularly strong causal effect. To explore the latter a process of competitive elaboration (Mark et al. 2000) is used. This involves an enquiry using reasoning and data to determine the operation of each of these factors on the success of the program. The theoretical basis of this approach to evaluation is grounded in examination of the relationship between different elements of a program – the molar – rather than the inherent qualities of these elements – the molecular. An example of such an evaluation
design for a language course book is set out below. The approach here reflects two important programs of enquiry in language education and Applied Linguistics: the in-use evaluation of learning materials (Breen 1985; Rea-Dickins and Germaine 1992; Ellis 2003); and the study of cognitive aspects of language learning tasks (Foster and Skehan 1996; Skehan 1998; Ellis 2003). We see realist evaluation as having the potential to integrate these different modes of understanding language programs so that enquiry can build knowledge for different purposes: judging merit, program improvement, ensuring compliance with mandates, building knowledge and expertise for future programs.
Concept 3.5 Realist evaluation of a language course book

A realist evaluation of an English language course book, for example, would not focus on its design features and inherent qualities, that is, its molecular characteristics – the approach, for example, in Cunningsworth (1995) and McDonough and Shaw (2003). It would instead focus on the connections between these characteristics and the five features listed above – the molar. To determine the success of the course book, it is necessary to consider:
• the reasons for its adoption and use,
• the students using it,
• the learning context,
• the length of the program, and
• the nature of the end-point assessment.
A realist evaluation would elucidate the dynamic relationship between these factors, capturing (some of) the complexity involved where a conventional account would find some positive effects and some negative. The focus is on both how successful the initiative is, and why: explanation and sense-making are part of the enquiry so that as a result of the evaluation of a course book it is possible to specify which of a complex network of factors are particularly relevant.
The realist and constructivist approaches to evaluation emphasise the program context and the emerging implementation factors which can characterise a program and determine the extent to which it is successful. Section 3.5 explores further the political dimensions of evaluation, and the ways in which these have provided frameworks for conceptualising and understanding evaluation processes and outcomes.
3.5 Political dimensions of evaluation

Cronbach et al. (1980: 3) observed in the introduction to their approach to evaluation: ‘A theory of evaluation must be as much a theory of political interaction as it is a theory of how to determine facts.’ There are two ways in which politics interacts with an evaluation: in the wider social context of social programs and their evaluation, and within the evaluation contexts themselves. The ideological
orientation of the wider context may shape the orientation of the evaluation, or determine how findings are used. Schick’s (1971) somewhat sceptical view is still current in the experience of more contemporary evaluators:
• eyewash evaluation – to make a program look good
• whitewash evaluation – to cover over the failure of a favoured program
• submarine evaluation – to sink an unpopular program
• posture evaluation – to satisfy a condition of funding
• postponement evaluation – to put off the need to act
Schick was writing in a period when ideology was invested with the potential to solve a range of social problems: one consequence was a tendency, realised through evaluation, to represent the program as fitting the ideological position. Mitchell’s (1992) account of the Bilingual Education Project in Scotland illustrates how the political commitment to the program’s wider aims risked turning the study into an ‘eyewash’ evaluation: key stakeholders vetoed the exploration of pupil attitudes towards Gaelic, because they felt that positive findings would be evident only after the period of the program. Data on attitudes, however, were considered by the evaluation team to be central to both the success of the program and the theoretical issues involved – community language maintenance and language shift. Brindley (1989) and Nunan and Brindley (1986) note the importance of program representation through evaluation in the context of the Australian Migrant English Program (AMEP). Coleman describes a conflict between ‘evaluating a project in its own terms’ (1992: 237) and evaluating it in terms ‘of the criteria established when the project was set up’ (1992: 239). In their editors’ postscript, Alderson and Beretta (1992) comment on the way the resolution of this conflict through evaluation draws together key issues in stakeholding, insider accounts and the politics of evaluation.
Quote 3.5 Alderson and Beretta on politics of program evaluation

Coleman draws our attention to the most poignant aspect of the insider’s dilemma: his own career prospects may be affected by the failure of the project, judged according to someone’s (inappropriate) objectives! And as if that were not enough, he sees himself obliged to exaggerate the needs of the situation and the activities of the situation well beyond what is feasible, simply to convince agencies that the project is ‘aid-worthy’: that it should be funded by outside agencies. An honest account of what could be achieved in the circumstances might well have seemed trivial. Yet by overstating the needs and activities, he makes himself liable to accusations of inefficiency when he fails to achieve the impossible, or of unprofessionalism in advocating the impossible. (1992: 247)
The view of language programs operating in the realms of the impossible is an established one: Markee (1993; 1997) notes in a review of innovations in foreign and second language education that the abiding theme is lack of success, while Holliday (1992) describes the fate of such innovations as ‘tissue rejection’. The problem may be due in part to the approach to evaluation characterised in section 2.3, p. 19 above and by Coleman (1992): an approach MacDonald (1976) describes as ‘autocratic’.
Concept 3.6 MacDonald’s political classification of evaluations

1. Bureaucratic evaluation: The evaluator receives terms of reference for the evaluation, and accepts the values implicit in these. The report complies with systemic frameworks, and is owned by the bureaucracy.
2. Autocratic evaluation: The evaluator as expert negotiates the terms of reference. These, like the report, tend to be informed by academic values: owned by the evaluator, and often published.
3. Democratic evaluation: The evaluator negotiates the terms of reference taking into account the interests of immediate stakeholders (participants in the program) and more remote stakeholders (the citizens at large). The data and report as a whole should be accessible to non-specialist audiences. Reports are widely disseminated, with a view to stimulating and informing critical debate.
MacDonald’s (1976) political perspective describes the distribution of power as a key issue in the evaluation process. His three orientations for the evaluator’s interactions in the program and evaluation context derive from his analysis of the political dimensions of the innovative ideas of Parlett and Hamilton and Stenhouse in the 1970s (see section 2.6, p. 30 above). A key element for MacDonald, as for Stenhouse, is to facilitate democratic evaluation as a tool for understanding programs in a way which informs decision-making, policy development and the identification of practices which contribute to success (MacDonald 1976; Hopkins 1989). Democratic evaluation has proved an important stimulus for the development of theory and practice in the 1980s and 1990s. Its key notion that program and policy development are achieved through democratic accountability has been developed by evaluators based at the Centre for Applied Research in Education (CARE), University of East Anglia (Norris 1990; Kushner 1996). A similar perspective on evaluation has been developed by Kemmis (1986), better known as a supporter of action research approaches to the development of the curriculum.
Quote 3.6 Norris, and Kemmis, on evaluation and policy development

The belief that institutions and culture can be deliberately fashioned through experimentation and research is one of the hallmarks of twentieth-century social thought. Evaluation has emerged as the major practical expression of the application of theories and methods from applied social science to the problems posed by piecemeal social engineering. (1990: 9)
Evaluation is the process of marshalling information and arguments which enable interested individuals and groups to participate in the critical debate about a specific programme. (1986: 118)
The role of politics in evaluation described above focuses largely on evaluation purpose and evaluation use. More recently in Britain there has been a debate about what evaluations of social programs should be about. The task of evaluations is not so much to evaluate policy as to inform on what the policy should be. The shift here is due in part to a lack of ideological direction from the left or right. As Deng Xiao Ping is reputed to have said during the shift away from a rigid Marxist political system in China: ‘It does not matter if the cat is black or white as long as it catches mice.’ Policy development and approaches to program enquiry are informed by an evidence-based approach: the development of education policies and curricular practices based on what works. A key word used in relation to such programs is delivery, a term that emphasises ends rather than means, product rather than process. The challenge for evaluation here is to engage with the task of quantifying outcomes without losing the sense-making and development potential of evaluation. To return to the cat and mouse analogy, it is important to provide an accurate account of its performance or delivery, but it is also necessary to explore the aspects of its character and behaviour which coexist with strong performance, and to bear in mind that its most salient attribute, colour, may be irrelevant. One realisation of this approach to policy development has been Hargreaves’ ‘engineering model of educational research to effect a direct influence on educational action in the areas of policy and practice by generating evidence of what works’ (1999: 243). The task is to identify successful instructional strategies which teachers can implement. This engineering model (though in a different sense from Tyler’s; see section 2.3, p. 19 above), an aspect of the challenge of evidence-based policy development for evaluation which we set out in chapter 1 above, can be considered political in two ways: 1) it establishes policy, perhaps inappropriately narrowly, on what is considered good teaching; and 2) it seeks to set out in a neutral, technical way, aspects
of program provision which might be most properly considered areas for professional, situated, values-based decision-making. This discussion emphasises the political nature of all evaluations and the centrality of debate as a means of making use of their processes and findings. The challenge of democratic evaluation is to develop links between program evaluation and professional practice. This is effected through attention to power distribution in the conduct of the evaluation, so that power does not rest solely with the regulating authority (bureaucratic evaluation) or with the evaluator (autocratic evaluation). MacDonald’s perspective on the need to make evaluations more democratic has been influential: it is still a significant orientation in evaluation for quality management (see chapters 9 and 10), and especially in the development of stakeholder approaches (see chapter 12), where the data generated by the evaluation process are oriented towards the practice needs of teachers and other program participants. On another front, however, the role of what might be considered bureaucratic evaluation has also grown, not only in the context of language education, but in social programs and professional practice more generally. Section 3.6 explores the implications this growth has for the development of evaluation theory and practice.
3.6 Compliance with mandates

An aspect of evaluation use which has grown in recent years relates to quality management in social programs and the monitoring of compliance with mandates. Norris (1998), in a review of trends and typologies of curriculum evaluation, identifies six commonly used and routine approaches to evaluation. Four of these approaches represent forms of monitoring to ensure compliance with mandates.
Concept 3.7 Six approaches to curriculum evaluation in Britain in the late 1990s

Experimentalism (that which purifies us is trial and trial is that which is contrary): The use of classic research designs (see section 2.4, p. 23 above).
The objective achievement model (let me know mine ends and measure them): The measurement of outcomes, such as test results.
Performance indicators (every breath you take, every move you make, I’ll be watching you): Scrutiny of a range of features of the program or institution.
Self-study (know and better thyself): Statement of rationale, strengths and developments by the program or institution.
Expert or peer review (experto credo): Observations of a visiting expert, such as an external examiner.
Inspection (the Real Inspector Hounds): Scrutiny on behalf of the relevant authority.
(Norris 1998: 209–14)
Performance indicators are, in effect, criteria established externally for the periodic review of a program (an example is the logframes used in language programs in development contexts, discussed in section 4.3, p. 59 below). Self-study is an internal review undertaken for the purposes of program or institutional improvement. In the 1980s, it was set out as a form of ‘goal-free’ evaluation: the task was to identify strengths and weaknesses and use the information to improve (Hopkins 1989; Pennington and Young 1989). Increasingly, such processes are externally regulated, with a significant bureaucratic dimension. Expert or peer review is often the means of external regulation: it involves practitioners, specially trained as evaluators, undertaking reviews of programs and institutions. Inspection or audit describes the process by which compliance is assessed and by which all these forms of evaluation shape programs. In Britain, where the inspection approach to curriculum development and review has always been strong, these forms of evaluation characterise review processes in both general education and in language programs. The approach to evaluating university departments is a process of peer review (http://www.qaa.ac.uk) (such peer review processes constitute the wider context of the evaluations explored in chapters 9 and 10), while the process used in primary and secondary schools uses teams of specialist inspectors. In English Language Teaching, the British Council leads the accreditation of institutions in the public and private sectors in Britain (http://www.britcoun.org), while the European Association for Quality Language Services (EAQUALS) aims to promote and guarantee quality in modern language teaching institutions in 29 European countries (http://www.eaquals.org). English language teacher education is managed in a similar way: examination boards use a series of schemes to accredit English language schools, specific programs and professional teaching qualifications. Concept 3.8, with reference to the Trinity College London (TCL) Certificate in TESOL qualification for English language teachers, illustrates the processes involved in evaluations which are based on mandate compliance. The process starts with a government-approved comprehensive categorisation of learning achievements, designed to build the national skills base, to ensure parity across different learning contexts and communities of professional practice, and to protect users/learners from exploitation or deception. Then, Trinity College London, a respected examination board in the TESOL field, successfully establishes its initial teacher training course – the TCL Certificate in TESOL – as a Level 4 qualification, and thus has a mandate to award Level 4 certificates. Next, institutions such as language schools and universities propose that their programs be validated, such that successful trainees are awarded the TCL Cert TESOL; where successful, they have a mandate to offer a course leading to this qualification. This approved program is then Mandate 4, with which trainees must comply. The operation of the course is a context for verification that Mandate 3, and ultimately Mandate 2, are complied with. As is appropriate in a democratic state, the performance of the QCA and its mandate to oversee learning nationally are a matter for Parliament and its various monitoring agencies and processes.
Concept 3.8 Compliance in action in an initial teacher education scheme

(description of the operation of the Trinity College London Certificate in TESOL program)

Mandate 1: The Qualifications and Curriculum Authority (QCA), a government agency, specifies learning requirements for Level 4 professional qualifications.

Mandate 2: The Trinity College London Examination Board sets out learning outcomes and assessment requirements which interface with QCA Level 4 professional skills.
Evaluation: A process of document review and bench-marking establishes compliance with levels of learning achievement, volume of learning and modes of assessment appropriate for Level 4.

Mandate 3: Teacher education institutions set out programs which meet the examination board’s validation guidelines.
Evaluation: A process of document review and bench-marking establishes compliance in areas of entry requirements and procedures, syllabus, timetabling, assessment tasks, criteria and processes, staffing, resources and premises.

Mandate 4: Institutions and teacher trainers set requirements and provide feedback for trainee teachers so that they can show skills appropriate for the award; these are assessed by program tutors and external assessors from the examination board in accordance with formats and criteria outlined in Mandates 2 and 3.
Evaluation: Institution and program management evaluate each program, usually using questionnaires, group discussions, individual interviews and meetings, as well as scrutiny of tasks presented for formal assessment, to ensure that all aspects of the planned program are implemented, and to a satisfactory standard. External (TCL) moderators/assessors then visit each program to assess for QCA Level 4 compliance, and to check that the program is implemented according to the validated ‘proposal’ for that course.
Weir and Roberts (1994), Blue and Grundy (1996) and Kiely (2001; 2003) represent accounts of evaluation practice carried out routinely within language programs in order to meet internal and external commitments to quality assurance. Norris (1998) sets out the wider policy context
for this management function in public sector education in Britain, and Thomas (2003) provides an overview in the field of English language teaching. Trends in areas such as consumer protection and accreditation of programs and institutions have created an industry of inspections, audits and reviews to manage compliance with mandates. Such cultures of quality management can stimulate internal evaluations and institutional self-evaluation, as described for example in Hopkins (1989) and Blue and Grundy (1996). Blue and Grundy describe how a team evaluation of an EAP program using a checklist proved beneficial both for the curriculum and for mandate compliance.
Quote 3.7 Blue and Grundy on mandate compliance on an EAP program
At a more utilitarian level, it has been reported by course directors that the checklist has proved a good way to get a team of tutors to understand the basis on which they and their course were to be assessed. And tutors have remarked that working through the checklist in advance helped them to understand what the assessors were looking for, and to explain how the course organization and their own teaching addressed the requirements of the accreditation scheme. (1996: 249)
However, we see two potentially negative impacts of this form of evaluation. First, such evaluations rely on static characteristics (see section 2.3, p. 19 above) rather than more dynamic views of programs as learning opportunities (Kramsch 2002; Crabbe 2003). This may result in a tendency to equate documentation of program implementation or professional practice with what actually happens, which can lead to spending more time on bureaucratic activities than on, say, language teaching or teacher education. In consequence, a valued professional skill may become that of documenting compliance, an activity which does not actually benefit the user, such as the language learner or the trainee teacher. Second, there is a risk that compliance with mandates determines the curriculum, specifying required or good practice in such a way that teachers or teacher educators make few professional decisions, and increasingly feel their task is to comply, to deliver a program in which they may have limited confidence. Mandates may have a negative washback effect on the curriculum and learning experience similar to that generated by the format of external language tests (McNamara 2000). Ferguson and Donno (2003) reflect on how the privileging of mandates can limit the innovative action required to solve enduring program design problems, in this case the format of English language teacher training courses.
Quote 3.8 Ferguson and Donno on compliance in teacher education programs

A one-month course concentrating primarily on practical techniques may be attractive when teaching can be conceived of as the implementation of a particular method or set of procedures. Today, however, more than ever, such circumstances do not obtain. We live in the post-method age, and there is no theoretical consensus for any one methodology. (2003)
In the health care field, an approach based on concordance is being advocated as an alternative to compliance. Candlin (2004: 9) sets out the distinctions between the two terms: ‘Compliance is ends-oriented, whereas concordance appears more balanced between means and ends, but with a balance on the means.’ The context of compliance in this field involves patients following doctors’ prescriptions vis-à-vis medication and other therapies, and the main problem is non-compliance, which reduces the effectiveness of therapies. While there is no evidence in the language program field that non-compliance in the processes described in this section is a problem, other factors suggest that a concordance approach might be beneficial in areas where mandates are rigid. Candlin notes that compliance approaches prevail in contexts and interactions characterised by unequal power distribution and poor communication. These features may be important in the context of language programs, where an institution is highly dependent on accreditation, but the accrediting body is not dependent on any one institution. The challenge of communication may be met by simplified rules, at the expense of the more risky flexibility. A concordance approach, based on negotiation rather than tick-box compliance, would emphasise professional decision-making at all levels within programs, and provide the space to innovate and problem-solve in ways not facilitated by current mandates. The implications for evaluation practice are explored in Part 3.
3.7 Summary

In this chapter we have explored issues and trends in social policy and related program development which have shaped our understanding of evaluation in recent years. We have explored these developments in the context of actual activities in language program evaluation and of potential evaluation action. The discussion has documented the increasing reach of evaluation processes: just as the earlier shift saw the inclusion of the study of program processes to complement the measurement of program outcomes (see chapter 2), the shift described in this chapter represents a further broadening. The democratic imperative suggests wider participation
by both language program practitioners and managers of programs and institutions, and greater attention to the theory-building and professional learning potential of evaluation. In chapter 4, we take stock of the particular context of evaluation development in language programs and Applied Linguistics.
4 Historical Perspectives: Language Program Evaluation and Applied Linguistics
4.1 Introduction

In chapter 3 we looked at developments in evaluation more generally, and particularly at how these general trends and developments influenced language program evaluation. We consider here language education programs as for the most part located within this educational evaluation discourse, and evaluators who are Applied Linguists and also teachers, educationists and social scientists. In section 4.2 we describe these developments and consider how they have shaped language program evaluation. We then look at two areas in which particular features of the Applied Linguistics/language education field have been influential (section 4.3). First, we focus on language learning theory in program evaluations (both as a purpose and as a framework for designing tests and other instruments, and for analysing classroom data). Then we explore evaluations which are part of development programs, and where activity is often characterised by two kinds of constraints: the international setting of the program and the evaluation, and the challenges of program implementation. In section 4.4 we consider an emerging theme in language program evaluation: assessments of language use in contexts other than that of language education programs. These include service and communication contexts where performance has a language element, and where applied linguistics expertise has made a contribution to understanding the factors which relate to successful (and other) performance.
4.2 Evaluation in language education and Applied Linguistics

The evaluations carried out in these areas parallel the developments in program evaluation generally in five ways: (1) a shift from an exclusive focus on measurement of outcomes; (2) increased attention to classroom processes; (3) evaluation as the domain of professional practice; (4) the development of teachers’ (and other professionals’) skills; and (5) attention
to baseline and formative evaluation. These evaluation trends have had particular impact on both language programs and the strategies developed to evaluate them.

The shift from an exclusive focus on measurement of outcomes
The gradual movement from evaluating specified program outcomes to evaluating the actual curricular experience established a complex task for evaluators. In language program evaluation different outcome constructs have been used, for example, syllabus specifications (Munby 1978) and project frameworks (Nuttall 1991; Alderson 1992), in addition to a range of language competence constructs used to generate tests which have a program evaluation function. While these have proved helpful in providing an accountability perspective for program sponsors and stakeholders, they have not always guided program development in ways that facilitate improvement (Greenwood 1985). The ways in which this development has been achieved in language program evaluations are explored further in the case studies in Part 2.

Increased attention to classroom processes
Methodologically, the focus on process took evaluation away from tests – the measurement of outcomes – and into scrutiny of classroom processes. This focus has facilitated attention to language learning and instructional effectiveness issues within evaluations, and also to program sense-making – what the program means as it is implemented (see, for example, chapter 5). A range of technologies has been developed to describe and represent classroom interactions (Wallace 1991). Developments in discourse analysis within Applied Linguistics have led the way in developing systematic approaches to documenting and researching classroom interactions, while ethnographic strategies have been used to inform on intercultural, identity and affective factors which shape programs and determine their effectiveness. The case studies in Part 2 describe a range of approaches to representing classroom processes, and also engage with the resource issues (time and expertise) in developing and implementing these aspects of program evaluation.

Evaluation as the domain of professional practice
There has been an evolution in the person specification of the evaluator of language programs, paralleling developments in general educational evaluation. When evaluation was focused on measurement of outcomes, evaluators tended to be assessment specialists whose expertise did not necessarily cover the range of program components involved, such as management and training. The development remit of evaluations, with its implied partnership with practitioners, has involved establishing a somewhat different role for the
external evaluator. One aspect of this new role is the notion of evaluation consultant/trainer. In the 1990s, the Project Development and Support Scheme (PRODESS) of the British Council, within the framework of different English language education programs in Eastern and Central Europe, provided a range of training and support services to programs and institutions using evaluation for development (Kiely et al. 1993; 1995). Other examples of this recast evaluator role are examined in Part 2 – see, for example, chapter 9 for an examination of the quality management role of teachers in an EAP program. The extension of evaluation strategies to the internal development of programs has thus redefined the notion of expert evaluator. In line with a wider trend in program implementation and service provision, all program personnel have an evaluation remit, whether this is for personal professional learning, monitoring and enhancement of quality, or ensuring compliance with missions and mandates. Developing this emerging dimension of professionalism in language education is an ongoing challenge; it is explored further in Part 3, and resources such as those developed within PRODESS are listed in Part 4.

The development of teachers’ (and other professionals’) skills
The development remit has increasingly emphasised working with practitioners within programs. In general education this approach has been based on the Stenhousian perspectives described in section 2.7, p. 32 above. In English language program evaluation the notion of evaluation for curriculum development has emerged as a response to the prevalence of external evaluations and has been developed to cover the range of curriculum components and program activities (Rea-Dickins and Germaine 1992; 1998). This broad-based view of program development has proved an enduring guide for evaluation. Where it involves evaluation within development programs it often includes capacity building in terms of institutions, professionals and other stakeholders, and the terms of reference for evaluations typically include recommendations for practice as well as judgements of worth. It is equally relevant to evaluations within programs where the focus is on quality assurance: the practitioners are very often the evaluators, and the opportunities for developing the program and enhancing quality are linked to their own professional learning.

Inclusion of baseline and formative evaluations
The timing of program evaluations has changed. The conventional summative evaluation has been augmented by formative evaluation or periodic program reviews. The PRODESS Guidelines (Kiely et al. 1995b) outline approaches for such evaluations both as a program management strategy and as a contribution to external or summative evaluations. In addition, there has been attention to baseline assessments in program contexts, both to inform on appropriate development activities and to improve the quality of judgements
in evaluations at later stages of the program (Kiely et al. 1995a; Tribble 2000). The notion of baseline assessment has proved especially important in ELT program evaluations for two reasons: first, in the context of programs which are a part of an educational aid program in a developing country, such assessments provide an indispensable data-base for project planning and subsequent periodic reviews; second, the process of such assessments may constitute a valuable learning experience for the range of professionals involved. Developments in evaluation have been cross-disciplinary: new concepts and approaches in one field have been applied in others, acquiring some new characteristics as they met the program and policy requirements in these new fields. Thus, language program evaluation has developed with and from the experience of educational evaluation generally. The experimental research design, constructs for the representation of teaching and classroom interaction, and the role of evaluation techniques in quality management are notions imported from education and other social sciences. These in turn have been shaped by epistemological positions in the study of social phenomena such as empiricism, constructivism, realism and soft systems methodology. The evaluation of language education programs has also benefited from developments in Applied Linguistics: here theoretical frameworks from the study of second language acquisition, discourse analysis and communicative language teaching represent specific contributions. In the next section, we review the relationship between language learning and evaluation, and the contributions of evaluation within educational aid programs over recent decades.
4.3 Language program evaluations

As we have argued above, evaluation of and within language programs over recent decades has been influenced by the trends in general educational evaluation. The issues in language programs have shared much with those of other social, educational and innovative contexts. In two areas, however, language program evaluation design factors have had a significant influence on the development of evaluation theory and practice in this field. First, developments in recent decades in Applied Linguistics and in language education generally in our understanding of language learning (or acquisition) processes have provided frameworks for evaluation designs. Evaluations have also served as opportunities for further research into such processes. Second, a large number of language program evaluations carried out in recent decades have been in the context of education development projects. This section explores the impact these particular context factors have had on language program evaluation.

Focus on language learning
The tradition in language program evaluations of addressing issues of language learning theory has affected evaluation design in two ways. First, theory
development in fields such as Second Language Acquisition (SLA) and Communicative Language Teaching (CLT) can establish an agenda and focus for evaluations. Examples of this kind of evaluation are the Canadian bilingual education programs (Swain and Lapkin 1995; 2000), the Bangalore evaluation (Beretta and Davies 1985; see section 2.3 above) and Slimani’s (1992) study of input, uptake and output in language classrooms. Mitchell (1992), writing about the Scottish Bilingual Education Project, notes the separate evaluation and research aspirations of the evaluators.
Quote 4.1 Mitchell on evaluation of and research into language programs
In undertaking the project, however, the evaluators themselves expected not so much to solve one specific policy question, as to provide substantial and detailed accounts of the workings of bilingual education, and of the contextual parameters which appeared to constrain or promote it, which could form a more general background to policy making by others. (1992: 124)
The evaluation task in these language programs was to determine if a specific curricular intervention or set of interventions had an effect on second language acquisition. Beretta (1990) observes that despite the research relevance of such studies, and the research interests and commitment of the evaluators, there is a risk that policy evaluation issues can undermine the research credibility of findings – see section 3.5, p. 46 above for a discussion of political issues which can arise. The notion of policy evaluation has been a particular problem in methods evaluations, and is explored in detail in chapter 7 below. Here the evaluation focus is on target language use in the classroom, determined through comparing the learning outcomes of secondary school students taught by native speaker teachers of English with those in non-native teachers’ classrooms. Second, evaluations can adopt a theoretical approach to data collection and analysis. The Communicative Orientation to Language Teaching (COLT) (Frolich, Spada and Allen 1987) set out a descriptive account of the kinds of interactions which characterise communicative language teaching. This was used for what Long (1984) termed process-product studies to explore relationships between instructional strategies and learning outcomes (for example, Spada 1987). Different studies using similar approaches to documenting classroom interaction were carried out in Scotland (Mitchell et al. 1981; Mitchell 1989; 1992) to inform on aspects of second language learning theory. Mitchell (1989) discusses a range of evaluation studies in the Stirling tradition – evaluations of language programs by Mitchell, and her colleagues Parkinson and Johnstone, which might be characterised as the ‘systematic and qualitative’
study of language classrooms and programs. The aims of these evaluations have a clear descriptive and explanatory element, for example:
To document current instruction practice . . .;
To document the attempts of committed teachers to implement a communicative approach . . .;
To explore the potential of using the target language . . .;
To develop an operationalised model of communicative competence . . .
(Mitchell 1989: 196)
The primary function in identifying the key constructs and collecting data was to document instructional processes, and subsequently to use the data sets to evaluate the program, or to explain phenomena relevant to a research agenda. The Lawrence evaluation of ELT in Zambian schools (reported in Rea-Dickins and Germaine (1992) and Lawrence (1995); see also chapter 5 below) developed a conceptual framework from the main pillars of communicative language teaching which permitted a mapping of the actual curriculum. This evaluation explored the extent to which there was a match between the needs of the learners (in terms of proficiency in English in an English-as-medium-of-instruction context) and ‘the appropriacy of a syllabus in relation to the context in which it is used’. From an examination of the curriculum literature in general education, four continua were used as a conceptual framework for locating ESL teaching between traditional/structural and communicative poles. The four continua are:
1. Perception of language in syllabus: synthetic – analytic
2. Function of language in the classroom: use – usage
3. Nature of language practice activities: cognitive – mechanical
4. Nature of language teaching strategy: inductive – deductive
Adapted from Lawrence (1995)

This framework facilitated the description of the curriculum, using data from teacher questionnaires, teacher interviews and systematic observation in classrooms. It is evident from Lawrence’s own account of teacher perceptions in this curricular context that while there were clear links between the policy (the syllabus set out by the Ministry of Education) and the theory which informed her evaluation strategy, the teachers’ intentions and perceptions of the classroom process were quite different (Lawrence 1995: 84). Lawrence’s evaluation data do not inform on the reasons why the teachers’ intentions seemed out of line with the official syllabus (the evaluation assumed such alignment would exist), and her conclusions note the importance of analysis of such ‘discrepancies’ (1995: 86).
Quote 4.2 Lawrence on the intentions of teachers
The overwhelming emergence of synthetic, usage, cognitive and deductive as the features of teaching behaviour in all stages of the structure lesson confirmed that teachers were ignoring the syllabus requirements for structure to be taught and practised in meaningful contexts using either a situational or communicative approach. They also seemed to be ignoring the syllabus advice to keep rule teaching to a minimum. . . . The point being made here is that the syllabus recommended certain approaches in the structure lesson, and in certain crucial aspects, the teachers’ strategies were not in harmony with the syllabus or with the learners’ needs. (1995: 84)
In recent years, the study of language learning has largely migrated from the domain of evaluation to that of research. We see three principal reasons for this. First, there are fewer sources of funding for large-scale evaluation studies such as the Bangalore, Stirling and Lawrence evaluations. Second, the real world of language teaching has proved too complex to be validly represented by constructs from language learning theory alone. Socio-cultural and ecological perspectives (for example, Lantolf 2000; van Lier 2004) illustrate that cognitive language learning accounts are partial, and curricular perspectives (for example, Graves 1996; Markee 1997; Jacobs 2000) show that new approaches to language teaching need to be understood as innovations before their innate pedagogical potential might be elaborated as an evaluation construct. Third, the field of language teaching has in the post-communicative era seen fewer grand theories which might provide policy options for large-scale implementation, such as characterised the earlier ages of audiolingualism and communicative language teaching. Important curricular strategies, such as the use of Information and Communication Technology and the development of autonomy in language learning, are context-specific and evaluated on a small scale within institutions and programs. Task-based learning (TBL) is an example of a pedagogical strategy which clearly illustrates the migration from evaluation to research. A series of studies of TBL in the 1980s (e.g. Long 1985; Candlin and Murphy 1987; Prabhu 1987; Legutke and Thomas 1991) illustrated the potential of the approach for language classrooms in different contexts. As discussed in section 2.4, p. 23 above in the context of the Bangalore evaluation, curriculum evaluation studies could not explain how tasks facilitated learning, or which features of task-based teaching led to successful learning in particular contexts or with different age groups. Peter Skehan, who played a leading role in TBL research in the 1990s, describes such early investigations as ‘data-free’ and ‘largely speculative’ (1998: 98). Skehan’s exploration of task-based learning has led to a series of studies into different features of TBL to explain the cognitive processes of language learning (e.g. Foster and Skehan 1996; Skehan and Foster 1997). While these research studies are not evaluations – they do
not inform on how TBL programs work as a whole – they provide curricular building blocks which may in the future facilitate efficient language program evaluation designs. This trend in TBL is replicated in other areas. Instructional strategies for teaching grammar in language programs were a key element of the methods evaluations of the 1960s and 1970s. The research agenda here, however, was furthered only in a limited way by evaluations which compared methods which addressed grammar teaching in different ways (Beretta 1992). More recently, aspects of grammar teaching such as the use of dictogloss techniques (Storch 1998; 2001; Swain and Lapkin 1998; Fortune and Thorp 2001), focus on form/focus on forms (Long 1991; Ellis 2003), grammar input strategies (Van Patten and Cadierno 1993; DeKeyser 1997) and the role of formative feedback in extending use of language forms (Rea-Dickins 2001) have been addressed in focused research studies. These examine particular aspects of the classrooms and programs in which the investigations are carried out, and while they do not afford overall judgements of classrooms and programs, they may, as in the case of TBL, generate frameworks which in the future can be used for evaluation purposes.

Language program evaluation and educational development projects1
Approaches to the evaluation of language education programs in recent decades have emphasised accountability and development. Such evaluations should demonstrate to the sponsor and other stakeholders that resources have been properly and appropriately used, and can also contribute to the development of the program through improved decision-making, policies and practice. One context where there has been important development of language program evaluation theory and practice is in English as a foreign or second language in development project contexts, such as those funded by the UK Department for International Development (DfID) (formerly Overseas Development Administration/Ministry – ODA/ODM). These evaluations typically have had important accountability and development remits: they use public funds, and there are issues of extending or curtailing projects on the one hand, and opportunities to identify areas for program improvement on the other. These evaluations represent two important dimensions of international, intercultural evaluations. First, they highlight context, often as the focus of program development. A problem here has been narrative accounts of program activities which do not engage with a cumulative body of theory. As Pawson (2002) states, learning from past mistakes is not easily aligned to current problem-solving (see Quote 3.4 above). Bowers (1983), Swales (1989) and Mackay (1994) label
1
In this section the term ‘project’ is used. This is the term used in the UK and other contexts, and corresponds to ‘program’ in US usage.
such evaluations as War Stories and Romances to represent their descriptive approach. Second, these program evaluations have been carried out in the education systems of countries other than that of the sponsors and evaluators: programs which are part of aid projects have evaluations typically commissioned by the DfID and the British Council, and carried out by UK evaluators. Evaluations often have to be carried out in a manner which looks at the English language program, but not at the educational policy context or management culture within which it is implemented. One effect of this distance between program evaluation and educational policy has been to concentrate on ‘neutral’ theoretical issues, and issues of practice, without engaging with issues of educational policy and values (Tomlinson 1990; Coleman 1992; Alderson 1992). The evaluation of developmental aid education projects funded by British institutions (ODA/DfID; British Council) follows a standardised project design and management matrix, known as a logical framework or log frame (Alderson 1992; Weir and Roberts 1994). A similar methodology is used for the evaluation of projects sponsored by AUSAID, the Australian international development agency.
Concept 4.1 Logical framework as approach to project evaluation

Rows (project structure): Wider objectives; Immediate objectives; Outputs; Inputs.
Columns for each row: Indicators of achievement; How indicators can be quantified or assessed; Important assumptions.
The focus here on accountability and quantification of activity and benefits often has the effect of orienting evaluation away from development and the concerns of practitioners in the context. As discussed in section 3.4, p. 44 above, political factors inside and outside the program may determine the focus, implementation and use of the evaluation. Evaluations based on log frames often tend towards the bureaucratic (MacDonald 1976). In addition, the limited resources available for evaluation, and the use of JIJOE (Jet In Jet Out Expert) evaluators mean that evaluations produce at best a partial account of the factors which contributed to the success (or lack of success) of the program (Alderson and Beretta 1992). The wider objectives, what Nuttall (1991)
labels strategic objectives, are much more difficult to evaluate than immediate objectives, a factor which may lead to evaluations providing a skewed or incomplete view of program success.
Concept 4.2 Wider and immediate objectives in language project evaluation

Examples of wider objectives
To raise the level of proficiency in English so that English can be effectively used as a tool of national development in the fields of agriculture, tourism and health;
To improve the level of skills in English language of those who have left school and need English for further training or employment, including future teachers of English language.

Examples of immediate objectives
(i) Tactics, for example: To improve the competence of English teachers at school and university level, of teacher trainers and of secondary school students;
(ii) Support, for example: To train the trainers providing INSET;
(iii) Material, for example: To devise materials for teaching and learning.

The immediate objectives are used to generate very specific indicators of achievement, which in turn become the targets which shape the management of the project, and determine what is focused on in the evaluation. Nuttall comments: ‘It must be concluded that by and large, the attainment of strategic objectives is not evaluated, but only the objectives at the level of tactics, support and materials. How might such an evaluation be carried out? Indicators could be specified, but in many cases the evidence would have to be collected several years later to establish fairly whether those who studied English had indeed benefited at work, in science and education, and in other ways. It is a still further step to examine the effectiveness of ELT as a tool for national development, or as the language of manpower development in education training and the market place. If it is effective, one would expect to see a correlation between ELT aid and social and economic development. Social development is multi-faceted and appropriate indicators would have to be developed. Economic development is more regularly measured. . . . It must be stressed however, that while the absence of any correlation between ELT aid investment and economic development would suggest no link between them, a positive correlation does not prove that they are causally linked, only that they are associated, perhaps as a result of a third factor, or another resource of educational aid.’ (1991: 4; underlining in original)
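To make the logframe structure concrete, the following is a minimal, hypothetical sketch only: the entries are invented for illustration and loosely echo the examples of wider and immediate objectives above, not the documentation of any project discussed in this book.

```python
# Illustrative logframe rows (hypothetical entries), following Concept 4.1:
# each row of the project structure is paired with indicators, means of
# assessment and assumptions.
logframe = [
    {"level": "Wider objectives",
     "statement": "English used effectively as a tool of national development",
     "indicators": ["graduate employability in English-medium sectors"],
     "assessed_by": "tracer studies carried out several years after the project",
     "assumptions": "economic conditions allow graduates to use their English at work"},
    {"level": "Immediate objectives",
     "statement": "improve the competence of English teachers and teacher trainers",
     "indicators": ["proportion of teachers reaching a target proficiency band"],
     "assessed_by": "proficiency tests and classroom observation",
     "assumptions": "teachers are released for INSET"},
    {"level": "Outputs",
     "statement": "INSET courses run and teaching materials produced",
     "indicators": ["number of courses delivered", "materials in use in schools"],
     "assessed_by": "project records and school visits",
     "assumptions": "materials reach the schools"},
    {"level": "Inputs",
     "statement": "consultants, training budget, materials development time",
     "indicators": ["person-months delivered", "budget spent as planned"],
     "assessed_by": "project accounts",
     "assumptions": "funds are released on schedule"},
]

for row in logframe:
    print(f"{row['level']}: {row['statement']} -> {', '.join(row['indicators'])}")
```

As Nuttall's observation above suggests, it is the indicators attached to the immediate objectives, outputs and inputs that are easiest to specify and verify, and which therefore tend to dominate the evaluation.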
To enhance the development remit of these typically accountability-oriented project evaluations, Alderson (1992) identifies two strategies: initiate evaluation activity at an early stage in the development of a project or programme; and specify an on-going role for an evaluation consultant. Alderson and Scott (1992) set out a model of participatory evaluation, in an overall strategy which reflects the utilisation approach of Patton.
Concept 4.3 Alderson and Scott’s three axes of participatory evaluation

Sharing in three ways:
(i) Planning: sharing the decisional planning roles amongst all involved.
(ii) Implementation: getting involved rather than standing apart, doing a fair share of the donkey work.
(iii) Benefits: gaining benefit from the work carried out.
(1992: 38)
Two evaluation initiatives in the 1990s address in different ways the issues and challenges implicit in the positions of Nuttall and Alderson above: the evaluation of the English language program in a group of Indonesian universities, led by Ronald Mackay; and the Project Development Support Scheme (PRODESS), a British Council scheme to support through evaluation a series of English language education projects in Eastern and Central Europe in the context of the political changes following the collapse of the Soviet Union. In both cases the overall aim was to establish enduring project benefits and to maximise participation through the development of evaluation skills. In the Indonesian project the focus was on the development of a model of evaluation for quality management, while in PRODESS, a key objective was the provision of training in evaluation for project personnel, so that all evaluations would be characterised by a participatory approach. Mackay (1994) and Mackay et al. (1995) set out an approach for the evaluation of programs in language centres in Indonesian universities which dovetailed with routine management tasks within the project context. In this model, evaluation involves five key activities:
1. the identification of key features of quality for the particular program;
2. the identification of performance indicators which represent these;
3. iterative scrutiny of the program to build a management information database on the different program processes;
4. team level engagement with the findings to discuss implications for program improvement; and
5. setting priorities for action to improve the program.
Mackay’s framework for what he labels ‘intrinsically-motivated program review’ (1994: 143) is based on identified units or components and performance indicators for the key areas under review. The goal of evaluation activity is the development of a database, a systematic account of activity, for each program, and of processes to use this to enhance quality. The project proposes no specific curricular or instructional strategy: rather the focus is one of documenting what is happening and, in a notion of improvement which echoes Stenhousian and constructivist perspectives (see sections 2.6, p. 30 and 3.3, p. 40 above), using these data to develop practice.
Concept 4.4 Mackay’s intrinsically-motivated program review

Total review: the entire project
Review of project components: language centre 1 . . . language centre n
↓
Possible focus within each component:
1. Programmes
2. Staff
3. Institution
4. Resources
5. Finances

Key areas for each focus, e.g. programs:
• Complete program documentation
• Appropriate placement, progress and final tests
• Student counselling team
• Record of student attendance
• Course materials based on adequate needs analysis
• Teacher evaluation of courses
• Student evaluation of courses

Appropriate credible data collected on each performance indicator

Interpretation of levels of performance for each key area:
• Major strengths – a very good performance
• Strengths outweigh any weaknesses – some improvement desirable
• Strengths outweighed by weakness – significant improvement needed
• Major weakness – an unsatisfactory performance

(Mackay 1994: 147)
The notion of database in Mackay’s approach reflects a number of strands in current evaluation practice. Patton (1997) sees utilisation-focused evaluations as contributing to and developing from the Management Information System (MIS) of the program (see chapter 15 below for a detailed exploration of program evaluation and MIS). Tribble (2000) sees baseline assessments and needs analyses at early stages of projects as establishing a database which can guide activity and contribute to evaluations at later stages. Evaluations carried out for quality management purposes (see chapters 9 and 10 below) have an increasing bureaucratic dimension. To meet both accountability and program development functions, a database needs to be established to illustrate the quality of the program to visiting inspectors, and to permit identification of strengths and weaknesses by program personnel. The Project Development Support Scheme (PRODESS) was set up by the British Council and the Evaluation Unit at Thames Valley University led by Pauline Rea-Dickins, partly as a response to the perceived weaknesses of external JIJOE-type evaluations, and partly as a response to similar evaluation and development needs in a series of English language education projects in countries in Eastern and Central Europe in the 1990s.
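A minimal sketch may help to show what such a database implies in practice. The indicator names, ratings and cut-points below are invented for illustration and are not taken from Mackay's project; they simply show how indicator-level judgements for one component might be aggregated into the four-level interpretation in Concept 4.4 and turned into priorities for action.

```python
from statistics import mean

# Hypothetical indicator ratings for one language centre's 'programs' component,
# scored 1 (major weakness) to 4 (major strength) by the review team.
ratings = {
    "Complete program documentation": 4,
    "Appropriate placement, progress and final tests": 3,
    "Record of student attendance": 2,
    "Course materials based on adequate needs analysis": 3,
    "Teacher evaluation of courses": 4,
    "Student evaluation of courses": 3,
}

def interpret(avg: float) -> str:
    # Levels echo Concept 4.4; the numeric cut-points are illustrative only.
    if avg >= 3.5:
        return "Major strengths - a very good performance"
    if avg >= 2.5:
        return "Strengths outweigh any weaknesses - some improvement desirable"
    if avg >= 1.5:
        return "Strengths outweighed by weaknesses - significant improvement needed"
    return "Major weakness - an unsatisfactory performance"

avg = mean(ratings.values())
print(f"Average rating for 'programs': {avg:.2f}")
print(interpret(avg))

# Indicators rated low become the priorities for action (activity 5 above).
priorities = [name for name, value in ratings.items() if value <= 2]
print("Priorities for action:", priorities)
```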
Quote 4.3
Potter and Rea-Dickins on the aims of PRODESS
The Project Development Support Scheme (PRODESS) was set up as a consequence of feedback received through project networking seminars held during the first year of the Council’s English Language Teaching projects in Eastern and Central Europe (ECE). The Scheme’s main aim has been to promote the uptake of developmental evaluation within ELT evaluation activity, seen as a prerequisite for quality project developments, and more specifically, to realise more efficiently and sustainably the objectives of projects through professional development of all their staffs. (1994: 2)
The projects in the region focused on development of English language teacher training and development, English for Specific Purposes and English as a second/foreign language curriculum design. Each project was in control of its evaluation resources, which it could use to initiate activity and to get advice from PRODESS. Concept Box 4.5 (from an evaluation consultancy report to one project) illustrates the nature of this support. It set out the evaluation tasks which might be undertaken, and for which training might be provided. In addition there is a focus on evaluation use: the last four points emphasise dissemination and action. In addition to consultancy to individual projects, PRODESS developed an extensive networking infrastructure. This included two colloquia (Kiely et al. 1993; 1995a), a series of ten newsletters 1994–98; and a set of Evaluation Guidelines (Kiely et al. 1995b).
Concept 4.5 Initial list of project evaluation tasks and decisions from PRODESS

• deciding on who is to provide the evaluation information;
• deciding on who is to manage the evaluation process;
• deciding on who is to collect data;
• deciding on how the evaluation data will be used, i.e. who will have access to the information? How will it be reported? etc.;
• identifying or developing appropriate evaluation instruments;
• agreeing on appropriate timescales;
• collection of data;
• analysis of data;
• interpreting the data;
• taking decisions;
• acting on recommendations and action plans;
• presenting the evaluation findings;
• evaluating the evaluation.
(from Rea-Dickins 1993)
A core idea of PRODESS was to facilitate internal development of projects through analysis of need and capacity, and awareness of developments in similar project contexts. This has continued in the form of the English Language Teaching Contacts Scheme (ELTECS), a network of professional expertise and resources, which though sponsored and supported by the British Council, operates in a very different way from the JIJOE approach characterised by Alderson (1992). ELTECS represents a realisation of the concepts underpinning democratic, developmental and utilisation-focused evaluation, as well as the degree to which evaluation relates to all aspects of professional and project management. In many of the project contexts supported initially by PRODESS, evaluation activity has continued to play a leading role in resource development and professional learning. Bardi et al. (1999) illustrates this impact: it is an evaluation of PROSPER, an ESP project based in Romanian universities and carried out by project personnel after six years of project activity. The study (which has four editors and twenty authors) explores not only the impact of immediate objectives (see Concept 4.2 above), such as professional skills development, materials and tests, but also the wider objectives such as graduate employability, and impact on local and regional professional culture and identity. This section has examined developments in two areas of language program development – language learning and the evaluation of language programs within an educational aid project framework. The story is one of retreat and advance: while the study of particular curricular strategies and language
learning processes is now located in the domain of Applied Linguistics research rather than language program evaluation, project evaluation has become established as internal program and curriculum development, in a way which reflects many of the strands in program evaluation discussed in this chapter. There is room for further advance in our understanding of language education programs in the context of international development. While the Mackay and PRODESS approaches emphasise practices within project contexts as the bases for evaluation constructs, they do not necessarily provide for critical engagement with the wider discourses which generate such projects. Holliday (1992; 1998) uses soft systems methodology and critical discourse analysis perspectives to explore the complexities of both such projects and notions of success.
Quote 4.4 Holliday on international development projects and the role of English

Because of the nature of English as a commodity within the international world of work, and as a symbol of educational status in many parts of the world, the aspirations of the wider community will also come into the picture. Hence the uncertainties and issues surrounding English language education may be more internationally, politically and institutionally critical than in some other professional areas. Post-colonial and postmodern discussions of various types of imperialisms that sustain themselves after the decline of empire are often therefore centred around the role of English, not only as national language policy, but on its influence on how education generally should be administered, from classroom to curriculum. The question of managing evaluation and innovation is thus tied up with the whole role and status of English language education and the melange of relationships within which these roles and statuses are derived. (1998: 202–3)
In this section we have documented the development of evaluation as an opportunity for a sustained process of sense-making, involving all program stakeholders and attending to the quality of professional and management process as well as to the achievement of objectives and targets. We see further development in this area as engagement with the themes highlighted by Holliday, and also by constructivist evaluators (such as Yvonna Lincoln; see section 3.3, p. 40 above). Language program evaluations in their sense-making function are likely to benefit from these wider perspectives, addressing issues of power and dominance as well as practice in classrooms and institutions, and the emerging roles of all languages in the project context as well as the language which is the particular focus of the project.
4.4 Language evaluation outside language teaching programmes

In this volume we use the term language program to represent programs where languages are taught and teachers trained or educated. Section 4.3 illustrates the range of issues and potential lines of enquiry which might be addressed in the context of an evaluation. Another set of issues relates to language use in fields which are not language programs. Advances in discourse analysis suggest two areas where evaluative perspectives are developed in relation to communication skills and use of language, even though the learning in such programs is not explicitly set out in terms of language skills:

1. In the assessment of communication skills, detailed criteria will include linguistic constructs. Discourse analysis – for example, Halliday’s (1974) functional grammar and Boyle’s (1996) application of this framework – now enables us to describe and identify systematically the linguistic elements which contribute to effective communication. There is some evidence of use of discourse approaches in the assessment of professional competence in health care (for example, Candlin 2004; Sarangi and Roberts 2002). As communication skills become more widely established as curricular areas in all professional education, and oral presentations are used to assess learning, we expect increased attention to linguistic constructs in assessment, and in wider evaluation of such programs.

2. As indicated in the preceding section, program evaluations often address issues of management and innovation. These issues themselves constitute discourses (Fairclough 1995; Holliday 1998), and can be the focus of, or the data for, evaluations. Thus, the notion of evaluation as sense-making within programs can develop an applied linguistics dimension. The way stakeholders and informants frame notions of change and development in their discourse can be revealing, if examined from a linguistic ethnography perspective or in a critical discourse analysis (CDA) framework (Holliday 2003).

These roles for applied linguistics perspectives in program evaluation are areas for further development. They have the potential to reveal the significance of patterns of language use, and address their functional complexity. The attention to issues of language here parallels our understanding of evaluation as set out in this chapter: from initial positions of evaluation as the measurement of predicted program outcomes, and language as straightforward representation of meanings and positions, each has matured into a complex account of social life and human experience. The case studies in Part 2 address in detail some of this complexity, and the practice and research chapters in Part 3 provide direction for further exploration of this enterprise.
4.5 Summary

In this chapter we have outlined developments in language program evaluation which correspond to developments in other fields, and which are driven by particular features and contexts of language programs. In the latter discussion we have shown a redrawing of the boundaries between research and evaluation in language programs, and the emerging focus on evaluation as a platform for internal sense-making and development in language programs which are part of development projects. Finally, we have identified a new dimension of language program evaluation: the task of understanding language use as a quality indicator in non-language programs where communication and interaction are key program activities. Part 2 sets out seven evaluation case studies where the themes and issues introduced in the historical overview in Part 1 are revisited, and explored in context and in greater detail.
Part 2 Cases and Issues
Introduction
In Part 2, we present seven evaluation case studies which we analyse with reference to the concepts, frameworks and issues introduced in Part 1. We also consider the tensions that arise in the practice of evaluation. These case studies do not all follow the same format. Each illuminates different facets of the evaluation process and provides an opportunity to problematise the design and implementation of evaluations and to suggest ways in which readers might undertake evaluative inquiries of their own, which is our focus in Part 3. The case in chapter 5 is an evaluation of teachers’ English language teaching skills and professional development needs in a policy context where the medium of instruction has recently been changed to English. The evaluation examines the development of scales and bands, and the role of classroom observation data in using these. Chapter 6 looks at an evaluation of a Europe-wide initiative to integrate the teaching of science and English as a foreign or additional language. Information and communication technology has a role in this evaluation, both as a means of supporting learning and teaching, and as a means of managing data collection in this dispersed, multi-site program. Chapters 7 and 8 recount evaluations of pilot programs in the development of language education policy. Chapter 7 examines an evaluation of a program in Hong Kong to deploy native speakers in secondary schools to improve English skills in the workforce generally and to enhance the learning of English at secondary level. A key feature of this evaluation is the mix of methods: a classic experimental design is complemented by a classroom ethnography component. Chapter 8 examines the evaluation of the Primary Modern Languages Project in Ireland. This case study is an example of a particular implementation of a wider policy – the Council of Europe three-languages policy – and of an evaluation which combines a broad survey approach with case studies of selected schools. Chapters 9 and 10 examine the practice of evaluation in the university sector. Chapter 9 sets out an account of the development of an evaluation
policy in a British university, and the implementation of this in the context of English for Academic Purposes programs. Chapter 10 examines a specific evaluation, using data from two levels of enquiry: the evaluation data; and the data from an ethnographic study of the evaluation. The analysis illustrates the ways in which evaluation for teacher and students constitutes an opportunity to enhance the program, but also the ways in which processes and interaction may be skewed to the disadvantage of some students. Chapter 11 looks at the evaluation of assessment frameworks which are becoming increasingly visible as a means to evaluate language proficiency, especially in the context of assessing learners with English as an additional/second language. Chapter 12 takes stakes, stakeholding and stakeholders as its focus, and uses data from different evaluation studies and position papers to examine the constructs here. These case studies reflect the major themes in our field: evaluation for accountability and development; internal and external evaluation, evaluation for learning within programs, evaluation as sense-making, and evaluation as evidence-based policy development. They illustrate the themes and concepts surveyed in Part 1, not in any particularly organised manner, but in the way in which evaluations engage with the complex realities of programs in action. Readers will see in these accounts some resonances with their own practice and will identify opportunities for new designs and approaches, and also aspects of evaluation to research further. These themes are picked up in Part 3.
5 Evaluating Teachers’ English Language Competence
5.1 Introduction

The first case study presents an evaluation with baseline and survey characteristics which was designed to inform on a planned-for nation-wide innovation and involved the evaluation of Basic Education teachers’ levels of English language competence. In the Logframe (Concept 4.1 and 4.2, pp. 64 and 65 above) for the development project, of which this evaluation formed a part, this was specified as follows:

Table 5.1 Logframe specification for the evaluation
Output 6: A strategy to improve teachers’ ability to operate effectively in English.
Activity 2: Provide short-term consultants to support the design and conduct of research into (a) English language use and the proficiency in English of teachers and student teachers; and (b) Basic Education teachers’ and principals’ perceptions of English language use in their classrooms.
Our first example thus did not involve the evaluation of a specific program; rather, the evaluation project was to be used as a means to gather data for national strategy development. An important focus of this chapter is the way in which the construct of English language proficiency for instructional purposes within English-medium classroom contexts was operationalised through the design of valid evaluation procedures, and by the use of an approach that extended beyond the administration of so-called ‘easier’-to-implement English language tests.
5.2 Context

This evaluation was undertaken in a complex, multilingual situation in which English was the first language for an extremely small minority of the population in a country where English, with the status of an international
language, had been chosen as the lingua franca for this relatively recently independent nation. The study was initiated in order to evaluate teachers’ English language proficiency in a country where English is used as an Additional/Second Language (EAL/ESL) in schools. Education in the early years (Grades 1–3) is primarily through the medium of a heritage language. Thereafter, in Grade 4–12, English is the medium of instruction. The teachers here include both teachers of English as a curriculum subject in its own right and teachers who use English across the full range of school subjects as the medium of instruction. As in many countries – e.g. Bangladesh, Tanzania (see chapter 13), Hong Kong (see chapter 7) and England – concerns are frequently expressed by a range of groups (e.g. government, employers, parents, teachers) about school learners’ standards of English and levels of literacy. In addition, in the context described here, as a consequence of a colonial legacy, there were particular concerns expressed over the teachers’ English language proficiency levels, with specific reference to their teaching of school subjects through English. Whilst there was a prevailing view about the inadequacy of teachers’ English, the imperative was to capture ‘hard’ data and to provide an objective account of teachers’ English language proficiency nationwide.
5.3 Scope and aims of the evaluation

The evaluation was thus framed in response to a perceived dilemma and need for data in an area of national priority – as a guide for action. The evaluation findings were to feed into decision-making for a strategy to improve teachers’ ability to operate effectively in English. The specific purposes for this evaluation study, reflected in the TOR (Terms of Reference) for the project, were expressed as follows:
Data 5.1
Evaluation terms of reference
Assess the current overall situation as regards the use and/or role of English within the country’s basic education and teacher training cycles through discussions with key players – Ministry of Education staff, university (e.g. departments of education and English) and teacher training college staff, distance education bodies, non-governmental organisations working in schools to support school and teacher development, and members of English language teacher development and support teams.

Assess the use of English in the school classroom and teacher training institutions, through visits covering all three phases of basic education (lower and upper primary and lower secondary).

Evaluate the attitudinal aspects of the evaluation through discussions with school principals, heads of department and other senior personnel.
Propose and design the range of procedures to satisfy both the language and attitudinal requirements of the evaluation study.

Propose the size of the evaluation sample(s) and the distribution for the specific evaluation instruments and devise a schedule for:
• completion of the instruments, including the piloting phase;
• the implementation of the main study;
• the post-implementation stages, including the analysis and presentation of data/findings.

Identify the resources and time needed (including visits, support services, logistics etc.) to carry out the evaluation implementation plan, as indicated above.
(Evaluation tender document.)
It is relevant to note here, in the same way as the Science Across Europe evaluation which we report in chapter 6, that this evaluation was subject to budgetary constraints. In this connection, it is interesting to reflect on how little is routinely spent on evaluation activities relative to the total costs of some educational programs: a curriculum innovation or project costing several millions of pounds allocates a minuscule proportion for any evaluation of its development and/or impact. As we have argued in Part 1, evaluation does not function primarily as a gatekeeping artefact focused on whether learning outcomes have been ‘delivered’. Importantly, evaluation may also be viewed as a learning ‘tool’ – for example, learning about a curriculum; learning for teachers and other professionals involved; learning about theoretical constructs in applied contexts. In these respects, evaluations should receive greater priority. With this case study, these constraints were stated by the commissioners of the research at the outset of their discussions with the evaluation consultants, in the following manner:
Data 5.2
Evaluation constraints
We recognise that a full and thorough research investigation into the language competence issue could easily take more time, funds and person hours than are available. . . . two points on this: (a) the problem is already well appreciated, so while the purpose of the evaluation is to arrive at a more accurate and precise analysis of the situation, we must ensure that we draw on substantial existing knowledge of the problems during the investigation; and (b) we shall have to accept a manageable quantitative work load in the evaluation, with perhaps a small number of fuller case-study investigations which can elicit more qualitative data. (Evaluation project correspondence, October 1998)
In summary, then, as established by its terms of reference, this study set out to evaluate on a national basis:

Table 5.2 Objectives of the evaluation
(a) the English language use and the proficiency in English of teachers and student teachers; and
(b) Basic Education teachers’ and principals’ perceptions of English language use in their schools.
(Evaluation tender document)
5.4 Evaluation planning

Where do you begin? What are the starting points and priorities? As we saw in Part 1, different guidelines and frameworks exist for the development of coherent evaluation studies (on English language programme evaluation, see, for example, Rea-Dickins and Germaine 1992; Kiely et al. 1995; Jacobs 2000); indeed, some of these were used in the present study. In Table 5.3, we present some of the specific key issues and focal points that

Table 5.3 Exemplar focal points for evaluation decision-making
1. The nature of the constructs underlying the evaluation: for example, how do you define general English language proficiency (see section 5.5)? Is there a case to differentiate this from a teacher’s use of English in classroom contexts?
2. Taking account of the ‘real’ time constraints of the project (see Data 5.2): what is the role of piloting in the overall evaluation design, i.e. in relation to the time to be set aside for the main study and data collection phase? How do you integrate the piloting and development of the evaluation procedures without compromising their quality?
3. An interesting feature of this evaluation was the respective roles and the nature of participation/collaboration between the evaluation consultants on the one hand, and the in-country evaluation team on the other. Although these latter comprised senior teachers or other qualified ELT staff, they all required induction into the evaluation and its multifaceted processes. By the same token, the evaluation consultants, although experienced in the country in question, needed further induction into the specific context and requirements of this evaluation study. Thus, a key preliminary issue revolved around the form that the induction and evaluation training would take, with particular but not exclusive reference to data collection in the main study and the need to achieve consistency in the implementation of the different procedures.
4. Sampling issues: in a large country divided into seven distinct educational zones, with significant diversity across schools that extends beyond the conventional divide of rural, urban, or peri-urban or pupil numbers on the school roll, to include the nature of resourcing and differential levels of teacher qualifications and expertise; in turn the issue of ‘control’ over appropriate sampling is raised, e.g. who makes the sampling decisions, and to what extent is the planned-for sampling actually feasible?
5. When interviewing informants, such as school teachers and principals, and in situations where there is the possibility of their having a hesitant knowledge/use of English, which language do you use for the interviews? How are issues of translation across languages managed?
needed discussion and thereby informed the decision making at the planning stage of the evaluation. In addition, as with any evaluation undertaking and research more generally, there are ethical issues that need to be addressed (see also Part 3). The kinds of questions and issues that are likely to surface in terms of assuring the ethical integrity of an evaluation are shown in Table 5.4.

Table 5.4 Sample ethical questions
• Consent: how do you gain consent: from parents, children, teachers, etc.? What form can and does this take?
• Cultural sensitivity: e.g. the majority of guidelines for ethical practice (e.g. the BAAL Recommendations for Good Practice, BERA, ILTA Guidelines, American Standards; see Part 4) are drawn up by international associations based in Europe or North America. To what extent, then, are these appropriate and acceptable more widely in different parts of the world? (See chapter 13.)
• Information: what or how much do you ‘tell’ those involved in an evaluation, especially the key informant groups (e.g. teachers), about why the evaluation is being undertaken? What are the terms of reference? For example, if an evaluation includes classroom observation, as in this case study, how explicit are you about what specifically you are looking for during your classroom visits?
• Use of data: for example, should you find examples of bad and/or unethical practice, what do you do with this information? If asked, do you ‘tell’ on individuals? Are there grounds for suppressing data?
• Analysis and data interpretation: for example, if data are provided in confidence, to what extent can various stakeholder groups be involved in developing interpretations of the data (a key issue in so-called large participatory evaluations; see chapter 13)?
With the rising tide of governance in the ethical arena, this is an important area in which practitioners need to be aware and fully informed and to which we return in Parts 3 and 4.
5.5 Articulating constructs and procedures

Overview
In order to develop appropriate evaluation procedures based on sound constructs appropriate to the professional domain concerned, a visit by one of the consultant evaluators to the country in question was undertaken. It was agreed by both the commissioners of the evaluation and the evaluators that during the first visit the consultant would:
Data 5.3
Specification of activities for evaluation visit
1.1 Review the current overall situation as regards the use/role of English in the basic education and teacher-training cycles through discussions with key players: Ministry officials, university staff (faculty of education, centre for extra mural studies), other distance education bodies, colleges of education staff, NGOs, teachers, school principals and inspectors, the project team.
Data 5.3
(Continued)
1.2 Observe the use of English in the classroom/training setting through visits to schools in (at least) three regions covering all three phases of Basic Education (lower and upper primary and junior secondary), and to (at least) two colleges of education in two different regions.
1.3 Make recommendations for the measurement of the attitudinal parts of the research through discussions with school principals, heads of department, teachers and other relevant personnel at a select number of schools.
1.4 Agree the range of evaluation instruments to be developed and the size and distribution of samples for these different instruments.
1.5 Draw up pilot evaluation instruments to satisfy both the language and attitudinal requirements of Output 6 as defined in the Logframe, through appropriate consultation with ELT colleagues/Ministry/colleges of education/school personnel and the project team, and forward these for trialling to the country team within three weeks of completion of the Phase 1 visit.
1.6 Devise and agree a schedule for (a) the completion of instruments; (b) the implementation of the full survey (including identification of any training needs); and (c) the post-implementation stages, including the analysis and presentation of data/findings.
1.7 Agree with the project director, Ministry and the funding agency the resources and time needed (including the Visit 2 program, support services, logistics, etc.) to carry out the implementation plan as agreed in 1.6.
1.8 Agree with all relevant parties the TOR for the second consultancy visit and subsequent stages to complete the research program successfully.
(Report on the Phase 1 Visit 1–2)
Potential tensions As demonstrated through the tight specification of evaluation activities for Visit 1, and recalling the different affiliation and advocacy tensions evaluators might face (MacDonald 1976; Stake 1995), the ‘required’ accountability of the evaluation team to the commissioners of this evaluation was evident at every level of the evaluation ‘partnership’. This also comes across with reference to the constraints identified in Data 5.2 above, where from the outset, the budget allocation was always visible and of significant concern throughout the evaluation design phase, e.g. how many days could be ‘afforded’ for the planning and piloting of the evaluation procedures? The near-inevitability of compromises was apparent. At the same time, the commissioners were ‘buying in’ the specialist expertise. They expected that the evaluation would be strong in terms of its theoretical underpinnings and fit to the relevant Applied Linguistics constructs, such as the assessment of language proficiency, thereby invoking MacDonald’s notion of ‘loyalties’ to the discipline. From the outset budget considerations alongside the desiderata for ‘best evaluation practice’ were constantly in the balance. More generally, the ‘freedom’ for action at all stages of an evaluation is, typically, a tension that prevails in many evaluation contexts – far more so
than in researcher-driven projects and ones that are funded by research councils – and it is one that also exists within the context of many government-funded research projects where accountability is strictly to those who commission the research and who, in turn, may insist on certain design features or impose specific methodological applications.

Evaluation procedures

How do you decide on what constitutes appropriate procedures for a given evaluation? Three different data elicitation procedures were agreed for this study. First, the focus on teachers’ levels of language proficiency in relation to English language use in their teaching of all curriculum subjects, including English as a subject, implied the need to articulate in detail the nature of language proficiency, not only in general terms but also with specific reference to classroom language use (see Table 5.2, evaluation objective (a)). It was agreed that a formal language proficiency test, relatively practicable and ‘efficient’ in several respects, had the potential to yield data about levels of teachers’ general English language competence. However, importantly, it would not capture the dynamics of teachers’ classroom English language performance. Thus, data capture was achieved not solely through the administration of language tests, in response to the requirement to measure teachers’ general proficiency in English, but also through an assessment of teachers’ uses of English in real class time through classroom observation. For these reasons, the development and application of classroom observation bandscales became a prominent evaluation procedure. Second, questionnaires were developed to capture perceptions about standards of English and English language use by teachers (see Table 5.2, evaluation objective (b)). Whilst interviews are a strong contender for gathering attitudinal data, they were precluded in this study on account of the large-scale survey nature of this evaluation, which was to be implemented across all the education zones in this vast country. Finally, for purposes of later analysis, a ‘background details’ questionnaire was devised to capture key characteristics of the informant groups as well as the professional experiences of these different groups. In summary, the following data collection procedures were developed to meet the aims of the evaluation.

Table 5.5 Overview of evaluation procedures
1. Reading and English Usage Tests
(i) English usage: grammar and vocabulary;
(ii) graded reading tests.
2. Perceptions Questionnaire for:
(i) school principals, deputies and heads of department;
(ii) senior managerial staff in the education service (i.e. regional officials, colleges of education staff, inspectors, advisers);
(iii) teachers and student teachers.
Table 5.5
(Continued)
3. Classroom Observation Bandscales for:
(i) teachers of English as a subject
– language bandscales;
– classroom language use bandscales;
(ii) teachers of subjects other than English
– language bandscales;
– classroom language use bandscales.
In addition:
4. Background Details Questionnaire for:
– all informants.
Piloting

How relevant is piloting as a distinct phase within an evaluation study? In research, it is important to pilot and validate data collection procedures and to be able to provide evidence of the validity of any procedures used. The same requirement applies to procedures used in evaluation studies. There is, however, a lot less written about this aspect of assuring quality in evaluation studies (but see Saville and Hawkey 2004). Much of the time of the first visit of one of the evaluators to the country and development project in question was therefore devoted to trialling, discussing and revising the survey procedures identified in Table 5.5. It was also important to pilot the Background Details questionnaire to ensure that the ‘tick boxes’ were valid. For example:
Data 5.4
Sample background data question
3. What is your mother tongue? (This is the first language that you learnt in your parents’ home. You may have two mother tongues if you learnt two languages at the same time in your parents’ home. If this is true, tick two boxes.) (There then followed a list of 13 languages plus a category ‘Other: Please specify’) (from the Teachers’ background questionnaire)
A further example of the need for piloting related to informant qualifications.
Data 5.5
Sample background data question
6. What is your highest academic qualification?
(a) Standard 8/Grade 10
(b) Standard 10/Grade 12
(c) BETD
(d) Other Teacher Training college diploma/certificate (e.g. HED)
(e) B Primary (Education)
(f) BEd
(g) MEd
(h) Currently studying for the BETD at a college of education
(i) Other (Please specify): .....................................................

(from the Teachers’ background questionnaire)
Whilst it might appear that questions such as these are straightforward, our experience working with the types of diversity represented across populations such as this one strongly suggests the need to ensure – through trialling – that all possible answers are featured on this type of informant data questionnaire. The following paragraphs summarise some of the modifications that were made on the basis of the first evaluation visit to the project, highlighting in particular the attempts to develop further the validity of the data collection procedures on the basis of the initial piloting phase, namely the language proficiency tests and the classroom observation bandscales. They are extracted from the report on the first evaluation visit, which formed part of the evaluation procedures’ development and piloting phase.
Data 5.6 Summary of initial piloting in developing the evaluation procedures Reading and English Usage Tests A prototype of these tests was piloted with 28 teachers in nine schools, covering a range of subjects, in lower and upper primary and junior secondary phases, in both urban and rural schools in two diverse regions. These tests were also piloted with 30 student teachers at one college of education. The data arising from these trials have now been analysed and a summary of this analysis is available. On the basis of this analysis, the English Usage test has been revised and the Graded Reading Test further developed, incorporating textual materials gathered in-country. These versions . . . should now be further piloted by the project team both with teachers and student teachers. Perceptions’ Questionnaire The Terms of Reference for the consultancy required the gathering of perceptions on the use of English by teachers in Basic Education. A prototype version of a questionnaire developed for this purpose was discussed/piloted with ten school principals and discussed with eight further informants (Ministry officials and college of education staff). It was also discussed with the project team at the planning meeting on 19 February. The questionnaire was revised during the visit and has now been further revised by the evaluation team as a result of these discussions. The latest version of the questionnaire is attached to this report and will in the next few weeks be piloted further in country by the project team. The questionnaire in its latest version is addressed to three groups: (1) senior management teams in schools; (2) senior management at the level of the
Data 5.6
(Continued)
education service (regional officials, college of education staff, inspectors, advisers); and (3) teachers and student teachers. Bandscales for Classroom Observation Prototype bandscales for measuring teachers’ use of English in the classroom were piloted with 17 teachers in ten schools, covering the range of types of schools in the different education zones. The bandscales were extensively discussed at the meeting of 19 February by the project team. The scales were revised twice in the course of this visit and this revision and discussion has now led to the latest (fourth) version of the Classroom Observation Bandscales, which will be piloted in-country by the project team over the next few weeks. . . . They consist of two main scales. The first of these measures, a teacher’s general English language ability, focusing on key linguistic features of classroom talk, and the second measures her/his ability to use English for teaching purposes in the classroom. This second scale has undergone the most revision and is intended to deliver information on teachers’ classroom English language in a ‘user-friendly’ form, i.e. in a form that should make it easy for decision-makers in the education service to act on it. The piloting of the scales also revealed that it would be difficult to use one scale for both teachers teaching subjects through English and for specialist English language teachers (especially in lower primary schools). Consequently, a separate set of Classroom Observation Bandscales has now been developed for use with English language teachers. It will be piloted alongside the newest version of the scale for all other teachers. (Clegg et al. 1999: 5–6; Report on the Phase 1 Visit)
We have cited from the report at length to underscore the need for and stages of trialling for the validation of evaluation procedures. As emphasised in chapter 4, it is rarely appropriate to rely on ‘ready-made’ evaluation designs, or ‘toolkits’ (Murphy 1996), as elicitation procedures need to demonstrate context sensitivity, specificity and goodness of fit for the given purpose. During the discussions with the project team, test content – inevitably a contentious area – was largely agreed, but it is only through trialling that other significant issues emerge, as we illustrate below with reference to the initial item analysis results for both the grammar and the graded reading tests.
Data 5.7
Data from piloting of the tests
English Usage Test (n = 50)

              All (except 115/18)   Teachers        Student Teachers (except 115/18)
N             47                    29              18
Mean          28.8 (57.6%)          30.9 (61.1%)    25.9 (51.4%)
SD            10.14                 10.17           9.43
Range         7–46                  9–46            7–41
alpha         0.926                 0.930           0.914

N.B. 115 and 18 answered no items
Interpretation
• Difficulty level: perhaps acceptable? Or, should it be a little more difficult?
• SD and range: acceptable?
• Reliability: high and satisfactory
Reading Test (n = 25)

              All             Teachers        Student Teachers
N             51              31              20
Mean          12.8 (51.4%)    11.5 (46.1%)    14.9 (59.6%)
SD            4.41            4.85            2.51
Range         2–22            2–22            11–21
alpha         0.788           0.824           0.408
Interpretation
• Difficulty level: acceptable? The most difficult items should perhaps be easier
• SD and range: acceptable?
• Reliability for a 25-item test
• Problem with the student group: higher mean; narrower range; lower SD; low reliability (see below)
Performance of Teachers and Student Teachers A contradictory discrepancy is observed between teachers (TT) and student teachers (ST) on the Reading and Usage tests: TT did better on the Usage test than ST (X = 30.9 and 25.9 respectively). In particular, there were two ST who scored zero on the Usage Test (as a result of answering no questions), one who answered only 9, and a significant number (8) who could not finish, whereas only a very small number of teachers could not finish the Usage test. This may have been caused by different instructions regarding the test administration. The two tests had separate times specified, but it seems that they were given to the test-takers at the same time (as opposed to the second measure being circulated on completion of the first). Thus, the way the two tests happened to be ordered by invigilators may have determined the test-takers’ order of answering the questions and the amount of time spent on each. One plausible explanation is that many of the students started with the Reading tests and ran out of time, without much time left for the Usage test. The above results suggest that the test could be relatively easy if the examinees are given more time than specified. In particular, the lowest score of the students (11) suggests that the easiest items could be made a little more difficult, provided that a sufficient amount of time is given. However, it is essential to ensure that all test-takers be given an equal amount of time for the test. Otherwise an unequal test condition such as this could lead to low reliability and a narrower range of scores. (Report on the Phase 1 Visit: pp. 5–6)
The bulleted points under the Interpretation headings in Data 5.7 are those highlighted for further discussion within the evaluation team and should thus lead to firm decisions being made about the final versions of the tests and/or any further revisions or piloting. There then follows in the report a brief discussion of what might have contributed to the findings highlighted by the analysis of the pilot test versions. An example, with specific reference to the Reading test is provided in Data 5.7 above. Above we have stressed the imperative to develop appropriate evaluation procedures through systematic trialling and discussions with appropriate stakeholders to assure their quality for the intended purposes. Given the importance of and inherent complexities of using real-time classroom observation as part of an evaluation, we focus further, below, on issues in the development of the classroom observation bandscale descriptors used in this evaluation.
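For readers who wish to see the arithmetic behind figures such as those in Data 5.7, the following is a minimal sketch only: the small matrix of item-level scores is invented purely to illustrate how the descriptive statistics and the alpha reliability coefficient are obtained, and is not drawn from the project's data.

```python
from statistics import mean, stdev, variance

# Invented item-level scores: rows are test-takers, columns are dichotomously
# scored items (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1],
]

totals = [sum(person) for person in scores]
n_items = len(scores[0])

# Descriptive statistics of the kind reported for each group in Data 5.7
print("N =", len(scores))
print("Mean =", round(mean(totals), 2), f"({mean(totals) / n_items:.1%} of maximum)")
print("SD =", round(stdev(totals), 2))
print("Range =", f"{min(totals)}-{max(totals)}")

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
items = list(zip(*scores))                     # transpose to item columns
item_variances = [variance(item) for item in items]
alpha = (n_items / (n_items - 1)) * (1 - sum(item_variances) / variance(totals))
print("alpha =", round(alpha, 3))
```

Run group by group (teachers, student teachers, all), calculations of this kind yield the comparative figures that the evaluation team then interpreted in deciding on test revisions.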
Evaluation of classroom language use The criterion of practicability is very much a pragmatic one and it proved influential in the development of the classroom bandscales in that they needed to be: accessible for observation training purposes; easy to use by members of the evaluation team in schools; and realistic in terms of how much data can be captured in one 30–40-minute lesson. The data also had to be in a form that could be easily coded and analysed statistically. Where, then, do the criteria come from that inform the teacher observation component of an evaluation and, in particular, the individual descriptors for each of the bandscales developed? The requirement to measure teachers’ general language proficiency in some way was operationalised (as seen above) in the form of English usage and Graded Reading tests. However, there was unanimity in the view that data from these tests would not provide a valid account of teachers’ English language use in classroom contexts and that it was therefore necessary to develop bandscales that would inform on both teachers’ English language proficiency in action in their classes and their ability to perform a set of key classroom behaviours through the medium of English. Two bandscales were therefore developed as described below.
(a) The language bandscales The decision was reached to report the results for an individual teacher on a single language scale, the data thus being eminently manageable from an analysis perspective. This overall scale was designed as follows:
Table 5.6 Survey of teachers’ English language use in basic education: global language scale

Classroom observation bandscales: Language scale
A  This teacher has an excellent command of English
B  This teacher has a good command of English
C  This teacher has an adequate command of English
D  This teacher has only a basic command of English
E  This teacher has a very poor command of English
On the basis of observing a lesson, a global rating was thus provided for each teacher observed. Each teacher had also completed a background details questionnaire as well as both the English Usage and Reading tests. However, it was also felt useful for the evaluators, i.e. the team that would undertake the main evaluation study, to have – in addition to the above single language scale – global descriptors in other ‘language’ elements ranked important for effective teaching through English in the areas of grammar, pronunciation, stress and intonation, and vocabulary. The descriptors used in the main evaluation study were as follows: Table 5.7 Survey of teachers’ English language use in basic education: the language scale Classroom observation bandscales: Language scale How accurate is the teacher’s grammar? A
Very few grammatical inaccuracies
B
Generally accurate, but a number of minor errors noticeable
C
Generally fairly accurate, but some basic and consistent errors
D
Frequent grammatical errors
E
Most utterances contain errors
How accurate is the teacher’s pronunciation, stress and intonation? A
Very few errors in pronunciation, stress and intonation
B
Good pronunciation, stress and intonation, with a number of minor errors
C
Generally adequate pronunciation, stress and intonation, but with some basic and consistent errors
D
Frequent consistent errors in pronunciation, stress and intonation, putting a strain on listeners’ comprehension
E
Errors in pronunciation, stress and intonation numerous enough to make the teacher almost unintelligible
How accurate and appropriate is the teacher’s use of vocabulary? A
Vocabulary used accurately and appropriately
B
Occasional inaccuracies and words sometimes used inappropriately
C
Adequate use of words but some inaccuracy and inappropriateness
D
Many words used inaccurately and inappropriately, often causing difficulty in presenting ideas
E
Difficult to understand due to frequent inaccurate and inappropriate use of words
These categories were developed after extensive discussion, trialling and retrialling and were also informed through video and live observation of teachers in different subject classes. Further, the evaluation team, which consisted of members of the project team and senior teachers, underwent extensive training in using the bandscales through a workshop held over several days.
(b) Teachers’ Classroom Language Use Scale
Turning next to the Classroom Language Use Scale, there are a number of points to make. First, as we demonstrated in Part 1, it is highly probable in most evaluation studies that the classroom focus relates to innovative practice in some form that, in turn, is informed by particular views of language learning and teaching. The approach taken in this case study has been informed by a number of other studies in which theoretical constructs/underpinnings are intended to inform innovations in classroom pedagogy. In the case we report on here, the innovation was to be framed around the findings of this evaluation in the form, initially, of a national strategy to improve teachers’ ability to operate effectively in English. Subsequently, presumably, a national accreditation scheme for the upgrading of teachers’ English language proficiency might follow. It was, thus, important to develop a valid observation scale capable of capturing key features of dynamic English language use in instructional contexts.
Second, an analysis of previous studies was undertaken so that a ‘best practice’ model could be elaborated. For example, it will be recalled (section 4.3, p. 59) that the rationale for the Bangalore project was premised on the hypothesis that a task-based approach to grammar instruction can lead to the acquisition of grammatical form, but the evaluation of this theory of language learning was not realised. However, the evaluation, as we saw in section 2.4, p. 23 above, adopted an objectives-driven approach, with an almost exclusive use of tests to determine project progress with no effective classroom observation incorporated at the level of evaluation implementation. A second link can be drawn with the Lawrence evaluation (1995; see also section 4.3, p. 59 above) of grammar teaching in Zambian schools. In that study a systematic observation schedule, developed from the rationale for a communicative approach to teaching grammar, was used. The instrumentation and criteria in the Lawrence study illustrate a tight specificity of the curriculum construct underpinning the evaluation design and inform directly on what precisely to look for when in the classroom teaching context. In our evaluation study, for which the observation of teachers formed a key part, the criterial features for the evaluation of the teachers’ classroom English language use were operationalised with specific reference to their relevance and appropriateness in relation to overall teacher effectiveness in using English as the medium of instruction (see, for example, Coelho 1992; Rueda et al. 1992; Harklau 1994; Pennington 1995). As with the previous Language scale (see above), an overview bandscale descriptor against which teachers received a global grade was devised and agreed, as follows: Table 5.8 Survey of Teachers’ English Language Use in Basic Education: global classroom language use scale Classroom observation bandscales: Classroom language use scale A
This teacher uses English effectively in his/her teaching, encouraging a lot of learner participation and conveying concepts very clearly
B
This teacher uses English fairly effectively in his/her teaching, encouraging good learner participation and conveying concepts clearly
C
This teacher uses English satisfactorily in his/her teaching, encouraging little learner participation and conveying concepts clearly enough
D
This teacher uses English fairly ineffectively in his/her teaching, encouraging little learner participation and conveying concepts unclearly
E
This teacher uses English ineffectively in his/her teaching, encouraging hardly any learner participation and conveying concepts very unclearly
As with the Language scale, and following keen observation and experienced teacher input, there was agreement on which key features of classroom language use should be observed separately and through which, it was anticipated, the observers would arrive at a global grade. The descriptors were centred on five key questions:
Table 5.9 Survey of teachers’ English language use in basic education: classroom language use bandscale descriptors Classroom Observation Bandscales: Classroom Language Use Scale 1. How well does the teacher use English to encourage learners to participate? A
Considerable learner participation: teacher uses a range of techniques to elicit many responses, both long and short. Involves many learners
B
Good learner participation: teacher elicits many responses, but room for improvement in the number of long responses and the number of learners involved
C
Satisfactory learner participation: teacher elicits some responses, more short than long. Several learners participate, but many do not
D
Poor learner participation: teacher elicits occasional responses, mainly short. Most learners do not participate
E
Very poor learner participation: teacher elicits hardly any learner responses
2. How well does the teacher adjust his/her English to the level of learners’ English? A
Teacher adjusts own language well to the language level of all learners
B
Teacher normally adjusts own language to the language level of most learners; occasionally pitched inappropriately
C
Teacher’s language level about right for the class, though sometimes inappropriate. Sometimes not attuned to the language level of a minority
D
Teacher’s language often not pitched at the right level for the class. Shows little sensitivity to the language level of many learners
E
Teacher fails to take the language level of the class into account
3. How well does the teacher use English to help learners understand concepts? A
Teacher presents concepts very clearly: introduces, explains, illustrates and often summarises. Checks that all learners understand
B
Teacher presents concepts clearly, but additional explanation or illustration sometimes needed. Normally checks that learners understand, but may overlook some
C
Teacher presents most concepts adequately, but does not explain or illustrate enough and may need to backtrack. Often checks understanding, but not of all learners, and not of all concepts
D
Teacher presents concepts often very unclearly: does not explain or illustrate; only very occasionally checks whether a few learners understand
E
Teacher’s presentation of concepts is extremely confusing: does not explain or illustrate and fails to check whether learners understand
4. How well does the teacher use English to organise the lesson and the learners? A
Teacher signals very clearly the organisation of the lesson, gives very clear instructions, and is very clear in managing tasks and learners
B
Teacher is normally clear in signalling the organisation of the lesson, in giving instructions and in managing tasks and learners; but occasional unclarities in lesson organisation require him/her to rephrase/repeat
C
Teacher signals lesson organisation adequately, but should signal more often and more clearly. Gives clear enough instructions and is comprehensible in managing tasks and learners; but needs often to rephrase/repeat and sometimes omits to do so when necessary
D
Teacher rarely signals the organisation of the lesson, often gives unclear instructions and often fails to be clear in setting tasks and organising learners. He/she may sometimes be aware of this but cannot provide a remedy
E
Teacher fails to signal the organisation of the lesson, gives very unclear instructions and is very unclear in setting tasks and organising learners. He/she is often unaware of this
5. How well does the teacher reinforce and extend her use of English and provide learners with language support? A
Teacher uses all available means (board work, visuals, gesture, etc.) to reinforce her/his use of English; and employs a range of techniques to provide learners with the language support they need to complete classroom tasks
B
Teacher uses several means to reinforce and extend her/his use of English and provides learners with good language support, though some tasks (especially reading and writing) could occasionally be more guided
C
Teacher uses some means of reinforcing and extending her/his use of English; board work is satisfactory. Provides learners with adequate language support, though tasks in general need to be more guided
D
Teacher uses very few means of reinforcing/extending her/his use of English: board work is patchy. Provides learners with some language support to complete classroom tasks
E
Teacher uses hardly any means of reinforcing/extending her/his use of English; board/visuals are not used, or board work very poor or non-existent. Provides learners with little or no language support needed to complete classroom tasks
The final ratings for teachers on both the Language and the Classroom language use scales were reported as a single grade on a five-point scale (A–E). However, given the truism that ‘the whole may not equal the sum of the parts’, members of the evaluation team were encouraged to provide Other Comments, i.e. field notes, as part of their classroom observation. These would also have the potential to assist their own decision-making in arriving at a particular grade and be called on as evidence for a particular decision, if required, in any discussions of individual ratings by the evaluation team. Two examples are provided below from field notes for a maths lesson in Grade 6 in a rural school.
Data 5.8
Observation field notes 1
Classroom Observation/10 March/version 1 OTHER COMMENTS
In another lesson, the following field notes were made to support the grade given:
Data 5.9
Observation field notes 2
Classroom Observation/10 March/version 1 OTHER COMMENTS
A criticism of the observation bandscales might be the high-inference nature of the categories within each descriptor level, but these bandscales, and the descriptors within them, were empirically driven, based on evidence from the classroom and experienced teachers. Thus a criterion such as a teacher’s use of the blackboard, which figures prominently in the last set of descriptors for the classroom language use scale, may be considered an inappropriate or rather quirky criterion in another education setting. In this context, however, this criterion had high resonance. Taking another criterion such as ‘providing learners with adequate language support’, this is one with which the observers were exceedingly familiar since all were involved in in-service training with teachers that focused on supporting them in this particular area of classroom practice. The bandscale descriptors were thus very familiar to all those who would be taking the observation schedules into the classrooms, a familiarity that connects to the validity of these procedures. Thus:
Validity is an integral element which: ‘becomes largely a quality of the knower . . . and forms of knowing’. . . . Validity in a qualitative sense is a personal strategy by which the researcher can manage the analytical movement between fieldwork and theory. (Bailey 1999: 172, original emphasis, citing Marshall, then Wainwright) Given the need to assure the reliability of application of the bandscales, as indicated above, evaluation team members were provided with training workshops and opportunities to trial the use of the bandscales prior to the main evaluation study. In this way, acceptable levels of consensus were achieved in both their interpretation and their application with classroom video data for these training purposes.
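The ‘acceptable levels of consensus’ referred to above can also be expressed quantitatively. The sketch below is not drawn from the evaluation itself: it is a minimal illustration, with invented grades, of how percentage agreement and Cohen’s kappa might be computed for two trained observers rating the same set of training videos on the A–E bandscales.

```python
# Illustrative only: checking inter-rater consistency on A-E bandscale grades.
# The rater data below is invented for the example, not taken from the evaluation.
from collections import Counter

GRADES = ["A", "B", "C", "D", "E"]

def percent_agreement(r1, r2):
    """Proportion of lessons on which the two observers gave the same grade."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance, based on each rater's marginal grade frequencies."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[g] / n) * (c2[g] / n) for g in GRADES)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical grades from two observers rating the same ten training videos
rater1 = ["B", "C", "C", "A", "D", "B", "C", "E", "B", "C"]
rater2 = ["B", "C", "B", "A", "D", "B", "C", "D", "B", "C"]

print(f"Agreement: {percent_agreement(rater1, rater2):.0%}")
print(f"Cohen's kappa: {cohens_kappa(rater1, rater2):.2f}")
```

For ordinal scales such as these, a weighted kappa (penalising disagreements of more than one band more heavily) would arguably be the more informative statistic, but the simple version above is enough to show the principle.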
5.6 Evaluation implementation and implications
Subsequent chapters also focus on evaluation design and implementation. One of the main objectives of this first case study has been to sensitise readers to the need to develop valid procedures that are evaluation context-specific. However, in this final section we present an overview of three central features that impacted on and/or influenced the design and implementation of the survey of teachers’ English language proficiency.
Time allocation
The commissioners of this evaluation research had hoped to complete the evaluation within a period as close to six months as possible. (The case studies in chapters 7 and 8 also consider the issue of time allocation in the program evaluation and decision-making process.) In several respects, six months was an unrealistic time frame within which to develop and implement a quality evaluation survey, especially as it involved an evaluation consultancy team at an overseas university considerably distant from the site of the study itself. The development of valid evaluation procedures involved a range of activities.
Table 5.10 Exemplar evaluation activities
• A needs analysis of English language use in English medium classrooms.
• A documentary analysis of the teacher training curriculum and materials and relevant in-service documentation.
• School visits.
• The development of training videos of a number of ‘typical’ classrooms across the diverse range of schools.
• A list of all schools in all education zones nation-wide, together with the performance in national examinations.
• Coding and preparation of instruments.
• Discussions with key stakeholders both prior and subsequent to the piloting phase.
• Piloting by the project team.
• Classroom observation training.
Sampling
Sampling often presents a number of challenges. In this particular countrywide national evaluation, a number of complexities were identified, across educational zones and schools within them; a sketch of how a stratified selection across zones might be drawn follows the table.
Table 5.11 Factors influencing evaluation sampling
• The total number of zones, namely seven.
• Different sizes of zone (access and travel time implications here).
• Different class sizes.
• Different staff:pupil ratios.
• Different qualifications levels of teachers.
• Different numbers of teachers.
• Different achievement levels.
• Diversity in the allocation of resources.
• Accessibility: distance, terrain, dependence on climatic conditions.
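The factors in Table 5.11 point towards some form of stratified selection of schools. The fragment below is purely illustrative and is not the sampling procedure used in the evaluation: the zone names and school counts are invented, and it simply shows how a proportional random draw of schools across the seven zones might be made.

```python
# Illustrative sketch of proportional stratified sampling across education zones.
# Zone names and school counts are hypothetical, not taken from the evaluation.
import random

schools_per_zone = {
    "Zone 1": 180, "Zone 2": 95, "Zone 3": 240, "Zone 4": 60,
    "Zone 5": 150, "Zone 6": 110, "Zone 7": 75,
}
TARGET_SAMPLE = 100  # total number of schools to visit

total = sum(schools_per_zone.values())
random.seed(42)  # reproducible draw

sample = {}
for zone, n_schools in schools_per_zone.items():
    # Allocate visits in proportion to zone size, with at least one school per zone
    n_sample = max(1, round(TARGET_SAMPLE * n_schools / total))
    school_ids = [f"{zone}-school-{i:03d}" for i in range(1, n_schools + 1)]
    sample[zone] = random.sample(school_ids, n_sample)

for zone, picked in sample.items():
    print(f"{zone}: {len(picked)} schools selected")
```

In practice the allocation would also need to reflect the accessibility and resourcing differences listed above, for example by adjusting allocations for zones where access is most difficult.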
The data collection process, which involved visits to all regions of this large country with a fairly rudimentary transport infrastructure, added to the frustrations of meeting deadlines. The analysis of the results could, thus, only take place after a considerable time lapse. These points are not made in any anecdotal sense, but are intended to reveal the almost omnipresent imperative in evaluation studies for ‘immediate findings’, sometimes (although not in the case study described here) at a cost to the integrity of an evaluation study. This is one of the points of contrast with research projects, where the findings and interpretations emerge within a longer time frame and there is less of an imperative for the immediate use of results.
Induction and training
As indicated above, the external evaluation consultants required induction to the evaluation context and the in-country team needed an induction to the evaluation project, its design and implementation and, in particular, preparation for their involvement in the data collection processes: through their school visits (to administer tests and background teacher questionnaires, observe classes and complete the observation bandscales), and interviews with the senior management teams in schools and in the regional education zones (to administer the perceptions questionnaire). They also had to be knowledgeable about how the data were to be coded, to maximise efficiency in the return of the data to the project management for data entry and analysis (see also section 13.5, p. 236 below, in particular Data 13.4).
Validity in evaluation approach: survey vs. case?
As a final point, and a theme which permeates a number of our evaluation case studies, both the commissioners of the evaluation and the consultants
were acutely aware of the limitations of a survey approach to evaluating teachers’ language proficiency.
Data 5.10
The case for case studies
It was originally proposed to carry out, in addition to the main survey, a small number of case studies in schools, which would generate qualitative data sets on a range of issues pertinent to the evaluation which the main survey instruments would not capture. Piloting of the main instruments reinforced this view that information on a number of important matters would not emerge from the main survey. These issues might include, for instance, the views of learners on the use of English as a medium of instruction, teachers’ ability to write English, the extent to which school managements and school staffs use English outside the classroom for professional and administrative purposes (i.e. the extent of a professional ‘culture’ of English within schools), and the existence and nature of school language policies. (Clegg et al. 1999: 6)
Thus, it was recognised that the English language competence of teachers would be demonstrated in schools in other ways, for example in their lesson planning and preparation, during meetings with other teachers and within the community of the school as a whole. This holistic perspective on English language use in schools necessitated the capture of data through the implementation of a small number of case studies across the represented regions. Examples of such school-based evaluation case studies are those in the Primary Modern Languages Project in Ireland, described in chapter 8 below.
5.7 Summary
This case study has illustrated in some detail the development and validation work on evaluation procedures, in particular the tests and the classroom observation bandscales that were used in an evaluation focused on levels of teachers’ English language proficiency and classroom language use. We have described some of the implications for evaluation design and the development of procedures in implementing a nation-wide evaluation: (1) in a relatively short period of time; (2) with a small evaluation team, some members of which were novices in program evaluation; (3) in a large country with widely dispersed populations and rudimentary infrastructures; and (4) in a way that would yield trustworthy data on the basis of which to formulate a national teacher upgrading strategy.
6 Evaluating a Language through Science Program
6.1 Introduction
It is clear that the future of language teaching holds considerable promise in terms of diversity within and across instructional contexts. One set of current challenges facing curriculum developers is in relation to innovation in language curricula that are being created through the availability of enhanced systems in information and communication technology (ICT) and the significant increase in technology-mediated language teaching, learning and assessment. Initiatives include distance programs with video links, supporting academic literacy in global online contexts (Goodfellow 2003; 2004; Goodfellow et al. 2004), national projects in the university sector such as the Student Online Learning Project (SOLE) (Timmis 2004), and program-based studies involving the use of virtual learning environments such as Blackboard (Paran et al. 2004; Timmis 2004). Part 1 shows clearly how crucial it is to evaluate any such innovation. The second case study we present is an example that has relevance for secondary school age learners: an evaluation of the language component of the Science Across Europe Project (Clegg, Kobayashi and Rea-Dickins 2000). As in chapter 5, we introduce the underlying constructs the evaluators identified in collaboration with the commissioners of the research, to inform the development of the evaluation procedures. We also illustrate some of the potential complexities in evaluation design and sampling in the implementation of a large-scale, multi-site evaluation with multinational participation, as well as how this complexity impacts on the confidence one can have in, and the utility of, the evaluation findings.
6.2 Context
Overview
Science Across the World is the international science education flagship program for its founding partners, British Petroleum and the Association for Science
Education (ASE) in the UK. It was initiated with the aim of bringing a global dimension to science education and described in the program publicity as follows:
Data 6.1
The Science Across the World Program
About the Program It provides a forum for students aged 12 to 17 years, to exchange facts and opinions with young people in other countries . . . in up to 18 languages. Simplicity of communication is the key to the success of this award winning program, and it is this global exchange process, by post, fax, email, and increasingly through our website, that makes Science Across the World different and stimulating . . . Starting as Science Across Europe in 1990, this flexible program has expanded to Asia Pacific, Africa, America, and most recently to Latin America. In each region, we have developed distinctive collaborations with . . . educational partners to bring the program to the grass roots in schools. By joining Science Across the World, your students can communicate about a range of globally important issues with others world-wide. (Publicity flyer: ASE & GlaxoSmithKline; see also www.scienceacross.org)
Program aims and resources
The program has several aims, as shown in Table 6.1.
Table 6.1 Program aims
• bring an international dimension to education enabling students in different countries to exchange knowledge and ideas about their varying perspectives and ways of life;
• raise awareness of the ways science and technology affect society, industry and the environment in different ways;
• provide opportunities for teachers and students to collaborate with their counterparts from other countries; and to develop communication skills, especially in languages other than their own;
• encourage collaboration between science and foreign language teachers to develop both science and language awareness and skills.
In October 2004, there were 2,924 teachers in 99 countries worldwide participating in the scheme (www.scienceacross.org). The program materials were developed to encourage students in a range of schools to work with everyday science-related issues oriented around the following themes:
Data 6.2 Program units
TOPICS AVAILABLE
Age ranges covered: 12–14, 12–15, 12–16, 14–17
Topics: Acid rain; Using energy at home; What did you eat?; Biodiversity around us; Keeping healthy; Renewable energy; Chemistry in our lives; Domestic waste; Drinking water; Road safety; Solar energy; Global Warming
Available in English only: Dwellings; Plants in our lives; Disappearing wetlands; Tropical forests
(publicity flyer: ASE & GlaxoSmithKline)
These science-oriented materials are intended to be used by program participants as the means to exchange ideas and findings with students in other countries using different languages. Some topics (12 units) are available in six languages: English, French, German, Italian, Portuguese and Spanish. Certain topics are also available in other languages, including Catalan, Danish, Dutch, Greek, Japanese, Polish, Thai and Vietnamese. Of specific interest to language educators is the dual aim of this initiative: to promote the use of a foreign language in working on the science materials and the exchange of information with schools in other countries, through the use of ICT. The program was therefore potentially innovative in at least two respects: 1) the promotion of language use across the curriculum – in this case the science curriculum; and 2) the use of ICT within the curriculum experience. Full details of how the program works are available on the program website (www.scienceacross.org). The following provides some of the key points:
Data 6.3
How the SAE program works
• On registering with the program, schools receive a list of participating schools from which they make a selection, say three schools.
• Contact is then made with the chosen schools, using a ‘First Contact Form’, via post, fax or email.
• When students have collated some of the information required as part of the unit activities, they then send this to their partner schools, using the ‘Exchange Form’.
6.3 Aims and scope of the evaluation
The evaluation brief
Above, we have presented the espoused aims of the innovation. In March 1999 a call to tender for the evaluation of the program within Europe was circulated. In the documentation provided, it was clear that the program funders were themselves well aware of the need to gather systematic and empirical data both about prevailing attitudes towards the program and how it was being implemented. Specifically, they stated:
Data 6.4
Extracts from evaluation tender call
. . . the SAE units/materials may be a useful tool to aid language learning [and there is a need] to explore this statement in further detail and then to identify: – the different ways in which the SAE units/materials enhance language teaching and learning, as perceived by teachers and students – the different methods employed by teachers in using SAE units/materials – any differences in language attainment between the sexes. (SAE email communication, 14 January 1999: 1)
Rather than evaluate the program worldwide, the evaluators were asked to focus more narrowly on the Science Across Europe program. When the evaluation took place, there were already over 1,000 secondary schools world-wide participating in the scheme within which there were different options in terms of mode of participation, i.e. the possibility existed for the program to be implemented in different ways within schools. For example, it could be used:
Table 6.2 Science Across Europe: modes of implementation
• To supplement existing materials, within a modern languages department.
• As a vehicle to incorporate ICT within a modern languages department.
• In a collaborative approach whereby both a modern language and a science teacher would team teach the program within a school.
This evaluation was commissioned in 1999, with the scope, aims and specific objectives largely specified for the evaluators by the SAE program coordinator. These are as follows:
Table 6.3 Scope and aims of the evaluation
(i) To evaluate the SAE materials, in particular the ways in which materials: (a) are generally perceived by teachers and students to enhance language learning and teaching; (b) promote appropriate and different pedagogic practice by teachers. (ii) To evaluate learner language development and achievement and to identify: (a) how well learner language has developed from the perspective of both science and language teachers; (b) how well learner language has developed from the perspective of the learners themselves; (c) any gender differences in perceived levels of student attainment. (iii) To identify ‘new directions’ for SAE, in relation to, as examples: (a) classroom pedagogy; (b) in-service training; (c) web-based resources. (Rea-Dickins February 1999: 1)
As Table 6.3 shows, the Terms of Reference for the evaluation were not conceived solely as outcomes in terms of student performance, and there was keen interest in an analysis of how the different thematic units worked, how the program was seen to contribute to students’ language development, as well as identifying specific pointers on the basis of which further program developments could be initiated. In this last respect there were similarities with the evaluations in chapters 5 and 8, also conceived as having a process dimension which would lead into educational strategy and policy developments. In addition, there was a lack of detailed knowledge about the different ways in which the program was actually being implemented (see Table 6.2 above). Thus, the program sponsors were interested in (1) finding out more about the different ways in which the program was used by the modern languages and science teachers, independently or collaboratively; and (2) gathering perceptions about the program before it had been introduced within a school and again after a period of implementation. It can be observed that point (i) has an orientation towards understanding classroom processes and aspects of program implementation whereas (ii) focuses on the identification of any perceived changes in stakeholder perceptions (teachers and students) as a result of using this program. Gauging stakeholder perceptions (see also Table 6.8, p. 107), it will be noticed, has now become almost a routine element in program evaluations (see chapters 5 and 12).
Underlying constructs
We have stressed elsewhere (e.g. chapters 1 and 5) the importance in the design process of developing a detailed specification of the constructs and key issues relevant to the different aspects of the evaluation as the basis, in particular, for decisions about evaluation approach and procedures. Different parameters of the SAE program were identified, some of which focused on the points listed in Table 6.4.
Table 6.4 Sample evaluation focal points
1. The international context of this program, and the need to know which languages the students were learning and using as the medium of engagement with the SAE materials.
2. Student perceptions of the program, including the confidence with which they engage in the different classroom activities, especially those which are SAE program specific.
3. The ways in which foreign language and science teachers engaged with the program and its activities.
4. The use of the different classroom activities by teachers and students as recommended by the program.
For example, the program encourages both greater involvement of foreign language teachers and collaboration between the science and language teachers (Table 6.4, point 3). Thus, of particular interest to the commissioners of the evaluation was the extent to which SAE promoted collaboration between teachers within schools and the exchange of information with schools in other countries, through the use of ICT. In this respect, underlying program constructs were informed by good practice in science education, in the use of information technology more generally and in language education in particular (e.g. Thompson 1993; DfEE 1997; Warschauer 1997; Grey 1999; more recent publications of relevance include Chapelle 2001; 2003), and in areas in applied linguistics, such as self-directed learning (e.g. Ellis and Sinclair 1989; Little 1989; 1991; Nunan 1992; Dam 1995; 2001; Benson 2001), and content-based language teaching (e.g. Mohan 1986; Crandall 1987; Legutke and Thomas 1991; Chamot and O’Malley 1992; 1994; Brown and Brown 1996; 1998; Marsh and Langé 1999; Mohan et al. 2002).
6.4 Evaluation design
Overview
The evaluation was undertaken between 1999 and 2000, with a lengthy (5–6 months) first phase – very much as observed in the development of evaluation procedures reported in chapter 5 – spent on evaluation decision-making and the development of appropriate procedures. These discussions identified a number of evaluation design nodes particularly relevant to this program.
Table 6.5 Evaluation design nodes
• Key variables (e.g. teachers and students new to, vs. experienced users of, SAE)
• Sampling options
• Locating the evaluation participants
• Achieving the planned sample
• The type of evaluation procedures to be used
• The content of the procedures
• The development and trialling of the procedures
• Translation needs as well as arranging for translation of the evaluation procedures
More generally, exploring ways in which the evaluation could be effectively undertaken across such a broad range of schools in several different countries (see Sampling below) was a sizeable task – no mean feat! Equally, the evaluation design, the procedures, and the strategy for implementation had to be agreed by three different groups: the commissioners of the research, the evaluation team comprising applied linguists, and the science education team.
Schedule
The timescale for the evaluation was tight (see also chapters 5, 7 and 8). The schedule for the evaluation was developed as follows:
Table 6.6
Evaluation schedule
1999
April: Identification of evaluation focus and variables
April–June: Preparation of data collection instruments
June–September: Pilot study: visits to UK schools, piloting of procedures, translation of data collection instruments
October: Questionnaires circulated
November–December: Questionnaires returned
2000
January–February: Analysis of teachers’ responses
March: Interim Report 1 completed
April–May: Analysis of students’ responses
June: Interim Report 2 completed
September–October: Case studies planned (not conducted)
December: Final Report completed
The preferred design was to combine the survey of teachers and their students in a selected number of schools in countries representative of the SAE program with two to three case studies. These were envisaged to complement the self-report data by in-depth analysis of the processes of implementation of the program in typical school contexts. The rationale for the case studies
was linked directly to the very early observation by Stenhouse that: ‘Curricula are hypothetical procedures testable only in classrooms’ (1975: 41). In other words, there may be detailed program aims, specified content and suggested teaching and learning approaches as well as prescribed materials. There may be the self-reports of teachers and students on their attitudes towards SAE and how they worked with the program. However, there is no other way of ‘knowing’ and understanding how a program of instruction is actually implemented other than through observation in real class time (see Taba’s observation in section 2.3, p. 21 above). In the event, regrettably, these studies were not undertaken and hence the findings of the survey are limited, to the extent that they are based solely on self-report data (see also Limitations below).
Variables and sampling
Through discussions with the commissioners of the evaluation, the following key variables were identified and expressed in terms of diversity:
Table 6.7 Key variables for sampling decisions
• In the experiential base of teachers using the SAE program
• As represented in different countries
• In implementation between foreign language and science teachers
The initial sampling design was summarised as follows:
Data 6.5
Planned sample: the SAE evaluation
It was planned that sufficient and comparable numbers of samples would be taken from each variable group, e.g. teachers experienced in SAE vs. teachers new to SAE. Six countries were identified and it was planned that data be collected from approximately 40–50 schools from each country, at least 5 of which were new to SAE, and with a comparable ratio between foreign language and science teachers. (Interim report 1: 4)
However, it proved far more difficult to engage teachers who were experienced users of SAE materials and yet, for some reason, teachers who were using the units for the first time were very keen to be involved in the evaluation. This resulted in a far larger number of ‘new to SAE’ participating schools and countries. It may also suggest that teachers view participation in an evaluation as an opportunity for induction to or training for innovative practice. In addition, as the evaluation procedures were sent out to evaluation participants centrally by the program, more teachers and countries than anticipated in the sampling design were included.
The actual sample achieved was summarised by the evaluation team as follows:
Data 6.6
Achieved sample: the SAE evaluation
In September 1999, questionnaires were sent out to 146 teachers in 21 countries (125 teachers new to SAE and 21 experienced with SAE). In total, 65 teachers from 15 countries responded but six responses were excluded from the analysis due to electronic transmission problems. (Interim report 1, March 2000: 5)
There was thus a significant shortfall between the planned and the actual sample of teachers achieved, with only 43 language teachers and 16 science teachers from 14 different countries (one country was deselected as a returned questionnaire was corrupted when transmitted); responses from 595 students from 13 countries were received. The achieved and ‘unplanned for’ sample was a consequence of 1) the breadth of the program operating across Europe; 2) the voluntary nature of participation in the program by schools; and 3) the challenges presented by electronic data-gathering. As a consequence, much planned analysis of attitudes and experiences prior and subsequent to the use of SAE materials and implementation of classroom activities became problematic, not least on account of the small or varying numbers in the different data sets, which may well have influenced the results. Thus, the original intention to secure equal numbers of experienced and new to SAE teachers raises the more general point about the relevance of a quasi-experimental and survey design for a program evaluation – even more so in this multi-site, multinational study. Further, there are a number of other evaluation issues to be considered with reference to this specific design. For example:
Table 6.8 ‘Noticing’ program effects and change
• To what extent is it actually feasible to use an experimental, or quasi-experimental, design in an evaluation study?
• How can you be sure that program materials are implemented in the ways intended?
• How can you be sure that students engage in a range of program activities?
• How can student performance be fully interpreted without a classroom observation component?
• How soon into a program would enhanced levels of proficiency be realistically expected?
• Equally, when would it be realistic and ‘meaningful’ to gauge any changes in perceptions towards the program or facets of language teaching and learning more generally?
• Is it ‘fair’ to evaluate aspects of a program when it is being used by both teachers and students for the first time?
In this evaluation, we failed to achieve a balanced number of participants, e.g. teachers new to SAE vs. experienced users, and systematic data collection prior and subsequent to the teaching of an SAE unit and accompanying materials. Thus, in certain respects, our experience of using a pre- and post-program design as part of this evaluation was fundamentally similar to that of the Bangalore experience. This meant that we were limited in the amount of confidence we could have in the findings and that we were not in a position to make valid comparisons between groups, as was originally anticipated. This is a problem also experienced by Weir and Roberts (1991) in their study of teachers in Nepal, in the evaluation of the Molteno Language and Literacy program in South Africa (Rea-Dickins 1994) and in the evaluation of the Expatriate English Language Teachers Scheme (EELTS) in Hong Kong described in chapter 7.
Evaluation procedures
A variety of data collection instruments were developed to meet the aims of the evaluation. These were:
Table 6.9 Evaluation procedures
1. Teacher questionnaires
(i) for foreign language teachers new to SAE (nfl)
(ii) for foreign language teachers who are experienced users of SAE (efl)
(iii) for science teachers new to SAE (nsc)
(iv) for science teachers who are experienced users of SAE (esc)
2. Student questionnaires
(i) for students new to SAE (ns)
(ii) for students who are experienced users of SAE (es)
In addition, it was envisaged that case studies would be conducted in two to three schools to gain in-depth and first hand insights into SAE practices (see Limitations below). These case studies were regrettably abandoned on grounds of cost and time.
Operationalising constructs
In any evaluation, decisions have to be made about the structure and content of the evaluation procedures. As illustrated in chapter 5, the content itself should be driven by the construct(s) underpinning the evaluation, as embodied within the program and reflected in the scope and focus of the evaluation itself. Two examples of ways in which constructs relevant to SAE were operationalised in the student questionnaires are provided next. First, in order to gain an understanding of how the SAE was implemented in their schools, students were asked:
Data 6.7
Student questionnaire: Question 10
With which teachers did you use Science Across Europe materials:
(a) with the science teacher only? □
(b) with the foreign language teacher only? □
(c) some lessons with the science teacher and some lessons with the foreign language teacher? □
(d) with both science and foreign language teacher? □
A sample question for foreign language teachers new to the SAE program was:
Data 6.8 Question 28: questionnaire for foreign language teachers after teaching an SAE unit for the first time
28. How have you and your colleagues used the SAE materials?
• Science teachers initiate the program and foreign language teachers collaborate with science teachers? □
• Foreign language teachers initiate the program and science teachers collaborate with foreign language teachers? □
• Science and foreign language teachers collaborate on an equal basis? □
• Science and foreign language teachers work independently on an equal basis? □
• Only science teachers teach the program, using the students’ first language? □
• Only science teachers teach the program, using the foreign language? □
• Only foreign language teachers teach the program, using the foreign language? □
Other?/Comments
Examples of teachers’ spontaneous comments included:
• ‘The material is being looked at by the science teachers. We hope to work together on future topics.’
• ‘It depends on the ability of the science teacher to speak English.’
• ‘We worked on an equal basis, but we worked independently.’
• ‘I sometimes need to pre-teach words and concepts.’
These examples were relatively ‘straightforward’ in design and were informed through the pilot visits to UK schools and discussion with SAE staff. The issues were more complex, however, in connection with the kinds of learning activities students were involved in during the SAE program and, in particular, which ones they worked on in the foreign language. In addition, we also wanted to know whether any of these activities were unique to the SAE learning experience or were a regular part of the learning portfolio in other subjects. This complexity is exemplified in questions 26–28 in the student questionnaire. The final version of the question on students’ use of different learning activities in the survey questionnaire is shown in Data 6.9.
Data 6.9
Student questionnaire: Questions 26–28
26. Which activities did you regularly do in Science Across Europe lessons? Put a tick (√) in the spaces provided in Column 2. (You may tick more than one box). Add any further activities in the spaces provided in Column 1.
27. Which Science Across Europe unit activities did you normally do in a foreign language? Put a tick (√) in the spaces provided in Column 3. (You may tick more than one box). Leave blank if it is not applicable.
28. Which activities do you not normally do in your other school subjects? Put a tick (√) in the spaces provided in Column 4. (You may tick more than one box).
Column 1 (Activities): Write letters, faxes or emails to partner schools; Read letters, faxes or emails from partner schools; Look for information in books, magazines, etc.; Look for information on the internet; Discuss topics (as a class or in small groups); Draw graphs or charts; Write short texts; Carry out experiments and analyse data; Conduct a survey; Interview people; Other: please specify ..................; Comments
Column 2: Which activities did you regularly do in Science Across Europe?
Column 3: Which of those activities did you do in the foreign language?
Column 4: Which activities do you not normally do in other subjects?
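Grid responses of this kind have to be flattened into a consistent, machine-readable form before frequencies can be reported. The snippet below is a hypothetical illustration rather than the coding scheme actually used in the evaluation: it records each student’s ticks as one row per activity, with a flag for each of the three tick columns, and writes the result to CSV for later analysis.

```python
# Hypothetical coding of the Q26-28 activity grid: one row per student per activity,
# with a boolean flag for each of the three tick columns. Data are invented.
import csv

ACTIVITIES = [
    "write_to_partner_schools", "read_from_partner_schools",
    "look_for_information_in_print", "look_for_information_on_internet",
    "discuss_topics", "draw_graphs_or_charts", "write_short_texts",
    "experiments_and_data", "conduct_survey", "interview_people",
]

# Invented example: student S001 ticked three activities in Column 2,
# two of them in the foreign language (Column 3), one as novel (Column 4).
responses = {
    ("S001", "discuss_topics"): {"did_in_sae": True, "in_foreign_language": True, "not_in_other_subjects": False},
    ("S001", "write_to_partner_schools"): {"did_in_sae": True, "in_foreign_language": True, "not_in_other_subjects": True},
    ("S001", "draw_graphs_or_charts"): {"did_in_sae": True, "in_foreign_language": False, "not_in_other_subjects": False},
}

with open("sae_student_activities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student_id", "activity", "did_in_sae", "in_foreign_language", "not_in_other_subjects"])
    for (student_id, activity), ticks in responses.items():
        writer.writerow([student_id, activity,
                         int(ticks["did_in_sae"]),
                         int(ticks["in_foreign_language"]),
                         int(ticks["not_in_other_subjects"])])
```

A plain-text encoding along these lines would also have been less vulnerable to the loss of formatting in email transmission described under Data collection below.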
The actual learning activities selected for inclusion in the student questionnaire were arrived at through: (a) consultation with SAE staff and a science consultant, (b) documentary analysis and, importantly, during the piloting phase, (c) a small number of visits to UK schools which were considered effective users of the program.
Data collection
Given the nature of the program, involving as it did electronic communication between schools and across international borders, the most obvious means of circulating the survey questionnaires to the participants was agreed to be through email attachments. In practice, however, questionnaire translation processes proved extremely time-consuming. Thus, there was very little opportunity to pilot not only the questionnaires themselves, but also the delivery of these questionnaires. With hindsight, the problems experienced should have been predicted (see section 11.3, p. 190). In the transmission of the questionnaires, a major problem occurred through loss of formatting in some of the respondents’ questionnaires. The complexity of the layout of a few of the questions (as in Data 6.9) thus had a significant impact on the usability of the data received electronically, a problem that would not have arisen with a postal (hard copy) questionnaire (see section 11.3, p. 190).
6.5 Some findings
This evaluation revealed a wealth of interesting findings about different aspects of SAE with reference to each of the evaluation questions investigated (see Table 6.3, p. 103), too numerous to report in full. Below, four different kinds of finding are reported, most of which could, it is asserted, prove useful for future SAE program development and implementation.
Teacher profiles
The teachers had a range of years of experience, varying from 1–5 to over 35 years, with a fairly even spread across this range. The vast majority of teachers surveyed used English as the foreign language in SAE. It was observed that the science teachers who used a foreign language (60 per cent of science teachers new to teaching the program and 66.7 per cent of those experienced in using the program) reported an intermediate level in their command of the language, with some advanced and a few beginners. Data of this kind contributed to the development of an SAE teacher-user profile which had not been available to the program prior to this evaluation, thus raising, in particular, considerations of the role and level of proficiency in the target language of the SAE instructors.
Approach to using SAE in schools
It will be recalled (see also Table 6.2, p. 102) that the evaluation was expected to inform on the kinds of partnerships and approaches to program implementation amongst both foreign language and science teachers. The findings are shown in Data 6.10.
Data 6.10
Approach to using SAE in schools
The majority of NFL teachers used SAE (62.2 per cent) independently, i.e. without the collaboration of a science teacher. The pattern observed was similar for the NSc teachers: 60 per cent teach SAE without collaboration with a FL teacher, 30 per cent of whom use the students’ first language and 30 per cent a foreign language. On the other hand, teachers experienced in SAE report greater collaboration between foreign language and science teachers. Half of EFL teachers initiate the program and then work with the science teacher. All ESc teachers collaborate with FL teachers in some form or other. (Interim Report 1, March 2000: 6) Key: NFL: language teachers new to SAE; NSc: science teachers new to SAE; EFL: experienced SAE language teachers; Esc: experienced SAE science teachers
Thus, in spite of the interest on the part of the program designers in promoting interaction between science and language teachers, this appears not to be a strong feature of program implementation for teachers new to SAE, as revealed by the self-report data of this survey. On the other hand, greater collaboration appears to be linked to having previous experience of the program.
Classroom practice
(a) Information technology: teacher perspectives
A rather surprising finding related to the use of IT, as summarised in the interim evaluation report (relating to Q10, 11, 41/49, 42/50 in FL; Q12, 13, 43/50, 44/51 in Sc).
Data 6.11
Teachers’ reported use of ICT
The great majority of NFL teachers (78.4 per cent) do not use IT in their regular FL classroom or in their SAE classroom, though some of them (5 per cent) are planning to do so. The situation is similar with NSc and ESc teachers. In contrast, 50 per cent of EFL teachers (n = 3) use it. Those who use IT do web searches on SAE topics, send emails, word process texts or use spreadsheets. Some teachers use email for exchange activities. (Interim Report 1, March 2000: 7)
(b) Self-directed learning: learner perspectives (relating to Q12, 13, 51/43, 52/44 in FL; Q14, 15, 52/45, 53/46 in Sc)
Data 6.12
Student engagement in autonomous learning activities
Over half of the FL students (62.1 per cent of NFL and 50 per cent of EFL) spend time on self-directed learning in non-SAE lessons. In SAE lessons, the proportion of EFL rose to 83.3 per cent but the situation is unknown about NFLs because 73 per cent did not respond to this question. Nearly 70 per cent of science students reported spending little or no time on self-directed activities in non-SAE lessons, but about half spend quite a lot of time in SAE lessons, e.g. conducting experiments or investigations and researching topics. (Interim Report 1, March 2000: 7)
The description of the extent of self-directed learning evident here in Data 6.12 belies the difficulty involved in making an evaluative comment or judgement of worth. Without a more detailed account of the practices which prevail in these schools it is not possible to conclude that a particular feature(s) of the SAE materials was (were) a causal or contributory factor in the increase in self-directed learning. It would therefore be interesting, through case studies, to investigate how teachers encourage students to engage in more self-directed activity and which activities are of particular value for this.
(c) Learning activities: learner and teacher perspectives
There was a keen interest in knowing more about the kinds of activities that the language and science teachers, respectively, used in their teaching. The extract below from the Interim Report summarises the findings for learners new to SAE (NSc) or with prior experience of SAE (ESc) in their science lessons (see Q16, 35/28, 36/29, 37/30 in Sc).
Data 6.13
Use of activities in science lessons
The most frequent activities in SAE lessons for NSc students are:
• discussions in plenary or small groups (90%)
• looking for information in books and magazines (80%)
• interpreting data in graph/chart form (80%)
On the other hand, the most frequent activities for ESc students are:
• writing letters, faxes or emails to partner schools (100%)
• reading letters, faxes or emails from partner schools (100%)
• discussions in plenary or small groups (100%)
• constructing graphs or charts (100%)
• interpreting data in graph/chart form (80%)
• conducting a survey (80%)
NSc and ESc teachers both report that discussions and interpreting data (graphs and charts) are common activities. ESc teachers emphasise exchange activities as important in SAE lessons, in contrast to NSc teachers. There is also a difference between NSc and ESc in the use of the FL. Fifty per cent of NSc teachers report that their students use the FL when looking for information in books, magazines, etc., discussing topics and interpreting data. On the other hand, ESc students use the FL more in exchange activities, e.g. reading letters from, and writing letters to partner schools (80%). (Interim report 1, March 2000: 8)
The above findings highlight differences in the use of SAE activities and a foreign language by students and teachers who are using the materials for the first time, as compared with those who have prior experience of the program.
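Percentages of the kind quoted in Data 6.13 can be derived from coded responses such as those sketched after Data 6.9. The fragment below is again illustrative only, with invented data: it groups students by their experience of SAE and reports, for each activity, the proportion of the group who ticked it.

```python
# Illustrative aggregation of coded activity ticks into per-group percentages,
# mirroring the style of figures reported in Data 6.13. All data are invented.
from collections import defaultdict

# (student_id, group, activity) records; group is "new" or "experienced"
ticks = [
    ("S001", "new", "discuss_topics"),
    ("S001", "new", "look_for_information_in_print"),
    ("S002", "new", "discuss_topics"),
    ("S003", "experienced", "write_to_partner_schools"),
    ("S003", "experienced", "discuss_topics"),
    ("S004", "experienced", "write_to_partner_schools"),
]

students_per_group = defaultdict(set)
activity_counts = defaultdict(lambda: defaultdict(int))
for student_id, group, activity in ticks:
    students_per_group[group].add(student_id)
    activity_counts[group][activity] += 1

# Report, for each group, the share of students ticking each activity
for group, counts in activity_counts.items():
    n = len(students_per_group[group])
    for activity, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{group}: {activity}: {count / n:.0%} of {n} students")
```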
(d) Summary of findings
The key findings from this evaluation case study are shown in Table 6.10.
Table 6.10 Summary of evaluation findings
• The SAE materials are perceived to be generally easy for their students (see sections 4.8 and 4.9).
• The SAE program promotes a wider range of classroom activities, e.g. self-directed learning and exchange activities, as evidenced by experienced teachers’ responses (see sections 4.5, 4.6, 4.7 and 4.10).
• Activities effective in SAE include: pair and group work, whole class discussion, students presenting topics, searching for information, working with charts, graphs, etc. (see section 4.12).
• SAE materials are perceived as useful for promoting motivation and foreign language and science learning. In particular, experienced teachers’ perceptions are encouraging (see sections 4.8 and 4.10).
• SAE materials benefit boys and girls equally in their motivation for learning and improvement of foreign language proficiency and science knowledge (see section 4.8).
• SAE experienced teachers have more collaboration between foreign language teachers and science teachers than their counterparts who are new to SAE (see section 4.4).
• IT facilities are unavailable in many schools, and this limits the range of activities carried out both in non-SAE and SAE classrooms (see section 4.5).
• The teachers’ responses suggest a number of areas for improving SAE materials and for INSET (see sections 4.13 and 4.14).
(Interim report 1, March 2000: 12) N.B. The cross-referencing in brackets is to specific sections of the report from which these data are extracted.
Thus, overall, the findings suggested that the project had a positive impact on aspects of learner motivation, with some evidence for enhanced learner achievement. It also raised some implications for classroom activities and the use of ICT in a cross-curricular context in the development of foreign language learning (e.g. IT not available in all schools). However, the fact that the data were exclusively in the form of student and teacher self-report was limiting and the evaluators were, thus, constrained in being able to explain the data elicited more fully.
6.6 Implications for evaluation
Limitations
1. Several limitations can be identified in relation to approach, sampling, data collection, generalisability and sustainability of the findings. The associated difficulties are not, we suggest, unique to this evaluation. Experience suggests that:
• The challenges of participant selection and comprehensive data gathering, even within one institution, let alone across countries, should not be underestimated.
• Situational/context-specific constraints are important to identify during the beginning stages of an evaluation as they may impact considerably on (a) evaluation approach, design and data gathering procedures, and (b) opportunities to engage with program participants either as respondents or at the data analysis and interpretation stages (see Rea-Dickins and Germaine, 1992; Jacobs, 2000).
Nonetheless, as illustrated above, this evaluation did yield findings of use to both further development of the program and further evaluative enquiry.
2. As indicated, the plan to evaluate classroom implementation of the program did not materialise. Again, we suggest, this is not a unique occurrence, but the need for a multifaceted approach is not always appreciated by those involved and/or the budget allocated to an evaluation is relatively small in relation to overall program costs. In-depth case studies come at a price in commissioned evaluation research projects!
Even if planned respondent samples had been achieved, there was a clear need for in-depth studies of program implementation taking an ethnographic approach (see chapter 7, an experimental evaluation design augmented by an ethnographic study). The questionnaire surveys raised many useful issues, from which emerged more questions than answers. For example, the mode of SAE implementation was identified through questions put to both teachers and students (see Data 6.8 and 6.9), but the specifics about modes of collaboration between science and FL teachers, and about experienced users of SAE and teachers new to the program, were simply not available in the depth that would have provided a really sound foundation for policy recommendations. Similarly, it would have been informative to explain why teachers enrolled in the SAE program if they had no access to the internet in their classes (see section 6.5). There is thus an ‘honesty’ required on the part of the evaluators and/or sponsors to avoid generalising from findings that have the status of only being ‘indicative’, rather than having a firm foundation. Cronbach and colleagues (1980, cited in Nevo 1986: 16) describe an evaluator as:

‘an educator [whose] success is to be judged by what others learn’ [p. 11] rather than a ‘referee [for] a basketball game’ [p. 18] who is hired to decide who is ‘right’ or ‘wrong’.

Thus, as early as 1980, Cronbach drew attention to the potential for a learning dimension within evaluation practice. Clegg et al. make an observation that links closely with evaluation as a developmental and learning experience. They make the important point that the evaluation attempted to capture the experience of the SAE curriculum regardless of its quality, and that:

It would be useful . . . to get a clear idea of what happens when SAE is used to its very best effect. Teachers will achieve this in different ways depending on a number of variables, such as their enthusiasm, teaching style, collaborative strategy, science/FL knowledge, science/FL methodology etc. Students will also achieve it according to a range of variables such as their age, gender, motivation for science/FL and so on. It would be a very useful exercise to gather such data, from a limited number of schools representing different contexts and approaches, and using a variety of methods, such as audio and/or video recordings, interviews, students’ work etc. (2000: 83)

If, then, one of the evaluation purposes is to develop curricula and improve professional practice – which some sponsors may desire – this should
influence the evaluation design. We would anticipate that an ethnographic approach through a few case studies in selected schools could have illuminated facets of ‘best practice’.
3. This evaluation survey was thus based on teacher and student self-reports, part of which involved students’ self-assessment of their language skills and perceived improvements over the period of instruction, as opposed to a measure of their ability obtained through a formal external procedure. An obvious question is: what would be a suitable measure to use in such a context? Some five years later we are in a position to suggest that the Common European Framework (e.g. Council of Europe 2001) might have provided one suitable means against which to map learner performance. It is to be noted, however, that the students’ language achievement self-assessment questionnaires (circulated before and after an SAE teaching unit) were informed by this framework. In addition, the evaluators suggested that ‘the measurements should be taken over longer periods of time to ensure the sustainability and generalisability of the findings’ (p. 82).
4. In the same way that ICT is changing the teaching and learning experiences of many language learners, electronic communication is, potentially, a powerful medium for data collection (see section 11.3, p. 190 for a further detailed example).
5. Other less obvious limitations surface in terms of what the evaluation does not show. For example:
• Have there been any unintended outcomes from the SAE implementation, positive or negative, for individual teachers and their students, or for a department or even a school?
• To what extent has the SAE, or indeed the evaluation of the program, served to raise teacher awareness of relevant language learning and teaching issues?
• What, in particular, do experienced users – teachers and students – have to say about how they use SAE materials and what they have found most useful for language learning or student motivation? In the words of Clegg et al. (2000: 83):

This is particularly the case with regard to the sustainability of learning gains from SAE. What effects, in the long run, do teachers and students feel SAE has, and what particular techniques do they use in order to secure these effects?
6.7 Summary

The different purposes participants determine and/or envisage for evaluation are well documented (e.g. Rea-Dickins and Germaine, 1992; Weir and Roberts
1994; see also chapter 10 below). In the evaluation study documented here, there were two underlying purposes. First, the sponsors wanted an evaluation of the program (i.e. British Petroleum wanted to know whether there was value for money). Second, program management (i.e. SAE program staff) needed to obtain valid data in order to inform management decisions, in particular in relation to further program development. To some extent, this evaluation achieved these goals. Even though the findings were drawn from a non-random sample and, from this perspective, do not provide a strong empirical basis for drawing any definite conclusions, they did provide key pointers for the SAE team. In order to address the limitations of the survey – some of which were not controllable by either the evaluation or program team and administrators – it was strongly recommended that case studies be conducted with a small number of ‘typical’ SAE user-schools. One approach which might be useful here is realist evaluation (see chapter 3 above): a Context (secondary schools) – Mechanism (SAE materials) – Outcomes (learning; interest) approach might have identified the features of the program which had engaging and constraining effects in a range of schools and for different learner types. Finally, as with other features of the curriculum, any evaluation is socially constructed. In this connection, Morrison comments (1998: 59):

educational practice, far from being the rational, linear, planned, controllable and predictable activity which decision makers might wish for [evaluators, commissioners of evaluation studies], is, rather, socially situated and affected by people.

People matter! Thus, dialogue with and among program participants in an evaluation will contribute not only to the kind of data captured, but will also influence the extent to which findings from an evaluation may be acted upon and actually used.
7 Evaluating the Contribution of the Native Speaker Teacher
7.1 Introduction

In this chapter we look at a large-scale evaluation of an educational pilot program in Hong Kong. The program involved the deployment of native-speaking English teachers in state secondary schools as an initiative to raise achievement in English. The rationale for the program emphasised the importance of the use of English in the classroom, a feature of the curriculum guaranteed by English teachers who did not speak the language of the school community, namely Cantonese. The increased use of English for communication in the classroom, it was argued, would lead to increased learning. The evaluation examined in detail in this chapter sought to measure this learning achievement. It also explored related attitude and motivation factors, and an ethnographic component of the evaluation provided a perspective on the social and cultural factors which affected the integration and performance of expatriate teachers in schools. The evaluation thus used different data sets to inform on policy development, language learning and program management issues. Our presentation of the evaluation in this chapter is informed by the final program report (Kirton et al. 1989), a range of program documentation, including a television documentary on the EELTS broadcast in the first year of the program, and the experience of one of the authors (Kiely), who was a teacher in the program. The discussion, therefore, illustrates both insider (although program participant rather than program evaluator) and outsider (although reader rather than user) perspectives. We also include a brief Afterword section, where we consider what has happened in the program context in the intervening 15 years, and the contribution of this evaluation to what is a growing issue in the profession: the curricular contributions of ‘native speaker’ and ‘local’ teachers.
7.2 Context

The context of the program is the secondary school English language curriculum in Hong Kong. As the name of the program suggests – Expatriate
English Language Teacher Scheme (EELTS) – the curriculum innovation is the deployment of native-speaking English language teachers to bring about improved learning in the secondary school sector. In 1987 approximately 80 (numbers varied slightly through the period of the pilot study) expatriate teachers of English from UK, Ireland, Australia and Canada were placed in 41 HK secondary schools. They were integrated into the English Panel (department) of the schools, teaching mainly Secondary 1–3 (age 12–15), and occasionally Secondary 4 and oral skills classes to all years. The overall aim of the program was to improve learning of English in schools. This was interpreted differently across the project: in some schools the work of the EELT was viewed as an opportunity for curricular development and change, in others it was seen merely as a welcome additional staffing resource. Four additional features of the program are particularly relevant to its evaluation.

Four features of the program

(a) The program was initiated in the context of perceived declining standards of achievement in English, and the consequent need for the Education Department to take action to improve the situation. A range of curriculum and teacher development initiatives had not lessened the reports in the media and from employers that the prosperity of Hong Kong was threatened. This phenomenon is not, of course, unique to Hong Kong: critical media accounts of the worth of educational processes and achievements are a feature in many contexts, and very often are part of the motivation to innovate. (Similar perceptions of falling standards prevailed in the context and program described in chapter 5 above.)
(b) The program was informed by principles of Communicative Language Teaching, such as those set out by Widdowson (1978) and Brumfit (1984), principles which characterised many English language teaching initiatives managed by the British Council in this period (Phillipson 1992). While this communicative orientation shaped, in a general way, both the nature of the curriculum innovation and the nature of the intended improvement to falling standards, there was no parallel change in the approach to assessment, which had only a limited role for communicative language use.
(c) The program was introduced with limited consultation with Hong Kong teachers’ unions and professional groups. One result of this was a negative perspective on the program disseminated in the media. Another was the view that the program constituted a staffing resource rather than a context of curriculum development and change.
(d) The aims of the program centred on the ‘additional dimension’ native speaker teachers would bring to schools, departments and classrooms, ‘as a linguistic and cultural resource’ (EELT Handbook 1987: 1). There was no specific methodological orientation for the deployment of this resource. There were no new syllabuses, materials or tests. The focus,
therefore, was on the teachers’ lack of the pupils’ L1, and the expectation that this would afford enhanced opportunities for language use and language learning in the classroom.
We see in (d) above an inherent flexibility in the program – it is for teachers, departments and schools to elaborate the details of the innovation. The opportunities provided by this flexibility, however, were constrained by the very different factors reflected in (a), (b) and (c). The improvement in standards of learning achievement in English (as viewed in the wider socio-political context) was to be determined by existing tests. This limited the scope for curricular innovations which emphasised communicative language use rather than linguistic knowledge. In addition, the antipathy towards the scheme by Hong Kong teachers (in schools and more widely through the Professional Teachers Union – PTU) limited the development of ways in which the ‘linguistic and cultural resource’ might be used, especially as a collaborative, school-wide curriculum renewal initiative. These complexities, in many ways typical of curriculum innovation programs (Fullan 1991), present specific challenges for the program evaluation.

Six features of the evaluation

(a) The evaluation examined an aspect of teaching, largely through the measurement of learning outcomes. The central element of the evaluation construct was the teacher identity (native speaker/non-native speaker) input variable, with only limited attention to the actual classroom processes deriving from this, or co-existing with it. In this respect the evaluation is in an established tradition of ‘methods’ evaluations, such as the Bangalore evaluation (Beretta and Davies 1985; see also section 2.4, p. 23 above; Harris 1990; Mitchell 1992).
(b) The evaluation design had an innovative mixed-methods feature: in addition to the classic experimental design, ethnographic case studies in three schools sought to identify the factors which facilitated or impeded deployment of the ‘linguistic or cultural resource’, that is, the native-speaker teacher.
(c) The focus of the evaluation – the value for learning of having a native-speaker teacher – represents an issue which has strong resonance in the professional and folk history of EFL and language education. The issue has not been the focus of a systematic and large-scale evaluation, though there have been a book-length study (Medgyes 1994) and a range of accounts of the similar JET scheme in Japan (Kiernan 2004).
(d) The evaluation adopted a broad stakeholder approach – in addition to the ethnographic case studies, a wide range of stakeholders (schools, teachers, students) were able to contribute to the evaluation as informants.
(e) The evaluation was carried out by the program management team: they were responsible for evaluation design and data collection in both
control and experimental schools, and for day-to-day management of the program in experimental schools.
(f) The evaluation was carried out in a context of ongoing media attention, much of it hostile. Such was the intensity of this attention that the final evaluation report included an analysis of this coverage.
These features illustrate three contending forces in the management of the evaluation. First, the two principal components of the evaluation – the experimental study which measured outcomes, and the ethnographic case studies which described process – impacted differently on the management of the program. The latter contributed to the tasks of the management team in supporting teachers and schools, while the experimental study was administratively demanding and did not contribute to the day-to-day management of the program. Second, while the evaluation sought to capture the views of a range of stakeholders, the program itself was initiated with limited consultation with HK English teachers. The issues here relate to the wider challenges of stakeholder identification and involvement in evaluation explored in chapter 12. Third, the evaluation was led by the program management team, supported by external consultants. This feature led, on the one hand, to questions regarding advocacy and bias in the evaluation (expressed by critics of the program in a TV documentary made and broadcast eight months after the start of the program), and on the other, to a complex defence of this feature by the program leader and the Education Department. Both articulated the justifications for, and positive indicators so far of, the program, and the reliance on the evaluation for an ultimate judgement of worth and value for money. The two issues for evaluation here – insider and outsider stakeholding, and perceptions of these stakes in the media – are examined in section 7.7 below.
7.3 Aims and scope of the evaluation

The aims of the evaluation derive from the four objectives of the program. The evaluation sought to provide a measure of success (or otherwise) in relation to each of these objectives, using the different quantitative data sets. The objectives are set out in Table 7.1, and a detailed elaboration of the design in Table 7.2. These aims suggest a precise, academically-oriented study, which would contribute over time and in complex ways to the detail of education policy development.

Table 7.1 Objectives of the program
1. To improve the English language achievement of pupils in EELTs classes
2. To increase pupils’ motivation and interest
3. To improve pupils’ English language learning
4. To contribute to fulfilling the general aims of the school
However, the public interest in the issues surrounding languages in education in Hong Kong and the media attention, particularly to the question of value for money, redefined the scope of the evaluation. In addition to measures of impact in the four areas, the evaluation came to address process factors: the ways in which the linguistic and cultural resource that was the EELT in the school was facilitated or impeded. To inform on these process aspects of the program, ethnographic case studies were added at the end of Year 1 of the two-year pilot study. The evaluation thus consisted of two separate studies: a conventional experimental study based on learning outcomes (see chapter 2 above), and a qualitative study of cultural, affective and social dimensions of school life and learning English. The experimental study was carried out in 56 schools – 28 (in Year 1) experimental and 28 control schools – with the expatriate teacher as the independent variable. The qualitative study had two parts: (1) a series of reporting protocols over the duration of the program from the expatriate teachers, and a range of other program stakeholders, as planned from the outset of the program; and (2) ethnographic case studies of three schools, initiated towards the end of the first year of the pilot study. Data 7.1 describes the approach to the ethnographic study.
Data 7.1
The ethnographic study
In these studies the Research Officer attempted to immerse herself, as far as was possible in the time available, in the life of the school. Her role was that of a sympathetic and neutral observer. Wherever possible, as with notices, assemblies, meetings, and lessons observed by the Research Officer, she was able to give an objective account. A considerable part of the report however consists of attitudes, opinions and perceptions expressed to the Research Officer by local staff members, the EELTs and students. In these situations, the Research Officer is reporting what she has been told. Inevitably, there are contradictions as well as areas of agreement. The aim here is not to uncover the ‘truth’, but to reflect the feelings of those concerned. This needs to be born in mind in reading the reports since, to avoid tedium, the writers have not continuously inserted expressions such as ‘in his opinion’, ‘felt that’, ‘believed’, etc. (EELTS Final report, p. 119)
This excerpt from the report is revealing in terms of the purpose of the case studies and the epistemological positioning of this part of the evaluation. The purpose was to understand the dynamics of these different program sites in order to illuminate process aspects of the variable which was the focus of the experimental study – the EELT. In order to integrate the case studies into the evaluation as a whole, two articulations are evident. First, the case for the objectivity of the ethnographic study is set out, though it remains an
unresolved aspiration: the objectivity does not generate truths, but rather reflects feelings. Second, the case studies are included in the overall design as a qualitative cross-check (see Table 7.2). These design issues derive in part from the scope of the evaluation. The experimental study, to inform on the impact of one particular variable, is an external perspective. The ethnographic case studies address internal program issues: day-to-day management concerns, in particular the problems of individual teachers and schools, and the media attention. Both are articulated as dimensions of the evaluation. An alternative view might be to preserve the epistemological integrity of the experimental study by retaining its focus as the definitive evaluation construct and representing the case studies as a service to program management, distinct from the evaluation study. It is an indication of the ever-increasing inclusiveness of the evaluation task in recent decades (see chapter 3 above) that all understanding and knowledge-building in the program came within the scope of the evaluation. The issues involved in evaluations which use such a mix of methods and mix of paradigms are discussed later in this chapter.
7.4 Evaluation design

Table 7.2 presents a summary of the evaluation design. Operational hypotheses are generated from the objectives of the program in four areas:
1. improved English language achievement;
2. increased pupils’ motivation;
3. improved language learning opportunities in the classroom;
4. contribution to fulfilling the aims of the school.
The hypotheses facilitate in each case a judgment about the significance of the contribution of the expatriate teachers. The data set for measuring improved English language achievement comprises test results of three groups of pupils:
Group 1: The pupils in the expatriate teachers’ classes.
Group 2: The pupils in matched classes in the same schools, taught by HK English teachers.
Group 3: The pupils in matched classes in matched schools.
The tests included the standardised test in use in all secondary schools, the Hong Kong Attainment Test (HKAT). In addition, two specially designed tests to measure listening and oral skills were used (the final evaluation report does not provide details of validation and piloting of these tests). While the HKAT might slightly favour Groups 2 and 3 (as their teachers might orient their teaching towards preparation for such tests), the listening and oral tests might favour the pupils in the expatriate teachers’ classes, in so far as the reliance on English for teacher–pupil communication could enhance these skills.
Table 7.2 Summary of the evaluation design

Aim 1: To improve the English language achievement of pupils in EELTs classes
Operational hypothesis: EELTs do not significantly improve pupils’ English language achievement
Data sets: (a) the Hong Kong Attainment Test (HKAT), a standardised attainment test based on usage, writing and a small listening component, administered to pupils in Years 1–3; (b) a specially designed listening test; (c) a specially designed oral test; (d) case studies ‘as a qualitative cross-check’

Aim 2: To increase pupils’ motivation and interest
Operational hypothesis: EELTs do not significantly increase pupils’ motivation and interest
Data sets: (a) an attitude questionnaire to pupils, exploring their motivation towards learning English, their proficiency in English, the medium of instruction in the classroom, and the vocational value of English; (b) case studies ‘as a qualitative cross-check’

Aim 3: To improve pupils’ English language learning
Operational hypothesis: EELTs do not significantly increase pupils’ exposure to English language learning
Data sets: (a) reporting protocols (open-ended questionnaires) for expatriate English language teachers, school principals/English Department chairs and project managers; (b) case studies ‘as a qualitative cross-check’

Aim 4: To contribute to fulfilling the general aims of the school
Operational hypothesis: EELTs do not significantly make a positive contribution to the general aims of the school
Data sets: (a) reporting protocols (open-ended questionnaires) for expatriate English language teachers, school principals/English Department chairs and project managers; (b) case studies ‘as a qualitative cross-check’
This analysis was viewed as a form of constructed neutrality of the series of tests overall, similar to the analysis of the tests used in the Bangalore evaluation (see section 2.4, p. 23 above; Beretta and Davies 1985; Beretta 1990). In Part 3 we explore neutrality as a fairness and ethicality issue in evaluation. Objectives 2, 3 and 4 were determined through two sets of self-reports. First, to measure the impact on pupils’ motivation, pupil attitude questionnaires measured attitudes to self (Proficiency), attitudes to teachers (Personality), attitudes to English (Medium of instruction), and perceptions of the vocational value of English, for pupils in Groups 1, 2 and 3. More open questionnaires, termed reporting protocols (RPs), were used to inform on particular contributions of the expatriate teachers, that is, Objectives 3 and 4. These related to the experimental classes only, and were returned by head teachers, heads of department and members of the program management team. The RPs included closed or scaled items about a range of aspects of school life in three topic areas: professional issues, pupils, and relations in school. These are set out in Table 7.3.
Table 7.3 Program areas examined in the RPs (questionnaires)

Professional
– Materials
– Major tests
– Homework/class work
– Scheme of work
– Resources – quality and amount produced
– Contribution to development of teaching
– Contribution to extra-curricular work

Pupils
– Level of classes
– Motivation
– Discipline
– Value of English as a skill
– Attitude of pupils taught by EELTs
– Pupils’ attitude (general)
– Language skills improvement
– Advantages/disadvantages of being taught by EELTs
– Language skills: EELTs/local teachers

Relations in school
– Professional relationships
– Social relationships
– LT attitudes
– General integration
– Contributions to aims of school
– Internal administration systems
– Contributions to teaching within the English Department
– Professional impact
7.5 Data collection

In Year 1 of the Pilot Project, eight classes – four EELT-taught (Group 1) and four HK-teacher-taught (Group 2) – in the 28 experimental schools, and four classes matched to the EELT-taught classes (Group 3) in the 28 control schools, took the HKAT, and a sub-set of ten of each took the special listening tests. A third of the pupils in each of these ten schools took the special oral test. In Year 2, the tests were repeated in 24 experimental and 24 control schools, and in addition, in the three schools new to the scheme which were the subject of the case studies. Two new groups of pupils were also identified in the data collection:
Group 4 pupils: Those in EELTS classes in Year 1 but not in Year 2.
Group 5 pupils: Those not in EELTS classes in Year 1, but who were in Year 2.
Each year the pupils filled in an attitude questionnaire designed to determine their views on classroom factors such as motivation, teacher personality, English as medium of instruction, discipline and control. A series of reporting protocols returned by the school principals/panel chairs, the expatriate teachers and the program management team (responsible also for the evaluation) addressed the issues set out in Table 7.3. Informants were asked to rate aspects of the scheme and the teachers on a five-point scale, and to provide additional comments. During the first year, the management team, busy with the complex features of the experimental study, became aware that there were unforeseen difficulties with the integration of the expatriate teacher in the schools. Public dimensions of these difficulties, as discussed above, included the opposition of the Professional Teachers’ Union (PTU), and the attention of the media, especially in relation to the public funding of the scheme. These pressures, in many ways typical of the process of managing innovation and change in educational contexts (Fullan 1991), raised awareness about the limited perspective which the data gathered for the experimental study would provide. The program management and evaluation team decided to explore systematically the cultural and social aspects of this particular innovation within schools. This led to the ethnographic study (described in Data 7.1 above) of the Scheme in three schools, selected according to the achievements of their pupils on entry. One school received Band 1 and 2 pupils (that is, high achievers), one Band 3 pupils, and the third, Band 4 and 5 pupils (low achievers). A bilingual (Cantonese and English) Research Officer immersed herself in the life of the school and developed a data set of field notes and interviews on the range of factors which affected the performance of the expatriate teachers. As stated above, the data from this part of the evaluation provided a useful perspective on the bedding down of the innovative aspects of the program, and thus provided a valuable service to the management of the program. As an
evaluation data set it remained unclear how it would contribute to the judgements of impact, in particular how it would constitute a cross-check on the quantified measures.
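The experimental component, then, rests on comparing attainment scores for EELT-taught classes with those of matched control classes against null hypotheses of ‘no significant difference’. The final report does not document its statistical procedures at this level of detail, so the sketch below is purely illustrative: it shows, in Python, the general logic of such a two-group comparison (here a Welch t-test), with invented scores standing in for the HKAT results.

    # Illustrative only: a two-sample (Welch) t-test of the general kind used to
    # compare an experimental group with a matched control group. The scores are
    # invented for illustration; they are not data from the EELTS evaluation, and
    # the report's actual statistical procedures are not reproduced here.
    import math
    import statistics

    def welch_t(sample_a, sample_b):
        """Return the Welch t statistic and approximate degrees of freedom."""
        mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
        var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
        n_a, n_b = len(sample_a), len(sample_b)
        se = math.sqrt(var_a / n_a + var_b / n_b)  # standard error of the difference in means
        t = (mean_a - mean_b) / se
        # Welch-Satterthwaite approximation for the degrees of freedom
        df = (var_a / n_a + var_b / n_b) ** 2 / (
            (var_a / n_a) ** 2 / (n_a - 1) + (var_b / n_b) ** 2 / (n_b - 1)
        )
        return t, df

    # Hypothetical attainment scores: Group 1 (EELT-taught) and Group 3 (matched controls)
    group_1 = [62, 58, 71, 66, 60, 69, 64, 73, 59, 67]
    group_3 = [57, 61, 55, 63, 60, 58, 54, 62, 59, 56]

    t_stat, dof = welch_t(group_1, group_3)
    print(f"t = {t_stat:.2f}, df = {dof:.1f}")

The resulting t statistic would then be referred to a t distribution to decide whether the null hypothesis of no difference can be rejected; in the EELTS evaluation this judgement was of course made on the full test batteries and much larger samples, and class- and school-level clustering would also need to be taken into account.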
7.6 Some findings

The main conclusion of the evaluation derives from the experimental study.
Data 7.2
The main conclusion
Our main conclusion in this report is that the EELTPS has improved English proficiency and helped change attitudes to English. It has therefore been a success and in the strict terms of experimental rigour we can reject our Null Hypotheses 1, 2 and 4. As to Hypothesis 3, no significant effect on the use of English in the school environment has been measured. (EELTS Final report, p. 55)
The quantitative results show significant improvement in English proficiency in the experimental classes, notably in speaking and listening performance, especially after two years. The attitudinal data point to an increasing appreciation of the expatriate teachers. On both these issues the time frame of the pilot program is viewed as too brief to capture the full effect of such a scheme: what the final report points to may be the beginning of a possible trend. This is understandable in terms of innovation and change theory (Fullan 1991; Markee 1993), but a problem for policy decision-making. The reporting protocols illustrate a generally positive view of the expatriate teachers’ contribution by the school principals and panel chairs, and a more negative view of their contribution by the expatriate teachers themselves. This strong conclusion is balanced by a more considered view of what the statistics might represent. A preface by the external consultant evaluator (Professor Alan Davies) in the final report sets out both the focus and the challenge of the evaluation:
Data 7.3
The challenge of interpreting findings
. . . the effect we are checking on is hugely fugitive. We are measuring the effect of a group of expatriate native speaking teachers after only two years in certain Hong Kong schools. These teachers were not usually supernumerary. That means that the experimental studies received no more English classes than the controls. There was probably greater exposure to spoken English though this is not certain since we have no evidence of comparison with local teachers’ classes. (EELTS Final report, p. 20)
The identification of ‘hugely fugitive’ effects is a challenge for all evaluations, a problem for experimental designs (see also chapter 6, where similar issues are explored in the context of the Science Across Europe evaluation) and also for evaluations which incorporate more description of program processes (see chapter 8 below). The ethnographic case studies provide some detail of life inside the scheme, and significantly constitute 68 pages of the 189-page final report. They provide valuable insights into processes within classrooms, particularly on how the exclusive use of English operated, and inform on the nature of the change for the schools and communities of English teachers. There is, however, a further challenge in interpreting the case study data: the ways in which having an EELT in the classroom (or in the school) contributed to enhanced opportunities for learning English. This issue is discussed further in the next section.
7.7 Implications for evaluation

The experimental study and the ethnographic case studies in this evaluation might be seen as reflecting attempts to meet conventional validity and reliability requirements. The former is strong on reliability, as it involves rigorously structured data collection and statistical analysis of these data. However, these data might be considered weak on validity: it is not clear how far the measures capture the key variable – the native-speaker teacher as a linguistic and cultural resource – or how that variable can be linked to the factors which promote language learning. It is unclear whether the expatriate teachers had a different method, or a specific motivational effect, and whether these would be evidenced in language tests within the two-year period. The case studies do probe these factors, but as is evident from the excerpt from the Introduction to the studies in the final report, there is a sense of inadequacy in relation to objectivity and to generalising the findings to the project as a whole, and to the wider context of policy development and decision-making. The case studies work with a view of truth which derives from quantitative analyses and seem hesitant about the significance of their findings for the evaluation as a whole. For example, the report on the three case studies concludes that although the teachers were not using Cantonese, the pupils were.
Data 7.4
Language use in the classroom
Contrary to expectations, a great deal of Cantonese was used by students in EELTS classes, not only in pair and group work, where Cantonese was used almost exclusively, but also as a background to the general conduct of the lesson, sometimes explanatory, sometimes disruptive in content. (EELTS Final report, p. 48)
It is not the purpose of our discussion here to consider the substantive issues involved in the classroom processes generated by native-speaker teachers of
English (that is, teachers who are not users of the pupils’ L1). However, three aspects of this comment relate to the evaluation construct and design, and to more general principles.
(a) The expectation of decreased use of Cantonese in English classrooms, and consequently the greater use of English, was part of the general rationale for the scheme (expatriate teachers who were speakers of Chinese or Cantonese were not employed on the scheme). However, this aspect of the program construct was not operationalised as part of the evaluation. There was no specific hypothesis relating to the language of interaction in lessons, and no classroom observation component (other than the ethnographic case studies) which might inform on this.
(b) There is reference to even greater complexity in language use in the classroom: a suggestion that such classrooms are naturally bilingual contexts and that Cantonese has a role in collaborative learning. This raises questions about the appropriateness of a policy or innovation based on exclusive use of one language in these classrooms. The evaluation, thus, might be considered to have a focus on measurement of the effect of the policy, without consideration of its worth (see sections 2.4, p. 23 and 2.5, p. 26 above).
(c) There is the reference to disruption. More detailed accounts in the case studies relate disruptive behaviour to attitudes towards English as a subject, the challenge of following lessons in English, and the opportunity presented by the monolingual teacher of English for use of taboo language in Chinese. These issues were understood at the start of the scheme, and were not explored in any significant way by the initial evaluation design. In chapter 8 we explore a similar finding in relation to classroom activities in the evaluation of the Primary Modern Language Project (PMLP) in Ireland: the problems encountered in exclusive use of the target language in the classroom. In both cases the absence of a classroom observation component in the evaluation design made the issues difficult to explore.
These issues relate to new approaches to evaluation explored in chapter 3. A more constructivist approach would focus on what this program meant for different stakeholders, and would develop data-gathering processes and data sets which facilitated sense-making in relation to the innovation. A realist evaluation would have examined the mechanisms involved in the innovation, and explored the outcomes of these in particular contexts (such as schools in the different bands). A utilisation focus would have developed these aspects to enhance the management of the scheme and professional support for the teachers and others. In addition to these observations on how this evaluation informed on the central innovation of the program, a number of more general points about evaluation design and implementation can be made.
Public perception versus evaluation data in policy decision-making

As the end of the two-year pilot period approached, the Education Department announced the termination of the project. School management and individual contract factors meant that a decision on the future of the program had to be made before the final evaluation report was ready. Two factors in particular contributed to the decision to close the program: (1) the interim results (analysis of the data from the Year 1 cycle of tests, attitude questionnaires and reporting protocols) failed to show a positive impact; and (2) the sustained media attention to the program, in particular value-for-money questions. The closure of the scheme, however, did not mean the end of native-speaker teachers of English in Hong Kong secondary schools. The factors which motivated the scheme in the first place proved enduring for many of the participating schools and others. A key feature of the succeeding scheme, the Native English Teacher (NET) scheme, was low visibility: it was an initiative of schools rather than of the Education Department; it was managed by schools rather than by the British Council; and the cost of the scheme was determined by local contractual arrangements rather than by a financial statement by the Education Department.

The time-frame for data-based decision-making

An evaluation set up, as in this example, to determine the worth of a pilot program will always present the challenge of use. On the one hand, to wait for the complete report on the pilot program (in this case presented one month after the end of the pilot program) would mean a hiatus between the pilot and a continuation or more extensive roll-out of the scheme. The PMLP Pilot Project in Ireland illustrated in chapter 8 had two extensions before the publication of the Evaluation Report. On the other hand, to make decisions for the post-pilot phase in a time-frame appropriate for school planning and management, and for recruitment and contracting of staff, may mean deciding on the basis of interim findings. This was particularly problematic in this program, where the interim results were less positive than later returns, when the innovative features of the program had bedded down.

Internal and external evaluators

The team responsible for managing the program and supporting the expatriate teachers was also responsible for designing and implementing the evaluation. Thus, while there were external consultants, the internal evaluators had to manage two tensions. First, their interactions with, and written reports on, teachers and schools were, on the one hand, part of the evaluation data-gathering process, and on the other, a professional support service to teachers and schools. Second, as the public face of the program, they were involved in both justifying the program and adjourning claims for its effectiveness until the completion of the evaluation. They were acting as external, independent
evaluators and as program managers and advocates at the same time, in a critical media context and also in a program context where many individuals were having personally and professionally stressful and distressing experiences. The challenge of integrating evaluation and management processes is taken up in Part 3.

Difficulties in linking quantitative and qualitative case study data sets

This evaluation illustrates a mixed-method design, though it may be unusual in that the ethnographic component was added as a response to aspects of program management. The challenge of drawing together the different analyses to inform on a particular evaluation focus is similar to that experienced in many evaluations: one cannot add together an ethnographer’s telling episode and a significant statistical measure. However, they can contribute to the same critical debate about what the data may mean. The findings in relation to Hypothesis 3 illustrate this point. The hypothesis states: EELTs do not significantly increase pupils’ exposure to English language learning. The quantitative data provided no significant evidence on which to reject this hypothesis – the only hypothesis not rejected. The issue here relates to the more complex view set out in the ethnographer’s report (see Data 7.4 above), which suggests that exposure to English may lead to increased rather than decreased use of Cantonese, and thus may not be an opportunity for learning. An integrated analysis of these findings, and of the extent to which they may be informing on the same phenomenon, provides an opportunity for understanding the role of the native-speaker teacher ‘as a linguistic and cultural resource’, the nature of bilingualism in the foreign language classroom, and policy development in relation to these.

Involvement of stakeholders – the teachers

The commitment in the design of this evaluation to stakeholders was compromised by the practicalities of getting the scheme off the ground in a short time. The schools involved were considered stakeholders, but there was little consideration of the range of stakes within this constituency – it was assumed, for example, that the reporting protocols returned by head teachers would reflect the views of the English teachers in the schools, but experience within the scheme showed that this was rarely the case. The program was designed to include incentives, in terms of resources such as staffing, for participating schools. However, these were incentives only from the perspective of school managers; in many cases, individual teachers were disadvantaged in terms of workload. Perceived unfairness to such teachers served to maintain negative accounts of the program in the media, and provided local English teachers, whose main issue may have been the tacit critique of their performance implicit in the program, with a more tangible issue for the public arena. The challenge for evaluation of getting stakeholder involvement right is explored in chapter 12 below.
The external environment – the media coverage

It is appropriate that the evaluation of social programs funded from the public purse is the subject of public interest and media debate. It is an enduring view of evaluation that it should facilitate such interest and discussion (MacDonald 1976; Kushner 1996). This evaluation raises two issues regarding such input from the wider context. First, there is the issue outlined above relating to the different evidential requirements of debates in the media and of debates towards policy evaluation and decision-making. Second, there is the need to defend a program based on its rationale, in the context of unbalanced media coverage and as part of the process of program implementation generally, even though such a defence prior to evaluation findings may be considered biased.
Human dimensions of evaluation

A key feature of this evaluation was the intercultural dimension at different levels. As an evaluation of a foreign language program, the program raised socio-cultural and socio-linguistic issues of affect, identity and affiliation, as well as issues of language use in classroom interaction. The program also had a significant host/guest structure, which established social requirements on both hosts – the participating schools – and guests – the expatriate teachers. This aspect of the program had two effects on the evaluation. First, as reported in the final evaluation report (EELTS 1989), the hosts were more positive about the guests’ contribution than the latter were about their own. It is necessary to consider these professional judgements through the lens of social relations. Second, in the context of negative media attention, such social relations may become strained. When the social dimension of being guests and hosts is damaged, the challenge of maintaining a professional working relationship is very great indeed.
7.8 Afterword – 15 years on

The EELTS Scheme was followed in 1989 by the Native English Teacher (NET) scheme, which in both its design and implementation is a more devolved, loosely coupled program. In addition to differences in structure and administration, the approach to the management and evaluation of the NET scheme is different: the emphasis is on school-based self-evaluation of English language provision, and on examination of the deployment and contribution of the NET within this (Poon and Higginbottom 2000). Evaluation activities include collaborative workshops within and across schools, and the dissemination of best practice in schools with NETs. However, a key issue in the EELT program and evaluation – the professional status of the native-speaker teachers and local English teachers – remains unresolved. Poon and Higginbottom
(2000: 52) describe four ways in which the NETs contribute to English language in schools:
1. NETs as curriculum designers.
2. NETs as contributors of specific skills and ideas.
3. NETs as an English language resource.
4. NETs as support in creating a language-rich environment.
The view of one head teacher in the program emphasises the notion of the special contribution of the NET:

If we treat the NET as an additional English teacher to reduce the general workload of the English Panel, a local teacher might do equally well, and that defeats the purpose of the scheme. We hope to take full advantage of our NET, not only as an English teacher to benefit our students, but also to bring innovative ideas to our English panel, and to serve as a human resource to support the professional development of all our teachers. (Poon and Higginbottom 2000: 25)

There is no reason why any teacher should not contribute to school development in this way. In a situation, however, where English is a school subject and is the only subject where ‘ordinary’ teachers work with such an authoritative resource in the school, this view may contribute to tensions or be seen as compromising the autonomy of teachers. In addition, there is a wider, critical discourse on the role of native speaker teachers (Phillipson 1992; Pennycook 1994), and there are ongoing debates in countries such as Japan, China and Thailand (Braine 1999; Lai 2003; Kiernan 2004) on whether native speaker teachers or local teachers of English provide better learning opportunities, and whether native-speaker teachers merit preferential terms and conditions. Given this background to language programs and their evaluation, it is necessary to develop more comprehensive accounts of the pedagogic interactions afforded by these different teacher types than was developed by the evaluation described in this case study. A key issue is the validity of the construct of the native-speaker teacher as linguistic, cultural and professional resource, and realisations of this in the particular context of English. It is interesting to note that in the Primary Modern Language Project in Ireland described in chapter 8, native speakers (of French, Spanish, German and Italian) constitute one category of teachers, but a particular contribution deriving from this factor does not emerge as an issue.
7.9 Summary

This account of the EELTS evaluation has profiled a range of evaluation issues. The focus on a specific language program issue – the native-speaker teacher
as a linguistic and cultural resource – provides an opportunity to address a research as well as an evaluation issue (see section 11.4, p. 197 below). The mixed-method approach raises issues of data analysis and interpretation. As a pilot study it raises issues of evaluation and policy development. The complex architecture of people and interests raises stakeholding issues, and the impact of media attention and day-to-day program management on the evaluation provides insights into these issues. The next chapter also presents an evaluation of a pilot study – the introduction of modern foreign languages into the Irish primary school curriculum – and extends the discussion of many of these issues.
8 Evaluating Foreign Language Teaching in Primary Schools
8.1 Introduction

The focus of this chapter is the evaluation of an initiative to introduce a modern foreign language into the primary school curriculum in Ireland, the Primary Modern Language Project (PMLP). The initiative and its evaluation share five features with the Expatriate English Language Teachers’ Scheme (EELTS) case study presented in chapter 7. (1) It is an evaluation of a pilot study established as part of the development of a public sector, state-wide innovative policy. Both pilots were invested with the capacity to inform decision-making vis-à-vis language in the school curriculum. As discussed later in this chapter, the decision-making function played out somewhat differently in this case compared to the EELTS evaluation. (2) The language education innovation is introduced – as in the case of English in Hong Kong – in a complex sociolinguistic context. In Irish primary schools two languages, Irish and English, are core parts of the curriculum, and the modern foreign language thus constituted a third language. While the three-language policy conforms to the Council of Europe position on languages in education and social life, it creates particular challenges for teaching resources and learning effort. (3) The evaluation addressed learning and teaching issues such as teachers’ skills, learning materials and social attitudes. Just as the Hong Kong evaluation set out to build knowledge on the contribution of the native-speaker teacher which might inform language teaching policy in other contexts, the PMLP evaluation sought to identify factors and practices which contributed to effective teaching and learning. (4) Although both programs initiated major changes in classroom processes and interactions, neither evaluation had a major classroom observation component such as that developed for the evaluation described in chapter 5. Whereas the Hong Kong EELTS evaluation sought to understand what happened in classrooms through scrutiny of learning outcomes and attitudinal change (with an ethnographic element added on later), the PMLP relied largely on self-report and assessment of language skills. (5) Finally, both evaluations
were integrated into the day-to-day management of the programs. In the EELTS case, the management team was responsible for evaluation processes. This role, particularly in the context of intense media interest in the program, meant that members of the team were viewed as program insiders by the media, and as outsiders, i.e. independent evaluators, by teachers and schools. In the PMLP evaluation, the commissioning of the evaluation at the outset, before there was any actual teaching and learning to evaluate, meant that the evaluators had an opportunity to become familiar with teachers, trainers and pilot school administrators, and the problems they were engaging with in setting up the program. This study differs in two ways from the large-scale evaluations described in chapters 5–7. First, the evaluation was initiated early, so that there was time to understand the program as an innovation, and to design instrumentation and procedures with the benefit of these insights. Second, the evaluation team from the ITE was from the same language education community as the program participants. They had carried out a series of studies on Irish language learning and teaching in these schools, and thus could avoid the linguistic, cultural and conceptual divides which had to be managed in the evaluations described above in chapters 5–7. The evaluation in chapter 7 was firmly grounded in the experimental tradition (see section 2.2, p. 18 above). The PMLP evaluation takes explicit epistemological direction from Stake (see section 2.4, p. 23 above):

A full evaluation results in a story, supported perhaps by statistics and profiles. It tells what happened. It reveals perceptions and judgments that different groups and individuals hold – obtained, I hope by objective means. It tells of merit and shortcomings. As a bonus it may offer generalizations (‘The moral of the story is . . .’) for the guidance of subsequent educational programs. (Stake 1967: 5, in Harris and Conway 2002: 1)

In this chapter we examine how the subjective is captured by objective means. We consider how this methodological compass shapes the survey study in Year 1 of the pilot, and the case studies (including language tests) in Year 2. In the final section we consider six implications for evaluation:
1. Pilot project evaluation and use for policy development.
2. Implementing the Stake perspective.
3. Dynamic account of the baseline.
4. Focus on views and achievements – not lesson processes.
5. The bilingual hinterland of these classrooms – expectations of high levels of use of the TL in the classroom; pupil difficulty in following the lessons.
6. What the project did not develop: use of ICT; and transition to secondary school modern language learning.
8.2 Context

There are three principal context factors in this evaluation: (1) the program is in Irish primary schools, where two languages – Irish and English – are core curriculum subjects at all stages of the primary school curriculum; (2) the issue of additional languages in the primary school is both a theoretical language learning issue, particularly related to age factors in language learning, and a wider policy issue for curriculum development. For several decades, the benefits and processes of introducing an additional language in the primary curriculum have presented questions for policy development in a range of countries. In the European context, where the three-language policy is set out as an educational norm, there is value in an account of how this policy might be implemented (in this connection, see Cumming 2001 and the comparative language education study in 25 countries). In this case, the three languages are:
• a language of everyday communication and literacy (English);
• a heritage language (Irish); and
• a foreign language (French, Spanish, German or Italian).
The policy development issues are particularly relevant in this context, since there has been some debate about the teaching effectiveness of, and achievements in, learning Irish, the heritage language (Harris and Murtagh 1999), and therefore questions about whether an additional language component in the curriculum would further aggravate this situation, or improve linguistic achievement in the curriculum overall. Finally, (3) the evaluation was established at the start of the program, so that the evaluators:

were able to examine the start-up phase in some detail rather than just the end result after two years. In particular, we have been able to document teachers’ early experience during the start-up phase – something which other evaluations have not been able to do. (Harris and Conway 2002: 21)

The evaluation, therefore, in telling the story of the PMLP, has the potential to inform our understanding of how different language elements interrelate in the primary curriculum, how in practice the age factor impacts on the curriculum, and how the innovative aspects of the program are managed.

The program

The aims of the program emphasise affect, opportunity and administration. There are no specific learning goals, such as achieving a certain proficiency level (but see Table 8.4) or completing a syllabus.
Table 8.1 The aims of the program
(i) To foster positive attitudes to language learning.
(ii) To establish co-ordination between language teaching at the first and second levels.
(iii) To encourage diversification in the range of languages taught.
(iv) To enable a greater number of children in a wider range of school types to study modern languages in our schools.
(Harris and O'Leary 2002: 14)
The broad aims of the pilot program (Table 8.1) thus present both opportunity and challenge for the evaluation team. They have to 'discover' the program construct as it is implemented, and inform on broad education goals which may be influenced by factors other than the pilot program, for example Aim (i), and which may not be clearly evidenced within the two-year pilot period, for example Aim (ii). The PMLP team interpreted this program remit from the perspective of what might be considered a broadly Communicative Language Teaching approach (Hedge 2000), with a particular concern to reflect good practice in the young learner context (Moon 2000; Cameron 2001).
Table 8.2 Principles for implementation
1. As much use of the target language as possible as the normal language of the classroom.
2. A concentration on language as a means of communication rather than as a body of knowledge to be learnt.
3. An emphasis on discovery learning rather than on the didactic approach.
4. The use of active learning approaches, including drama, songs and games.
5. Appropriate use of IT, especially the use of school links and virtual school exchanges via email or the internet.
6. The creation of an awareness of the way of life of the people who are native speakers of the language, and of the European dimension of living in Ireland.
(Harris and O'Leary 2002: 14)
The program involved pupils in the 5th and 6th classes (11–13 year olds) in selected primary schools taking a foreign language (French, Spanish, German or Italian) for 1.5 hours per week. The project was initiated by the Ministry of Education, and schools were selected to provide a nationally representative account of how the program operated in different school contexts. Data 8.1 illustrates the approach to selecting schools for the pilot program.
Data 8.1 Selecting schools for the pilot program
Schools were notified of the project and the conditions for participation and were invited to apply for inclusion. From a total of 1,300 schools applying, 270 were chosen. Care was taken in the selection so that the full range of educational, social and linguistic variables relevant to the enterprise would be taken into account. The final group of project schools represented a mix of urban/rural/town schools, boys/girls/mixed schools, all-Irish schools, schools in Gaeltacht areas [communities where Irish is the language of everyday communication], disadvantaged schools and special schools. While it was hoped, in the interests of language diversity, to strike a balance between the four target languages, this proved difficult to achieve. The final selection included 133 schools for French, 71 for German, 44 for Spanish and 22 for Italian. (Harris and Conway 2002: 14)
The project from the outset had its own administrative and resource centre, and seven project leaders whose main task was 'to support the teachers of the different languages and provide in-career development' (Harris and Conway 2002: 14). Four categories of teachers were involved:

Table 8.3 Teachers involved in PMLP

Staff teachers: Regular class teachers; Exchange teachers from within the school
Visiting teachers: Secondary level teachers; Native speaker teachers
These teacher categories, it was hypothesised, represent different teaching strengths: the staff teachers are particularly skilled in primary level teaching and learning; the secondary level teachers in modern foreign language teaching; and the native speakers in linguistic and cultural knowledge. In addition to bringing different professional strengths to the program, these professional groups brought different perspectives to the evaluation. Speaking with the voices of their professional communities, they articulated both their own stakes and interests, and the pedagogical task of the program in ways that required careful interpretation by the evaluators. This insider perspective taken by an external evaluation team illustrates the complexity of insider/outsider notions in evaluation (see also section 12.3, p. 206 below). The project was set up for two years (1998–2000) and then extended for a third year (2000–1). At the end of the third year, the pilot became an initiative, an aspect of the curriculum which was not a trial to develop policy, but rather an accepted part of policy and provision. The aims of the program and the principles for implementation provide a perspective on the initiative
which relates to educational policy on the one hand, and the practice in classrooms and schools on the other. This aspect of the program presents a challenge for the evaluation: on the one hand, the evaluation construct might derive from the aims, and focus on the provision of learning opportunity in both cultural and curricular terms. On the other, it might focus on learning achievements, and assessment of the quality of provision in terms of language skills development. In section 8.3 we explore how the evaluation examines both: the Stake orientation to providing guidance for subsequent initiatives constitutes a stimulus to understand the elements of the program which make for success. Telling the story, however, presents some limitations: aspects of the policy and practice such as transition and use of ICT which are not central to program implementation during the pilot period remain largely unexplored. These issues are discussed further below.
8.3 Aims and scope of the evaluation

The stated purpose of the evaluation reflects different evaluation discourses explored in chapters 2–4. There is an emphasis on accountability, in terms of value for money, measurement of outcomes and contribution to the wider field of language teaching policy in Ireland. The perspective here represents an outsider-for-outsider view: educational policy-makers and evaluation commissioners addressing the concerns of the sponsors in the program, that is, the Irish and European Union funding bodies. The aims of the evaluation represent a marrying of the classic accountability focus in Table 8.4 with the more process-oriented account deriving from the approach of Robert Stake (see sections 8.1 and 2.5, p. 26 above). Aims 4 and 5 in Table 8.5 might be considered to address directly Reason 2 in Table 8.4. The other aims represent a documenting of curricular process, which might be the basis for critical debate (Rea-Dickins 1994), or for policy-makers, language learning researchers and practitioners to use for their different needs (Mitchell 1989).
Table 8.4 Reasons for commissioning the evaluation

1. To ensure that value for money was obtained.
2. To study the outcomes in terms of levels of proficiency attained and attitudes inculcated.
3. To derive maximum value from the experience so that overall policy on language teaching in Ireland can be informed by the outcomes.
Table 8.5 Aims of the evaluation

1. To document the organisation of the pilot project within schools.
2. To identify methodologies and approaches used in modern languages classrooms and assess their effectiveness.
3. To ascertain teachers' views on participation in the pilot project, detailing the positive and negative implications for all those concerned.
4. To study pupils' achievement in the modern language.
5. To investigate pupils' attitudes and motivation and other affective aspects of modern language learning such as lesson anxiety, self-concept as a language learner and perception.
6. To obtain pupils' views of the modern language lesson in their own words.
7. To examine the extent to which attitude and motivation are linked to achievement.
8.4 Design of the evaluation

The evaluation of the pilot project had two main strategies: a survey of teacher attitudes, and a study of 22 classes, which included assessment of learning outcomes, pupil attitudes and teachers' assessment of pupils' competence. Table 8.6 sets out the focus of these studies. In Year 1 the principal activity was a survey of teachers, providing baseline data on their expertise, experience and professional development needs. These data also served to initiate an account of teaching and learning within the program, and to evaluate the activities of the program management team. In Year 2, the 22 Classes Study provided for engagement with actual learning and the range of factors which influenced this. The approach to assessment of learning outcomes is set out in Table 8.7, p. 144. The design of the test had to address four principal factors. First, it had to be communicative in orientation in order to address this central aspect of the program construct. Second, it had to be practicable in terms of the resources available. The sampling strategy facilitates an individual, interactive assessment of selected pupils from selected schools which might be considered representative of the program as a whole. Third, it needed to capture the 'wide spectrum of experiences which characterize project classes' (Harris and Conway 2002: 147). The same testing procedure, therefore, had to identify learning achievement at all points of this spectrum. Fourth, it had, as with all aspects of this evaluation, to contribute positively to the program. Thus, there was a concern to avoid a test which was so difficult as to be discouraging or damaging to pupils' own perceptions of progress and skills. For this reason the test started with simple items and then progressed to more difficult ones. The guidelines to the examiners emphasised support and mutually satisfying management of the interaction: pupils were, for example, encouraged to repeat correct answers provided by the examiner, even though this would not register a score.
Table 8.6 Summary of the evaluation design

Year 1 – Teachers' Survey
Instrument: questionnaire with 62 items in six sections.
Components: organisational issues within schools; general support and in-service; teaching the modern language; pupils' reactions to learning the modern language; links with the wider community; general observations on the project to date.

Year 2 – 22 Classes Study
Pupil questionnaire with 88 items: nine attitude-motivation scales (61 items); four other scale areas (24 items); three questions on likes, dislikes and suggestions for change in relation to modern language classes.
Teachers' ranking of pupils: teachers' reports of pupil ability in the ML; pupil interest in the ML; and pupil difficulty with the ML and three other subjects (English, Irish and mathematics).
Class performance: audio-taped, ten-minute performance by each class on songs, poems, drama or other performance for entertainment.
Linguistic/communicative test: face-to-face oral test of six pupils from each of 22 schools; the sample was made up of two randomly selected pupils from each of the high, middle and low ability categories in the teachers' ranking of pupils (see Table 8.7).
The criteria used are evidenced in the band descriptors in Table 8.8, p. 145. The criteria for the spoken communication tests emphasised communication: a linguistically correct response where the information was incorrect was rated 1, while a response that was communicatively adequate but linguistically incorrect was rated 2.
Table 8.7 Specification of the tests used in the evaluation

1. Listening comprehension (all pupils in each of 22 schools) – five sections:
(i) Spoken word/picture matching test (eight items)
(ii) Utterance/picture matching test (ten items)
(iii) Spoken narrative with questions presented orally and in writing in the test booklet in the language of the school (English or Irish) (four items)
(iv) Spoken narrative with multiple-choice (MC) questions presented orally and in writing in the test booklet in the language of the school (English or Irish) (four items)
(v) Spoken narrative with multiple-choice (MC) questions presented orally and in writing in the test booklet in the modern language (four items)

2. Spoken communication tests (six pupils in each school, two (randomly selected) from each of three ability levels as ranked by the teachers) – five sections:
(i) Social/personal conversation (nine items)
(ii) Picture-based comprehension and production (twelve items)
(iii) Simple reading (five items)
(iv) Simple writing (five items)
(v) Following directions (five items)
Despite this overall communicative orientation, however, the tests were made up of a series of separate items, in a way that limited the discoursal connectedness of real world language use (Weir 2004). The evaluation team experimented with tasks such as role play, but at the pilot stage these proved problematic to implement. From a validity perspective such activities would favour those pupils who had learned through role play tasks as part of their program, and disadvantage those for whom role play was new. Thus, the assessment was simple in item design and scoring system, and more complex in administration: the examiners were encouraged to be supportive and as far as possible to mediate to allow each pupil to perform to the best of their ability. We see this approach to assessment as reflecting some principles of formative assessment (Rea-Dickins 2001) and of dynamic assessment (Lantolf and Poehner 2004). It thus suggests ways in which language assessment in evaluations like this, and in young learner contexts more generally, might be developed.
Table 8.8 Assessment criteria for the communicative test
Score 0: The child does not understand the question, and even after prompting, does not respond adequately according to the criteria set out below.

Score 1: (a) The child responds in English to a question asked in the target language (TL). (b) The examiner has to ask the question in English to ensure the child understood, but the child then responds in the target language. (c) The child gives the correct answer in the target language, but requires assistance to do so: the examiner has to give examples of similar type answers, for example, 'Tu es en quelle classe? Deuxième, troisième, . . .', or has to begin a word or sentence for the child, for example, 'Quel âge as-tu? J'ai . . .'. (d) The child gives a linguistically acceptable response but the actual information is incorrect: 'Quelle heure est-il?' Correct response: 'Neuf heures'; child's response: 'Trois heures'.

Score 2: The child gives a response in the target language which is communicatively adequate, but not linguistically correct. Despite the response containing errors of grammar, vocabulary or pronunciation, the meaning is still clear and appropriate, for example: 'Quel âge as-tu? Je suis douze ans.'

Score 3: The child gives a communicatively adequate, linguistically correct response in the target language. Minor errors in pronunciation are acceptable. It is also acceptable for the examiner to assist the child by pointing, gesturing, rephrasing questions or stressing key points in questions.
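The logic of these band descriptors can be illustrated with a short worked example. The following Python sketch is ours, not part of the published evaluation: the judgement categories, the function name and the sample calls are invented for illustration, and the conditions compress the band 1 descriptors into a single test, so it should be read as a gloss on Table 8.8 rather than as the scoring procedure actually used by the PMLP assessors.

# Minimal sketch (our illustration, not the PMLP instrument): mapping an
# examiner's judgement of one response to the 0-3 bands of Table 8.8.
def band(in_target_language, needed_assistance, information_correct,
         communicatively_adequate, linguistically_correct):
    # Band 0: no adequate response, even after prompting
    if not in_target_language and not communicatively_adequate:
        return 0
    # Band 1: answered in English, needed examiner assistance, or gave wrong information
    if not in_target_language or needed_assistance or not information_correct:
        return 1
    # Band 3: meaning clear and language correct; Band 2: meaning clear despite errors
    if communicatively_adequate and linguistically_correct:
        return 3
    return 2 if communicatively_adequate else 1

# Invented examples
print(band(True, False, True, True, False))     # 'Je suis douze ans' -> 2
print(band(True, False, False, True, True))     # correct language, wrong information -> 1
print(band(False, False, False, False, False))  # no usable response -> 0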
8.5 Data collection

The data for both parts of the evaluation were collected and analysed by the evaluation team in close collaboration with the project leaders and the ML teachers. Two aspects of the data collection merit particular mention here: the administration of the pupil questionnaire and the tests.

Administration of the pupil questionnaire

Collecting data from children presents particular difficulties. There may be greater variation in comprehension and interpretation of items than with adults, and there may be lapses in concentration in responding to a long questionnaire (the instrument had nine pages of scale items, followed by the three open-ended questions). The questionnaire used is a slightly adapted
form of an Attitude/Motivation Test Battery (AMTB) developed (validated and standardised) for use with Anglophone Canadian pupils (Gardner et al. 1979). The administration of the questionnaire was led by the modern language teacher in class. The items were read aloud by the teacher, who monitored the pupils' progress in completing the questionnaire, but without influencing in any way the responses marked. Teachers were provided with a detailed script to work from, which included explicitly working through three practice questions (about football teams, school holidays and TV programmes – topics entirely separate from the focus of the questionnaire), and use of the blackboard to ensure clear shared understanding of how to respond by encircling scale responses. This approach provided for flexibility in the time taken to complete the instrument, and for some explanations to be provided where pupils had queries. The guidance notes required teachers to provide such explanations 'within the meaning and as far as possible within the vocabulary of the printed item' (Harris and Conway 2002: Appendix 8). Teachers administering the questionnaire were invited to include observations on individual items or pupils' reactions to them when they returned the completed questionnaires.

Administration of the tests

As with the pupil questionnaire, the tests were administered to pupils in the 22 Classes Study – the case study element of the evaluation. The assessment was carried out in four stages:
1. A whole-class performance in the modern language – songs, poems, dramas.
2. Teacher assessment – teacher ranking of pupils.
3. Listening comprehension test for all pupils in the 22 classes.
4. Spoken communication test for selected pupils – six from each class, two from each of three ability bands (high, medium and low) according to the teacher assessment (a sketch of this sampling step follows below).
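The stratified selection in stage 4 – two pupils drawn at random from each of the three teacher-assigned ability bands in each of the 22 classes – amounts to a simple sampling routine. The sketch below is illustrative only: the roster format, the pupil identifiers and the fixed seed are our own assumptions, not features of the PMLP data.

import random

def sample_class(roster, per_band=2, seed=None):
    """Pick `per_band` pupils at random from each teacher-assigned ability band."""
    rng = random.Random(seed)
    selected = []
    for band in ('high', 'medium', 'low'):
        pupils = [pid for pid, b in roster if b == band]
        selected.extend(rng.sample(pupils, min(per_band, len(pupils))))
    return selected

# One invented class of nine pupils yields a spoken-test sample of six
roster = [(f'P{n:02d}', band) for n, band in
          enumerate(['high'] * 3 + ['medium'] * 3 + ['low'] * 3, start=1)]
print(sample_class(roster, seed=1))  # six pupil IDs, two from each band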
Each pupil was issued with an identification number, which was written on each assessment form and on the teachers' ranking of pupils, as well as on the attitude questionnaire. This permitted a tracking of individuals through the data, and also served to validate the sampling procedure – the extent to which the results of the smaller sample whose spoken communication was assessed corresponded to the findings of the attitude questionnaire and the listening test. The administration of the tests was integrated in three ways in the PMLP project as a whole. First, as noted in the design features of the tests, there was a concern that the tests should contribute to the effective implementation of the program. The whole-class performance and the supportive engagement of assessors in the spoken communication test are instances of such a contribution. Second, the testing was carried out by the assessors who
developed the tests. This was possible because of the size of the sample of pupils who participated in the tests; it ensured adherence to the curricular principles of the PMLP and limited the need to train assessors new to the program. Third, there were pre- and post-test meetings and workshops where issues of implementation and interpretation of bands were resolved.
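The role of the pupil identification numbers in linking records across instruments, and in checking how far the spoken-test sub-sample resembles each class as a whole, can also be shown in a small worked example. The data below are invented for the purpose, and the comparison of means is simply one plausible check, not the analysis reported by Harris and Conway.

# Invented records keyed by pupil ID: a whole-class listening score, a whole-class
# attitude scale mean, and the sub-sample selected for the spoken communication test.
listening = {'P01': 18, 'P02': 12, 'P03': 15, 'P04': 9, 'P05': 20, 'P06': 14}
attitude  = {'P01': 4.2, 'P02': 3.1, 'P03': 3.8, 'P04': 2.9, 'P05': 4.5, 'P06': 3.3}
spoken_sample = ['P02', 'P05']

def mean(values):
    return sum(values) / len(values)

# Compare the sub-sample with the whole class on the measures taken from everyone
for name, scores in (('listening', listening), ('attitude', attitude)):
    whole = mean(list(scores.values()))
    sub = mean([scores[pid] for pid in spoken_sample])
    print(f'{name}: class mean {whole:.2f}, spoken-test sub-sample mean {sub:.2f}')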
8.6 Implications for evaluation

Pilot project evaluation and use for policy development

The timing problem inherent in the use of pilot program evaluations for policy decision-making, discussed in the context of the EELTS evaluation in chapter 7, equally prevails here. Whereas there, the decision was taken to terminate the policy initiative, based on evidence other than the final evaluation report, in this case an extension and then an incorporation of the initiative as policy were agreed prior to the publication of the evaluation report. As we illustrated in chapter 1 and section 3.2, p. 37 above, there is an increasing reliance on evaluation findings for policy decision-making. However, the timing problem will always constitute a serious complication. On the one hand, evidence-based approaches suggest basing decisions on data from evaluation studies such as the pilot programs here and in chapter 7, an approach which would involve suspension of the program from the end of the pilot period to the point of decision-making. On the other, the interest aroused and expertise developed in the pilot program may constitute an excellent start for a new policy, and suspension would mean a loss of these. There is a need perhaps to understand pilot programs in a different way: as instantiations of a longer-term policy which contribute to understandings of how this policy might be implemented, rather than whether it is an appropriate policy and mode of implementation. In the PMLP case, the longer-term policy is the European three-language approach, which the Irish government is committed to and is unlikely to change. In the Hong Kong context, the need for English language skills in the workforce, the use of English in schools and classrooms, and the role of native-speaker teachers are all features from the wider context (Holliday 1998; Checkland and Scholes 1999; Lincoln 2001), and have not changed in the last decade. The question for evaluations of pilot programs, therefore, is whether they should relate to issues of development rather than accountability. Such evaluations need to address questions of implementation and practice rather than policy options in a context where the policy is already set. The PMLP evaluation might be seen as effective in this management of orientation, and the EELTS evaluation less so.

Implementing the Stake perspective

The aspiration to tell the subjective story of the PMLP in objective terms is a laudable one, but presents difficulties in terms of the construct of the
evaluation. At one level there is a problem with addressing the VFM (value for money) question, and thus informing policy with regard to use of resources for educational provision. At the implementation level, the evaluation provides a rich and valuable account of what the project involved in terms of the interpretations of practitioners within schools. This, however, means that aspects of the program which were not really engaged at the level of implementation – for example, use of ICT and issues of transition to secondary level (which might be considered a key VFM issue) – are not part of the story even though they are issues of policy development.

Dynamic account of the baseline

An innovative aspect of this evaluation was its launch at the same time as the pilot program. There is a tradition in educational innovation of evaluation as an afterthought (for example, the evaluations described in chapter 4), or a focus on outcomes rather than innovative process (for example, the evaluation described in chapter 5), with the result that the evaluation of innovation fails to engage with the program as an innovation (Kiely et al. 1995; Tribble 2000). The PMLP evaluation provides valuable perspectives on the start-up processes. This baseline in terms of teachers' pedagogic skills is, however, developed from self-report rather than classroom observation data. There is a trade-off here that characterises evaluation design in many contexts. On the one hand, the evaluation can work with self-report as in this case, and provide a non-threatening environment for teachers to represent their classrooms and their own professional development needs. On the other, it can establish a more precise account of teachers' linguistic knowledge and pedagogic skills through a classroom observation component (see chapter 5 for a detailed example of such an evaluation), but with the risk of embarrassing, demotivating or alienating teachers. In this case, reports of preparedness for teaching at the beginning and end of Year 1 are useful as accounts of the success of the professional development and project support, and might be particularly useful in the context of extension of the program and large-scale improvement of teachers' own linguistic skills.

Focus on views and achievements rather than lesson processes

As the evaluation did not have a classroom observation component, it was not possible to consider what actually happened in the program. This issue is particularly interesting as the most frequently used class activity as reported by teachers – Whole-class repetition of sentences/phrases – was also an activity which the teachers reported was not enjoyed by pupils, and one viewed by the evaluation team as unlikely to conform to the principle of communicative use of language. There are a number of hypotheses which might explain both the frequency and unpopularity of this theoretically inappropriate phenomenon. (These are considered in the report but cannot be set out here for reasons of space.) From an evaluation perspective, this issue
might be an appropriate focus for a structured classroom observation study. Innovation in language programs has proved both complex and challenging (Markee 1997; Rea-Dickins and Germaine 1998), and a study of this particular question might contribute to a wider understanding of why teachers act as they do in classrooms and programs.

The bilingual hinterland of these classrooms

A key feature of this program and evaluation was the expectation of high levels of use of the target language in the classroom. A key finding here was pupils' difficulty in following the lessons. The discussion of these findings in the evaluation report sets this in context. First, the teacher dissatisfaction with managing the classroom in the target language may be influenced by the high and effective levels of bilingualism (English and Irish) which already prevail in these primary school classrooms. Second, the pupils' experience of difficulty in comprehension and following the lesson (the most frequently noted 'dislike' by pupils) also reflects a point raised in the reports on the monitoring of Irish teaching in primary schools. A frequent area of dissatisfaction there is the comprehension difficulties experienced by pupils. While the evaluation concludes that there is a need to invest more care in the development of materials, there is also a need to develop a more appropriate learning culture where uncertainty and failure are accepted, and where taking risks and making mistakes are considered an integral part of the process of discovery and communication-oriented language learning (key principles of the program). The evaluation here is drawing together epistemological strands from the curricular context, as represented by the data, from other relevant evaluation studies of the bilingual classroom context and from theoretical perspectives on communicative language teaching.
8.7 Summary

In this chapter we have presented an evaluation of a pilot program which raises some of the issues which arose in the evaluations presented in chapters 5–7. The evaluation design reflects transparent approaches to sampling and to achieving a balance between the big picture presented in the survey data and the more detailed accounts presented by the case study element. Two particularly interesting dimensions of this evaluation are the strategies developed to elicit attitude and language performance data from young learners, and the interface with policy-making and program management. The articulation of the evaluation construct represents a distinct contribution to issues of practice in the program, as does the implementation of the evaluation as a collaborative enterprise with program management and training.
9 Evaluating Program Quality in Higher Education
9.1 Introduction
In this chapter and the next we present a case study of evaluation for quality management in an English for Academic Purposes program in a British university. The data for the case study derive from the evaluation processes themselves and the ethnographic study of these processes as part of a PhD study (Kiely 2000). This form of evaluation is characterised on the one hand by integration into professional practice – teaching – and, on the other, by integration with institutional quality management. Both characteristics are concerned with improving the program so that the students' learning experience is continually enhanced. This case study engages with a range of current evaluation issues raised in chapter 3. There is an accountability element, which in many ways involves compliance with mandates (see section 3.6, p. 50), both within the institution and between the institution and higher education agencies. Equally, there is a concern with sense-making, in the constructivist sense (see section 3.3, p. 40) and with use of evaluation (see section 3.2, p. 37). In this chapter we describe the approach to this form of evaluation through analysis of institutional and departmental evaluation policies, and examine the ways in which evaluation processes provide opportunities for teacher learning.
9.2 Context
The context of the evaluation is a set of English language and Applied Linguistics programs in a British university. The university was one of a large number which became independent – that is, autonomous – universities rather than colleges of the local authority in 1992. This development in status involved assuming responsibility for its policies and procedures from two authorities: the Council for National Academic Awards (CNAA), which before 1992 oversaw all academic processes, and the local authority, which had been responsible for policies relating to management and administration.
The political orientation which accompanied these changes emphasised accountability, transparency and attention to student perspectives as users or clients, as well as efficiency in management practices. In addition to these factors, the nature of the student groups was contributing to a need for change: as a result of increasing participation in higher education, there were students with less strong academic credentials, mature students on professional courses and international students. For all these groups, the traditional approach to teaching and learning could not be assumed to meet expectations and needs, and to ensure success in learning (Haselgrove 1994; Laurillard 1994; Gibbs 1995). The institution which is the home of this case reflects all these features: it was in the process of establishing its own academic management procedures and policies, a process informed by the need for curriculum and staff development and by student diversity. One of the policies set out (by an expert working party from across university departments) was on evaluation. The key themes of the approach to evaluation centred on stakeholders and development (see Table 9.1, p. 157). Evaluation was viewed as a demonstration of the institutional commitment to transparency and equality of status (the Directorate was listed as a program for evaluation), and the means of engaging all staff and students in a virtuous cycle of learning and development. In relation to programs of study, there were two particular foci: module evaluation, which examined specific courses, and program evaluation, which looked at the students' experience of learning more broadly. A module represents a single course, a teaching unit of 30 hours, and the basic unit for the award of credits towards an award. The focus of this case study is an EAP module, which involved students from different programs. The policy devolved responsibility for implementing evaluation (that is, designing instrumentation, collecting and analysing data and using findings) to programs and departments. It also set out a provision for a university-wide module evaluation questionnaire, and although such a questionnaire was developed, it was not widely used, for two reasons: first, departments and programs generally took action to implement the policy in a way which addressed local concerns; and second, the questionnaire, which covered all possible experiences of learning, had many redundant or vague questions. This questionnaire and the long tradition of teaching quality evaluation questionnaires are explored in the next chapter.
9.3 Aims and scope of the evaluation
The general principles of evaluation within academic programs in this institution reflect an essentially stakeholder approach, and a concern to balance accountability and development (see Data 9.1). Accountability is characterised as 'decision-making', and development in terms of benefits to the range of stakeholders, especially students, in terms of positive changes in their own approach to learning and in the curriculum. As discussed below,
these themes were particularly influential in the further development and implementation of the evaluation policy for English language programs.
Data 9.1 General principles of evaluation
GENERAL PRINCIPLES
A. Stakeholder evaluation is undertaken primarily for two purposes: accountability and development. The university is committed to use evaluation to inform decision-making and to aid the development of an effective teaching/learning environment.
B. Evaluation methods used must demonstrate a balance of quantitative and qualitative approaches and seek both internal and external evaluative data.
C. Evaluation methods must use appropriate criteria and systematically collect information so that the quality and effectiveness of modules and programmes can be assessed.
D. Stakeholders should feel that they own evaluation through active participation in the process and by seeing that their evaluations are being used to make positive changes.
E. Evaluation methods must be user friendly and the resulting information should be easy to collate and should be communicated effectively.
F. In order that the evaluation process is implemented effectively staff need training. Conversely, involvement in evaluation leads to staff development.
G. Stakeholder evaluation should not be confused with staff appraisal, for which entirely different systems must be set up.
(Institutional evaluation policy)
These general principles accommodate the concerns of teachers in three ways: (1) teachers remain in control of key issues such as criteria and instrumentation; (2) they are provided with training; and (3) there is no link between such evaluations and appraisal or performance management. The focus, thus, is firmly placed on enquiry to understand the student learning experience, and on using this understanding to improve the program. The absence of a link to staff appraisal, and to assessments which might be used in processes such as promotion and extension of contracts, distinguishes student evaluations here from those elsewhere, especially in the US, where such uses are set out as the 'summative' functions of student evaluations (Marsh 1987; Pennington and Young 1989; Wachtel 1998; Fresko 2002). A key principle centres on the use of evaluation findings. Two uses are set out in the policy, one implicit and development-oriented, the other explicitly described and accountability-oriented. The use for development comes from teacher engagement with the process and the findings. One instance of such development is that of a vocabulary teaching strategy, discussed in detail below. The second is the quality assurance process which takes place after
each academic year ends and involves committees at different levels in the institution receiving and considering reports. Data 9.2 sets out the route for evaluation reports: key issues are summarised and extracted for attention by relevant institutional offices (such as library, admissions, examinations). These different forms of evaluation use occur in very different time frames: the teacher's response is within the time frame of a ten- or twelve-week program, while the institution-wide response is months later, when aggregated reports are processed through the committees.
Data 9.2 Information flow of evaluation reports

[Flow diagram in the original. Evaluation reports on modules and programmes pass through the following bodies: subject committee; programmes manager; resources manager; departmental board; learning services, business services and central services; head of unit; academic policy committee; Directorate.]
In keeping with the developmental, use-oriented emphasis in the policy, actual evaluation practices were for each department to develop. The Department of English Language Programs interpreted the policy in a strongly developmental and integrated way for two reasons. First, the student body involved a high proportion of international students on English language, Applied Linguistics or teacher development courses. While some were on three-year degree programs, many were on shorter courses. Each group represented a range of needs, expectations and understandings in relation to teaching and learning. Evaluation within modules was a means of developing a dialogue about learning which would assist both students and teachers in their task. Second, the majority of teaching within this department was not in the conventional lecture/seminar model. Classes rarely exceeded 20 students and involved task-based sessions or workshops, which emphasised active learning roles for students. For these reasons, a two-stage approach to evaluation
was devised: a mid-course evaluation using a group discussion approach, and an end-of-course questionnaire. Data 9.3, from the department's Quality Assurance Handbook, provides an overview of the departmental approach.
Data 9.3 Departmental approach to evaluation of programs

Evaluation informs on:
Module: Aims; Outcomes; Assessment strategy; Resources
Module team: Teaching strategy; Teaching style
Students: Needs & wants; Contributions; Problems

At module-level, evaluation ensures quality teaching in two ways:
1. It provides a structure during a module for discussing and resolving any problems, and often serves a meta-learning function by raising student awareness of their learning strategies. At this level it is fine-tuning of the mode of delivery and clarification of expectations that is involved.
2. It provides at the end of the module an opportunity to discuss strong and weak points. This leads to changes in the teaching or assessment methods for future cohorts of students, and to inform on more fundamental changes needed in the content or learning outcomes, and on issues for the Subject Group as a whole.
(Departmental student handbook)
The evaluation policy is thus highly prescriptive insofar as it requires teachers to engage with students in an evaluative discussion of teaching and learning. It is also flexible: it has the potential to remain sensitive to the teacher’s need to lead and act on this discussion, and to relate evaluation to experienced learning issues.
9.4 Design of the evaluation
The design of the evaluation centred on the students’ view of their learning experience at two stages of each course. A mid-course evaluation (usually in
Week 5 of each twelve-week course) was an opportunity for students and teachers to take stock, assess strengths and weaknesses, and identify emphases for the rest of the course. An end-of-course evaluation was carried out by means of a questionnaire. Data 9.4 illustrates the approach to the group discussion. Its focus is the student experience and students' concerns. The structure of the activity makes it very difficult for students not to participate – the discussion in the next chapter shows how one group of students, for whom this was a novel experience, were adept at managing the discussion. Use of the findings within the particular course was facilitated by the requirement to report back to students the following week, and draw up a list of priorities or action points. These data were then combined with the results of the end-of-course questionnaire, and the report was presented to the program group in the first instance and passed on to the committees of the institution.
Data 9.4 Recommended procedure for group discussion evaluation

Departmental policy on evaluation: interpretation and implementation strategy for institutional evaluation policy

English Language Programmes Group (ELPG) policy is that all modules should a) be evaluated after four weeks by means of a classroom discussion, using the nominal group technique, and b) use a final evaluation questionnaire at the end of the module. The procedure is as follows:

1. Posing the questions
Two questions are written on the board, one dealing with positive aspects of the module, the other dealing with the negative. Example questions might be: What three aspects of the work in this class have helped you to improve your knowledge of ——/did you most enjoy? What three aspects of the work in this class have not helped you to improve your knowledge of ——/did you least enjoy?

2. Silent nominations
Individuals are given, say, 5–10 minutes to list their responses on a piece of paper/card. Complete silence is advisable.

3. Master lists
The teacher compiles two master lists on the BB/OHP, taking only one positive item from each member of the class in rotation, until each student has nominated three items, or the list is exhausted. The procedure is repeated for negative items. No editing is allowed and no evaluative comment (by the teacher) is made at this stage. It is helpful to number the items.
4. Item clarification
Each item is discussed by the whole class until each student is clear about what it means. If a student feels that their item is already covered by someone else, they may request that the item is withdrawn.

5. Final listing by students (silent)
Students choose from the master list their three most important positive points and three most important negative points, this time in order of priority.

6. Teacher prepares final evaluation
By whatever means she/he prefers – one possibility is to award the first listed item three points, the second two points, etc. – the teacher prepares a list of the group's priorities, and:
(a) presents the list to the class for discussion in the next lesson. If required/necessary/possible, an agreed change in the course plan for teaching and learning can be made.
(b) The list of class priorities, together with any changes of procedure or content agreed with the students, should be lodged with the ELPC coordinator and with other programme coordinators, as appropriate.
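Step 6 of the procedure implies a simple points tally. The sketch below shows one way the arithmetic might be done in Python; the 3–2–1 weighting is the possibility mentioned in the guidelines, while the item numbers and sample rankings are invented for illustration.

from collections import Counter

def group_priorities(rankings):
    """rankings: one ordered list of item numbers (most important first) per student."""
    points = Counter()
    for choices in rankings:
        for rank, item in enumerate(choices[:3]):
            points[item] += 3 - rank       # 3, 2 and 1 points for first, second, third
    return points.most_common()            # items sorted by total points

positive = [[1, 4, 2], [4, 1, 3], [1, 2, 4]]   # three students' final listings
print(group_priorities(positive))              # [(1, 8), (4, 6), (2, 3), (3, 1)]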
Just as the general principles of evaluation for the whole institution showed particular sensitivity to the concerns of teachers, these guidelines at departmental level illustrate the need to take account of power disparities. Thus, the role and contribution of the teacher is marked out precisely, to ensure that there is no inappropriate influence on the comments made. In some groups, comments are made on cards or pieces of paper which are collected and redistributed randomly before being read out, a measure which provides anonymity to students who might not otherwise raise their concerns. Chapter 10 provides a detailed account of the operation of this procedure in an English for Academic Purposes class, in the context of an overview of the use in evaluations of Nominal Group Technique and other structured discussion methods.
9.5 Implications for evaluation
In this section we explore three aspects of this form of evaluation. First, how the evaluation process is integrated into the course and merges with teaching activities such as learning needs analysis, and the promotion of positive learning strategies (in the teachers’ and department’s view). Then we look at the relationship between evaluation and learning, how teachers’ own professional approach and activities change in response to evaluation findings, and finally how involvement in evaluation in this way enables students to become better learners.
The evaluation construct

The operation of this evaluation policy was researched as a PhD study (Kiely 2000). This study developed case studies of module evaluations to investigate the features of this form of evaluation, as teaching, curriculum development and teacher development on the one hand, and institutional quality management on the other. Analysis of the structure and emerging purposes of the evaluation suggests an evaluation construct more complex than feedback on student satisfaction, and more embedded in the course as a whole than the activities at mid-point and the end of each course would suggest. One case study illustrated a three-stage structure (Kiely 1998):

Stage 1: Baseline evaluation – establishing the pedagogy
Stage 2: Formative evaluation – taking stock and moving on
Stage 3: Summative evaluation – preparing for next time

Stage 1 involves introducing the module in terms of learning outcomes, teaching and learning activities, and assessment requirements, as well as the role of evaluation. Because it was noted as a key point in student handbooks, teachers routinely drew attention to it in the first session: on two occasions teachers suggested students prepare for this by reflecting on learning activities; others established a link between the mid-point evaluation and the activities to support students in self-evaluations of strengths and weaknesses in EAP. Stage 2 is the group discussion activity in Week 4 or 5 – a formal stock-take of how well the module is working, and establishment of emphases for the rest of the unit. Stage 3 is the questionnaire activity at the end of the session, which for the teachers is about preparing for next time: it is future-oriented in a way that distinguishes this form of evaluation from others. Kiely (1998) identified a range of purposes for the evaluation activity as it became owned by teachers:
Table 9.1 Purpose of module evaluation carried out by teachers

(a) A demonstration of quality assurance at program implementation level (in many ways a higher-order function of which the others are implementation strategies).
(b) A means of getting feedback from students on the appropriateness of activities for their learning needs.
(c) A means of getting students to reflect on their language skills development, and identify what they still need to learn.
(d) A means of persuading students to engage with the opportunities for learning outside the classroom.
(e) A means of negotiating emphases for the remainder of the programme.
These purposes, derived from analysis of classroom processes and interviews with tutors, illustrate an awareness of the accountability function noted in the general principles (Data 9.1 above) and in the institutional use of evaluation findings (Data 9.2 above). Teachers routinely referred to the 'requirement' for such an evaluation: one teacher commented in interview that she introduced it in this way so that students would be aware that it was not her decision to use class time for this activity. This point might suggest a lack of conviction about the benefits of the evaluation process (or perhaps a lack of confidence that students could be persuaded to see the evaluation as a beneficial activity for them). The other purposes – (b) to (e) in Table 9.1 – however, point to a complex curricular role for evaluation. They cover a range of aspects of teaching – evaluating materials and classroom activities, understanding students' learning needs, sharing this understanding with students, engaging students in learning activities likely to meet these needs and negotiating teaching activities for succeeding weeks. The construct of evaluation here is thus a form of enquiry-based teaching: in Stenhousian terms (see section 2.7, p. 32 above) it is curriculum design and implementation in the action context of the classroom. (See also section 10.3, p. 162 below for further discussion of these purposes.)

Evaluation and teacher development

Kiely (2001) reports on the development of one aspect of the pedagogic strategy deployed in the program: the pedagogic strategies which were central to the teacher's approach, the profile these had in the classroom discussion, and their development in the program in the post-evaluation period. In relation to the strategy for dealing with vocabulary encountered in academic reading, the classroom observation and interview data illustrated a change in the teacher's approach. Table 9.2 sets out the changes: the teacher (named Anna in the report of the case study) moves from Strategy 1 to Strategy 2 and back again. The moves are prompted by evaluation feedback and are effected after initial hesitation or resistance. Strategy 1 involves not explaining the new words in the text, but rather allowing students to develop strategies of guessing from the context or using dictionaries. Strategy 2 involves using class time to explain the words which students ask about. These two cycles can be seen as illustrating the flexibility of an experienced and informed teacher's principles. The analysis shows that the teacher's concern was to maintain the recognised value of the program rather than adhere to an aspect of methodology which she believed in. Where evaluation feedback showed clearly what the students considered of value in terms of classroom activity, the teacher oriented her teaching to incorporate this feature. The change in teaching strategy here resulting from program evaluation feedback corresponds with other research findings in one way, but disagrees with much of the research in another. Anna, like the other teachers studied, did not feel that the student feedback was a significant influence on
Table 9.2 Anna's response to evaluation feedback on vocabulary encountered in texts

Cycle 1
Feedback: Anna gets feedback from students suggesting less attention to explaining words in class.
Resistance: Anna interprets this as selfish thinking on the part of students.
Reflection: Anna rationalises that this might not be the best way of using classroom time.
Innovation: Anna develops a pedagogy which focuses on comprehension of ideas rather than individual words.

Cycle 2
Feedback: Anna gets feedback from students suggesting more attention to explaining words in class.
Resistance: Anna resists suggestions that her focus on 'broad swathes of meaning' should change.
Reflection: Anna: 'This group say they want it, so I try to do it for them', and includes a short activity in Week 6.
Innovation: Anna spends more time on vocabulary in Weeks 9 & 10, and provides vocabulary tasks to texts in Weeks 11 and 12.
her teaching strategies, although she was aware that she 'adapted' to classroom situations. Wachtel (1998) and Fresko (2002), researching student evaluation and teachers' views in American universities, found similar attitudes. Taut and Brauns (2003), in an analysis of resistance to evaluation, note that in the context of an evaluation:

Stakeholders will undertake a personal cost-benefit analysis and may anticipate more disadvantages than advantages from the evaluation, e.g. loss of approval or valued tasks, feedback of failure . . . . In this case the stakeholders will try to prevent the anticipated negative outcome and may resist evaluation. (Taut and Brauns 2003: 252; emphasis in original)

Taut and Brauns also note that evaluations, people making judgments about other people, risk altering power balances within social organisations. It is possible, therefore, to see such teacher attitudes as an initial response to questions about the impact of student evaluations. The analysis of Anna's teaching strategies in the weeks following the evaluation, however, shows that within the program, she adapts her strategy to what the students prefer.
This suggests that the impact of evaluation feedback on a complex area such as teaching may not be accessible through questionnaire studies of teachers, but rather through documenting patterns of classroom behaviour. The implication here for researching evaluation processes as a way of understanding the professional learning aspect of evaluation is taken up in Part 3.

Evaluation and learner development

The research into student evaluation of teaching suggests that students benefit from the practice of evaluation through the impact evaluation has on curriculum development. The policy examined in this chapter posits that students should benefit in a more immediate way. For this reason a mid-point evaluation is a key element of the approach. Kiely (2004) explored the ways in which students benefited from the group discussion evaluation which they participated in. The key issue appeared to be the development of an evaluation meta-language. In order for students to provide feedback effectively on the program they need to have a meta-language to talk about the components of a program and about their own needs in a way which facilitates a dialogue with the teacher. The high profile of evaluation in the programs studied seemed to engage the students in an ongoing discussion about the program elements and their purpose. This had the effect of the teacher and students discussing features of learning such as discourse markers, presentation skills and critical thinking. Kiely (2004) showed that the more successful students on the program were those who understood and used this meta-language: they integrated the program objectives with self-assessment of their learning needs.
9.6 Summary
This examination of an evaluation policy, and validation of key themes such as program and curriculum development, professional learning and change for teachers, and benefits to students, illustrates the potential impact of evaluation. This impact is especially important in language and teacher development programs with international students, where there may be a significant induction and acculturation element to successful learning. The account presented here suggests that there are benefits to programs from a clear policy at institutional level which, at the same time, does not 'take over' the quality management of the course from the teacher. Where the evaluation process is owned by participants, they may implement it in a way that promotes learning by teachers and students, and ongoing quality enhancement of programs. Chapter 10 explores in greater detail the group discussion procedure and the role of students as informants, and identifies ways in which a policy such as that set out here might be augmented.
10 Evaluating the Student Experience in Higher Education
10.1 Introduction
In chapter 9 we examined a framework for program evaluation for quality management purposes in the higher education context. We looked at an evaluation policy devised to address both program development and institutional accountability requirements. A feature of the policy – devolved responsibility for the implementation and use of evaluation – was seen as realising the potential of evaluation for both teacher development and enhanced student learning. In this chapter we focus on methodological aspects of student evaluation. We explore the student experience construct through analysis of different questionnaire designs and the operation of a group discussion technique. In the contemporary context of developing teaching and learning in higher education, where the views of students as service users or clients are key drivers of policy and practice, the ways in which such views are constructed and understood are important. We identify areas for the development of evaluation in this chapter, and revisit these themes in Part 3, where we set out ways for evaluation practitioners and researchers to extend situated understanding and knowledge-building.
10.2 Context
The context of this form of evaluation, set out in section 9.2, p. 150 above, is a British university responding to wider participation in higher education, and concerned to develop teaching and learning to meet the needs of students on a range of professional learning programs and to improve teaching and assessment through more diverse strategies and use of Information and Communication Technology (ICT). These characteristics prevail in other higher education systems, particularly in the Anglophone countries (Ryan 1998; Jacobs 2000; Kember, Leung and Kwan 2002). There is a tradition – long-standing in the US, more recently established in other systems – of
using student satisfaction questionnaires at the end of courses. There are three challenges in this enterprise: (1) understanding what satisfaction means and how it relates to other constructs of quality; (2) devising appropriate instrumentation to collect data; and (3) ensuring use of feedback to improve educational provision. The challenges are not new – in chapters 2–4 we illustrated how much of the history of evaluation has involved engagement with these issues. The context of meeting them, however, is new insofar as it now involves not evaluation experts, but teaching and management professionals in educational institutions.
10.3 Aims and scope of the evaluation
Student evaluation of teaching in higher education has been used as a management tool in American universities for several decades. Marsh (1987), in a major review of the research into this form of evaluation, identified five purposes for providing opportunities for students to evaluate teaching effectiveness:
1. Diagnostic feedback to faculty about the effectiveness of their teaching that will be useful for the improvement of teaching.
2. A measure of teaching effectiveness to be used in administrative decision-making.
3. Information for students to use in the selection of courses and instructors.
4. A measure of the quality of the course, to be used in course improvement and curriculum development.
5. An outcome or process description for research on teaching. (Marsh 1987: 259)
Marsh notes that Purpose 1 here is general and universal; Purposes 2, 3 and 4 are more specific, and reflect local institutional practices; and Purpose 5, while an estimable aspiration, had, by 1987, not been realised, and arguably the links between such evaluations of teaching and research issues in teaching have not been developed. Kiely (1998), in a study of the role of evaluation within one program in a British university (see section 9.5, p. 156 above), found a different set of grounded purposes for student evaluations:
1. A demonstration of quality assurance at program implementation level (in many ways a higher order function of which the others are implementation strategies).
2. A means of getting feedback from students on the appropriateness of activities for their learning needs.
3. A means of getting students to reflect on their language skills development, and identify what they still need to learn.
4. A means of persuading students to engage with the opportunities for learning outside the classroom.
5. A means of negotiating emphases for the remainder of the programme. (Kiely 1998: 98)
There is a significant difference in focus between the constructs of evaluation here. Marsh states that it is evaluation of teaching, that is, the students’ judgment of the performance of the teacher. While this is an important element of the construct in the British university context, there is also a view of program evaluation from the students’ perspectives, that is, the students’ evaluation of their learning experiences. This typically includes a focus on resources, learning support, including, more recently, the role of information and communication technology, and assessment procedures, as well as the performance of the teacher. While the intention is often to inform on learning as transformations (Checkland’s SSM – see Concept 2.4, p. 30 above) or transactions (Stake’s Countenance Evaluation – see section 2.5, p. 26 above), the focus may be on these opportunities for learning as service provision. As in Marsh’s list, Purpose 1 here is generic, an expression of policy and rationale, while the other purposes are specific and relate to practices which may or may not characterise specific programs.
Marsh (1987) identified a range of factors which both supported and threatened the validity and reliability of student evaluations. He found some evidence of reliability – for example, a strong measure of peer agreement on courses (a form of inter-rater reliability) – and evidence of validity – for example, a strong correlation between the views of past and present students on courses (possibly illustrating that stated views were not intended to influence assessment processes), and a (less strong) association between students’ evaluation of a course and their performance in it. This last aspect suggests that achievements in learning, which determine the outcome for each student in terms of the assessment process, might also determine student satisfaction with a course.
Student evaluation questionnaires
One outcome of the research into this student evaluation was the development of the Student Evaluation of Educational Quality (SEEQ) questionnaire (Marsh 1987). This instrument identifies nine aspects of the student experience of a course (see Table 10.1). Students use the instrument by indicating agreement with statements on a nine-point scale, as follows:
9 – Strongly agree
7 – Agree
5 – Neutral
3 – Disagree
1 – Strongly disagree
Table 10.1 Summary of the SEEQ
Student evaluation factors and statement focus:
• Learning/academic value: Intellectual challenge; learning achieved; interest; learning materials
• Instructor enthusiasm: Performance of teacher in terms of enthusiasm, energy, dynamism, use of humour
• Individual rapport: Friendliness, genuine interest, responsiveness, accessibility
• Examinations/grading: Quality of feedback, fairness of assessments, links between assessment and course content
• Organisation/clarity: Clarity of explanations and handouts, links between content and objectives, note-taking facility
• Breadth of coverage: Content – range of aspects (theories) covered, background and implications, research of teacher and others, currency
• Group interaction: Encouragements to participate in discussion, share ideas, ask questions, challenge the teacher’s view
• Assignments/reading: Value, appropriateness and accessibility of readings
• Workload/difficulty: Difficulty, pace and actual workload compared to other courses
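The nine-point scale and the nine factor areas lend themselves to straightforward numerical summary. By way of illustration only – the SEEQ materials themselves do not include this, the factor names below are simply taken from Table 10.1, and the ratings are invented – the following short Python sketch shows how per-factor mean ratings might be computed from a set of responses:

```python
# Illustrative only: summarising ratings on a nine-point agreement scale
# (9 = strongly agree ... 1 = strongly disagree) by evaluation factor.
# The factor names follow Table 10.1; the responses themselves are invented.
from statistics import mean

responses = [
    {"Learning/academic value": 7, "Instructor enthusiasm": 9, "Workload/difficulty": 5},
    {"Learning/academic value": 8, "Instructor enthusiasm": 7, "Workload/difficulty": 3},
    {"Learning/academic value": 6, "Instructor enthusiasm": 8, "Workload/difficulty": 4},
]

factors = sorted({factor for r in responses for factor in r})
for factor in factors:
    ratings = [r[factor] for r in responses if factor in r]
    print(f"{factor}: mean {mean(ratings):.1f} from {len(ratings)} students")
```

Summaries of this kind are the sort of output that can then feed the formative and summative uses of the instrument discussed below.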
In addition to this student questionnaire, the SEEQ provides a lecturer self-evaluation instrument and a format for open-ended responses. The SEEQ is designed for modular schemes where students are taking a range of separate courses: the basis for judgements is comparison with other courses. It is also designed for courses with one teacher: respondents identify the teacher and the course. Within the nine factor areas, each institution can adapt the statements or devise its own. The SEEQ has both formative and summative evaluation functions: it provides a context for the development of teaching, and of the curriculum more broadly, by individual teachers, departments and institutions as a whole. It is summative in the way that the feedback is used to make decisions about teachers: the personnel function in terms of promotion, tenure and performance-related pay. Further details of the SEEQ are available at http://www.cea.curtin.edu/seeq/index.html. Richardson (1994; 1995) reports on a number of research studies, mainly in the British and Australian university systems, to develop a questionnaire based on inventories of learning orientations and behaviours. When used for course evaluation purposes, it was found that features of courses, such as teaching and assessment strategies, influence studying behaviours, which in turn impact on learning and academic success. However, these effects are not
universal – different levels of students in different subject areas and course contexts respond differently. Richardson advises that findings from such instruments should be ‘interpreted with care’ (1995: 515). This does not always happen:
In reality, student feedback tends to be collected at the departmental level in a one-off, isolated and snap-shot manner, rarely analysed effectively on a year-by-year basis, let alone published or acted upon. (Opacic 1994: 161)
This wider experience with generic instruments, or institution-wide questionnaires, reflects that of the case study described below, where there was a project in the mid-1990s to develop an institution-wide questionnaire to evaluate aspects of the student experience (see Table 10.2). It was based on a service-provision approach, eliciting directly the kinds of information which such quality management systems might use. Section D (reproduced in Table 10.3) illustrates, in the focus on 70 per cent of time, a notion that derives from a management-oriented, benchmarking approach to the evaluation of service provision. As with all ‘service provision’ evaluations, the emphasis is almost exclusively retrospective, with little consideration of what has been learnt and the extent to which the learning experience has prepared students for further learning or other activity. In contrast to the SEEQ, where the construct is centred on the experience in one teacher’s course, the construct here is the provision by the university as a whole. The rationale for this approach to evaluation of the student experience had both accountability and development dimensions. The institution was acknowledging to students its responsibilities and actual services. The evaluations of these would in turn provide opportunities to develop these systems and services. In addition to this comprehensive coverage of the student experience, the questionnaire was characterised by a limited focus on the performance of individual teachers.
Table 10.2 Structure of the institution-wide questionnaire
Aspects of student experience:
Section A: Admissions – First years only (Questions 1a–1e)
Section B: Induction – First years only (Questions 1–6)
Section C: Timetable (Questions 1–5)
Section D: Attendance and study time (Questions 6–11)
Section E: Tutorial support and guidance (Questions 12–25)
Section F: Placement information [if appropriate] (Questions 26–30)
Section G: Examinations (Questions 31–34)
Section H: Participation in committees (Questions 35–42)
Table 10.3 Section of university-wide questionnaire
Section D: Attendance and study time
6. Are you able to attend at least 70 per cent of the timetabled classes? YES/NO
7. Are you able to do about 70 per cent of the learning outside class which is required by your program? YES/NO
If you have replied NO for questions 6 or 7 (or both), please answer questions 8–11 below.
8. Is your attendance less than 70 per cent because:
You have to work during term time?
You are better able to work independently?
You find it difficult to manage your time?
You have domestic commitments which take priority?
Other (please specify)——————————————––
(Please rank 1–5: 1 = most important; 5 = least important reason why you attend less than 70 per cent)
9. You are unable to do at least 70 per cent of the recommended learning outside class because:
You have to work during term time?
You are better able to work independently?
You find it difficult to manage your time?
You have domestic commitments which take priority?
Other (please specify)——————————————––
(Please rank 1–5: 1 = most important; 5 = least important reason why you attend less than 70 per cent)
10. Do you think your marks have suffered as a result of attending less than 70 per cent of the classes? YES/NO
11. Do you think your marks have suffered as a result of doing less than 70 per cent of the learning expected outside class-time? YES/NO
The piloting of this questionnaire raised a number of points which inform on the validity requirement of instruments which attempt to take a broader perspective than the SEEQ:
• There were issues about when the questionnaire should be administered: at the time of examination, for example, the experience of admissions and induction might be distant memories.
• The attempted focus on every possible learning experience, such as placement and committee work, meant that for many students there was extensive redundancy.
• The limited focus on student concerns about aspects of provision which affected their success in terms of progression and awards, such as teaching and library resources, raised stakeholder and validity issues.
• The notion of 70 per cent of timetabled class and out-of-class learning time (see questions 6–11, Table 10.3) proved difficult for students to access.
• All these points seemed exaggerated in the experience of international students engaging with these concepts from another linguistic and cultural background. (from interviews with managers, Kiely 2000)
The use of evaluation for accountability purposes and for the development of programs generates different evaluation constructs. First, there is a teacher performance construct (underpinning the SEEQ questionnaire discussed above), which, as Marsh (1987) points out, is complex and may not correspond to success in learning. Second, there is a management construct, where the focus is the overall service provision of the institution (the approach which underpins the pilot version of the university-wide questionnaire above). Third, there is a learning construct, the focus of the work described by Richardson (1995) and in ELT programs by Crabbe (2003), where the key issue is the set of learning opportunities afforded by the program.
The development of an approach to evaluation requires engagement with two sets of stakeholder issues (see also chapter 12). First, there is the challenge of capturing the key elements of the student experience of learning. Second, there is the need to engage teachers, in terms of accepting the validity of student judgements, and devising and implementing measures to improve performance. This second point is complicated by institutional use of findings – what in the US is termed ‘summative’ use of student evaluation findings. This may be seen as an example of evaluation constituting a loss of control and threat to self-image, and thus generating resistance and reactance (Taut and Brauns 2003; see section 3.2, p. 37 above). Brivati (2000), in the ‘Don’s Diary’ column in the Times Higher Education Supplement in the UK, provides a succinct account of this response:
Collate student questionnaire findings. Our universities are the most assessed and bureaucratised in the world. We stand, like Tony Banks when he took the oath of allegiance to the Queen, with our fingers crossed behind our backs, swearing on our Gods of quality assessment-assurance-grade inflation committee regulations and hiding our judgements behind mountains of statistics. (Brivati 2000: 8)
The emphasis is on (1) the bureaucratic orientation of the process, and (2) the lack of validity in the judgements made. These reflect views elsewhere in the literature (Green 1994; Chater 1998). They can be seen as part of the wider development task of using evaluation, both to understand the impact of the changes in higher education in recent years and to improve learning within programs (Gibbs 1994; Haselgrove 1994; Henry 1994; Opacic 1994). An increasing aspect of the evaluation task is understanding the learning experience of
international and English as an Additional Language students, whose learning experience is particularly difficult to document (Pennington and Young 1989). The teacher resistance issues have a methodological dimension: the use of questionnaires represents a way in which data of questionable validity are used to make judgements about teaching effectiveness. Block (1998) notes that questionnaires fail to capture key aspects of the learning experience within programs. Fresko and Nasser (2001) and Pletinckx and Segers (2001) favour evaluation instruments customised to the nature of the program and the curricular focus of the evaluation, a view evident also in the Subject Overview Reports (SOR), based on teaching quality assessments in British universities. Language program SORs comment positively on democratic aspects of practice, such as the design of questionnaires by students, and the shared management of the evaluation process by teachers and students (Kiely 2003). Alongside measures to improve questionnaires, there has been interest in the development of other data collection procedures. The next section examines one initiative to develop an alternative to the questionnaire, both as a response to the concerns discussed above and as part of the implementation of the evaluation policy outlined in the preceding chapter. The department in this case study developed a form of Nominal Group Technique (NGT) as the procedure for the evaluation of English language programs (see also section 12.3, p. 206 below).
Structured discussion approaches
NGT was developed as an evaluation and planning procedure for health care programs in the 1970s (van de Ven and Delbecq 1972; Delbecq 1975). The procedure for groups of around 15 participants involves individuals writing down descriptive or evaluative statements about their learning needs or experience of the program. These points are then read out, listed and clarified by the facilitator (sometimes after being collected, shuffled and redistributed to distance the point from the identity of the maker). This stage continues in round robin style until there are no more points, and there is a master list. Then a tally or indexing procedure takes place where each point receives a weighting. The product is a prioritised list of actions for the improvement of the program (a minimal illustrative sketch of this tallying stage is given at the end of this section). The round robin structure of the nominations and the transparent tallying contribute to the validity of the list of actions. McPhail (2001), describing the use of NGT to research students’ subject choices, notes that the procedure has particular validity in capturing students’ views because it is ‘unobtrusive and honest with the subjects, involves the participants in all parts of the process, and the researcher is present throughout the whole NGT procedure’ (2001: 168). In the context described in this chapter, NGT was selected for six reasons:
1. There was a link to language learning activities, in terms of the discussion aspect of the procedure.
2. The procedure was time-efficient for tutors, who in effect left the session with a report to send to the program coordinator.
3. The procedure provided an opportunity for learner training and for raising student awareness of how the language learning curriculum was supported.
4. There was an opportunity to explore vague responses by students, and the factors behind ‘It depends . . .’ responses.
5. It avoided questionnaire fatigue, when students are asked to evaluate each of up to eight courses by questionnaire.
6. It facilitated teacher development by providing a safe environment for teachers to access feedback which had implications for their teaching. (Kiely 1999)
The next section explores the implementation of this approach to program evaluation, using data from an ethnographic study of the evaluation (Kiely 2000; 2003). The account presents a narrative of the data collection and analysis, and then considers themes which inform on the nature of this form of evaluation, relating these to the development of evaluation policies as explored in chapter 9 and to the evaluation issues in chapters 3 and 4.
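As flagged above, the tallying stage of NGT can be made concrete with a small worked example. The following is a minimal sketch in Python – the statements, weightings and scoring scheme are invented for illustration, and none of the studies cited prescribe any particular software – showing how individually assigned weightings might be totalled to produce a prioritised list of actions.

```python
# Minimal sketch of the NGT tally/indexing stage: each participant weights the
# points on the master list, and the points are ranked by total weight.
# The statements and weightings here are invented for illustration only.
from collections import defaultdict

# Each participant distributes rankings: 3 = most important ... 1 = least important.
participant_weightings = [
    {"More oral presentation practice": 3, "More feedback on written work": 2, "Shorter reading texts": 1},
    {"Shorter reading texts": 3, "More oral presentation practice": 2, "More feedback on written work": 1},
    {"More oral presentation practice": 3, "Shorter reading texts": 2, "More feedback on written work": 1},
]

totals = defaultdict(int)
for weighting in participant_weightings:
    for point, weight in weighting.items():
        totals[point] += weight

# Prioritised list of actions: highest total weight first.
ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
for rank, (point, total) in enumerate(ranked, start=1):
    print(f"{rank}. {point} (total weighting: {total})")
```

In the procedure as described above this totalling would simply be done by hand on the master list; the sketch only makes the arithmetic of the weighting and ranking explicit.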
10.4 Data collection
The evaluation process began in the first session as the teacher, Anna, highlighted various aspects of the student handbook for the course. In Week 5, she initiated the data collection (in the field notes below, the actual words used are set out in unmarked font, while those which describe events, or are copied from the OHP, are in italics. These notes are from evaluation Case Study 2, Week 5 – CS2: 5).
Data 10.1 Opening of the group discussion evaluation activity
The teacher introduces the module evaluation. There are three questions on the OHP: 1. What has worked well for you? 2. What has not worked so well for you? 3. What would you like now for the rest of the module? The teacher sets out the time frame – just seven sessions left. Asks students to answer each question, and also, to see the questions as a broad guide. Anna: But you should be able to put something under each question. Unless you feel that nothing has worked well . . . [laughter all round]. Take a couple of minutes to think about that, then I’ll ask you to compare with other people. Students chat and note – nearly all seem to be taking down questions from OHP. After a few minutes: Anna: Now I want you to work in groups, to reach an agreement on what to say under each of the headings.
The teacher gives reason for this: I want to prevent an outcome where one person says there is too much of something, and another person says there is too little. In that case I won’t know what to do. After the group discussion we will have a report back. Then I will go away with something clear. The teacher then organised the 17 people into three groups – two of six and one of five. The teacher says that she will leave the room so that they can say what they like [some laughter]. (Classroom notes CS2: 5)
This introduction to the procedure reveals significant features of the teacher’s positioning of herself as teacher and as a teacher/evaluator: she symbolically hands over control of the evaluation to the students by leaving the room; she limits autonomy in this by asking for agreement (justified by the link to her acting on the outcomes); and she places a powerful external constraint – time – on students’ suggestions and requests. The students in three groups discuss the three questions set. A summary of the discussion of one group of six students, developed from analysis of an audio recording of the discussion, is presented in Data 10.2.
Data 10.2 Content analysis of the recorded small group discussion
Comments – What has worked well for you? (made, in turn, by Laure, Della, Sao, Della, Arnie, Della, Laure and Rata):
Link words for essays; Topics like ‘ethnic conflict’; Group work; Teacher’s feedback; (‘I think it is the vocabulary I don’t know’ – difficulty for non-Europeans); Meaningful texts; (Texts difficult to read); (No activity like reading comprehension); (I want to speak about my text); (More discussion based on the texts); (Difficult specialist subject text); [I like the teacher’s way of speaking – Arnie]; [I like the break – Arnie]
Comments – What has not worked well for you? (made, in turn, by Laure, Della, Helen, Arnie, Laure and Helen):
Speaking in front of the class; Structuring speaking; (More speaking in front of the class); Problem is speaking; (I like the teacher’s way of speaking); Three hours is too long in one class; Texts too long to read in class; [‘I think it is the vocabulary I don’t know’ – difficulty for non-Europeans – Sao]; [No activity like reading comprehension – Arnie]
Comments – What would you like now for the rest of the module? (made, in turn, by Laure, Helen, Della, Sao, Sao and Arnie):
More work in groups; More writing; Speaking; Writing – not in groups; Listening comprehension; Writing, e.g. writing summaries; (I like the break); [Need more speaking in front of the group – Della]
Key: ( ) = points relating to a question other than that being discussed, shown in their chronological place in the discussion; [ ] = points relating to a question other than that being discussed, shown in the appropriate category.
For the most part the discussion proceeds according to the structure set out. The boundaries between the three questions are clearly marked: Della starts off with the invitational question ‘What do you like?’; Sao initiates discussion on Question 2 – what hasn’t worked well – at a point where Arnie has led the discussion off-task; and Laure introduces the third question – suggestions for the rest of the program – after a similar lull. A small number of points are made which are not related to the specific question being discussed (these are included in Data 10.2 within square brackets). The next stage of the event was the plenary report back and discussion. This was managed by the teacher, pen in hand at the overhead projector, taking points from each of the three groups in turn. Data 10.3 is a summary of this discussion.
Data 10.3 Content analysis of the plenary feedback session
Question 1 – What has worked well (points from the recorded group and from the other groups, reported by Ina, Arnie, Ina, Laure and Sue): Talking and discussion; Writing; Linking words; Interesting texts; Teacher’s language and speech.
Question 2 – What has not worked well (reported by Mat, Mat, Sao, Mat, Della and Sao): Oral presentations; Vocabulary; More time for reading; People coming in late; More oral presentations; Writing homework.
Question 3 – What would you like now for the rest of the module (reported by Ina, Arnie, Della, Sue, Ina, Sona, Sue, Della and Joe): Grammatical exercises; Three hours too long; More oral presentations; Explanations of vocabulary in texts; Rewrite corrected essays; Shorter texts; Vocabulary focus for writing in class; Planning in class; Show examples of good homework.
The list of factors here which facilitate and impede learning corresponds closely to the list of points which the teacher drew up as part of the report written for discussion in the institution’s committees (see Table 9.2, p. 159). While this list represents a product of the evaluation, it does not capture or reflect the impact of another outcome: the meanings and emphases shared in the interaction in the classroom. The interaction, which leads to the change in the teacher’s instructional strategy described in chapter 9, also informs on how students construct their stake, and shapes the impact of the evaluation on their program. As noted above, the teacher signalled the students’ ownership of the evaluation process by leaving the room for the initial phase. She returned for the plenary but made no substantive comment on the points made by students until one of the last points arising under question 2: What has not worked well for you. Data 10.4 illustrates her response to Sao, in which she makes her position on teaching and learning strategies more explicit.
Data 10.4 The teacher becomes an active participant in the evaluation discussion
Sao: More time to read texts.
The teacher takes this up to try to understand what precisely is the problem, or rather the solution:
Anna: We can talk about what it is you are saying here, do you want more reading outside class? Do you really want short texts? Because short texts are not what you have to read for your main study. (Classroom notes CS2: 5)
For the rest of the time (approximately 30 minutes) the teacher and students discuss the ideas emerging, clarify the issues involved and consider measures to improve the program: what is possible and practical. The immediate outcomes of this evaluation – in terms of changes to the program and impact on teacher learning and students’ enhanced development of EAP skills – are set out in chapter 9. In this chapter we analyse the roles further and identify key features of the evaluation construct, and ways in which the evaluation processes might be further developed.
10.5 Implications for evaluation
In this section we examine four implications for evaluation: (1) the group discussion as a language learning activity; (2) the ways in which students ‘hold their stake’; (3) variations in the status of different student voices; and (4) the role of the teacher.
The evaluation event as a language learning activity
The view of the evaluation procedure as language practice represents a teacher’s construct (not just Anna’s – the discussion procedure was adopted as departmental policy because it facilitated language practice in a way that, say, questionnaires would not, a point also made by other EAP teachers surveyed at the start of this study). This function of the evaluation, however, seems one of rationale rather than of experience: in interviews with the teacher and five students in the period immediately after the event, there is no mention of its merits in relation to the language development function. The language practice issue may reflect a different discourse for the teachers. It may be that, in a curious way, teachers are emphasising the pedagogic aspects of the activity to signal their reservations about the management or monitoring function. Thus, they establish a distance from a process which, in its managerial or quality management function, is regarded as bureaucratic and of little value, but without positioning themselves in direct opposition to it. Thus, in pronouncements in the classroom concerning evaluation, teachers may be stressing the pedagogic function as a means of signalling ownership of the activity. The teacher’s instinctive response in justifying components of a program is to relate them to students’ learning. This is a core responsibility of the teacher, and one which students have an interest in and an awareness of. This boundary management between teaching and management may also explain the reluctance of teachers to report evaluation findings and their perceived resistance to evaluation (Taut and Brauns 2003; see also section 3.2, p. 37 above).
Students’ stakeholding
The analysis of this activity illustrates five ways in which the students, in providing evaluation feedback on their learning experience, articulate their stake (see also section 12.3 below). These are set out in Table 10.4 and illustrate the validity of the procedure in two ways. First, there is some evidence of a focus on learning, of an individual tailoring of the course to maximise learning opportunities. Second, there is attention to the social nature of the program: a recognition by participants that, in addition to evaluating abstract curriculum components, there is an investment woven into the fabric of the program which must not be allowed to unravel.
Table 10.4 Summary of the factors which influence students’ evaluative comments
(a) Access to learning. Students sought to influence the level of difficulty, or elements of linguistic challenge, of the learning tasks introduced in the program. Difficulty is represented by different students as activities which present a challenge and therefore an opportunity for learning, and as an obstacle to learning. Students refer to oral presentations, length of reading texts and vocabulary.
(b) Appropriate instructional strategies. Students challenge the teacher on the appropriateness of teaching strategies such as reading long texts and guessing vocabulary from context. This form of evaluation was noted as having an important persuasion function in terms of the teacher justifying activities and promoting what she saw as good learning practices (see chapter 9). However, in the evaluation discussion, there were instances of the students collectively and firmly declining such advice. They were literally defending their stake, their view of what constitutes an appropriate program. One outcome of this is the change in vocabulary teaching strategy discussed in the previous chapter and in Kiely (2001).
(c) Student expectations. The evaluation discussion was an opportunity for students to articulate their expectations of the program. This included the role of grammatical exercises, opportunities to see ‘examples of good homework’ and ‘locking out’ latecomers.
(d) Appreciation of the teacher’s contribution. The students took the opportunity to comment positively on the teacher’s personal contribution, particularly the clarity and comprehensibility of her speaking style.
(e) Positive thinking. Students took the opportunity to articulate positive elements of the course, from materials types to opportunities for speaking.
There are two sets of language program issues here. Points (a) to (c) represent learning and the degree to which the curriculum supports this. This involves the students in a sophisticated discourse about the course as a series of learning opportunities, prepared and presented by the teacher, but with an awareness of their own co-constructing role as students. Points (d) and (e) illustrate an equally sophisticated awareness of the course as a social event and of requirements to manage relationships and mark appreciation in ways which are often considered somewhat romantic characterisations of the
teacher–student relationship. Overall, the analysis of the evaluative discourse supports the view that group discussions are an appropriate strategy for effective and efficient evaluation of the student experience. The student perspective is articulated, and the public sharing of key issues makes it difficult to avoid working with them (as may be the case with questionnaire findings). In addition to these dimensions of the evaluation process which support validity, there are two issues which relate to the management of the event in the classroom: the equality of status of all voices, and the role of the teacher in managing the procedure. The analysis of these suggests that the factors which contribute to its evaluation credentials are also those which might be limiting its potential.
Equal status of all voices
The weaker students in this group, two of the three students from an Asian country, experienced some difficulty in having their concerns about the level of the course heard. In the small group discussion a stronger student took over the problem with vocabulary and solved it – a solution that had worked for her. In the plenary discussion the teacher entered the discussion to ask whether everyone agreed with a point raised by one of these students (see Kiely 2003 for a detailed analysis of this episode). It seems as though the teacher was colluding with the capable students in establishing and maintaining the level of linguistic challenge which suits them. A further complexity is introduced when, in interview, one of the Asian students explained why she keeps silent in such discussions: in her experience European students have a negative view of Asian students’ skills in English, so speaking out as her classmate did would only confirm that view. The perceived prejudice or racism underpinning participation in this evaluation reflects constructions of Asian identity in other areas of TESOL (Kubota 1999; 2003). The analysis of unequal voice status relates to the validity of the evaluation in two ways (see also section 12.4, p. 215). First, in terms of the construct, there is a problem with how the teacher operates the group discussion procedure (see below). Important points relating to experiences within the program are not accessed, listed and clarified. These problems may be to do with aspects of the program which derive from wider discourses – the proficiency level of those admitted to the program, the cultural identity of some participants, and associated prejudices – which the evaluation could not be expected to resolve, but which it might be expected to engage with. Second, there is a problem with consequences: two students, arguably those in greatest need of improving their English for Academic Purposes, drop out of this program and their university study before the end of the semester, owing, it would appear, to an inability to cope with the demands of studying through English. This latter point goes to the heart of what the institution wants from the evaluation of programs and the quality assurance system as a whole: the success of students as represented in progression and completion rates.
The role of the teacher
As noted above, the performance of the teacher is likely to be part of the construct of this form of evaluation. When the teacher is carrying out the evaluation, there are, therefore, likely to be role conflicts. The procedure described in these chapters requires a facilitating, listening role rather than a directing one. There are three issues in the conduct of the activity here where the role played by the teacher may be a validity issue. First, the teacher shifts between being a neutral scribe, writing down the points as they emerge, and a gatekeeper, questioning points which emerge if she feels the majority do not support them. Second, the teacher has ‘customised’ this event, so that it does not correspond exactly with the procedure as set out in the policy. Table 10.5 lists the differences.
Table 10.5 Comparison of recommended procedure and actual evaluation
Procedure as set out in policy:
1. Implemented in Week 4
2. Two questions set
3. Period of silent individual work
4. Master list of points from individuals
5. Indexing by students (Tally)
6. Evaluative comment from teacher
Procedure represented in data:
1. Implemented in Week 5
2. Three questions set
3. Period of small group discussion
4. Master list from groups
5. No indexing
6. No evaluative comment initially, but there is in response to points related to Question 2
The key changes here – the role of small groups and the absence of the tally stage – compromise the strong role for the individual voice in conventional NGT (McPhail 2001) and place the procedure in the somewhat larger basket of group discussion procedures (Kreuger 1994; Wilson 1997). Third, the teacher’s management of the exchanges in the evaluation event, described above, works with a view that students share proficiency levels and learning needs. The assumed level corresponds to that of the stronger students, who articulate clearly how the activities benefit them. The effect for weaker students is to construct and sustain their invisibility in the program. Time, coming in as a discourse of limitations rather than one of opportunity, plays a recurring role here. The teacher’s awareness of these problems seems not to suggest a solution, and leaves her with no option but to teach those who ‘fit’ the program and effectively ignore those who, in the teacher’s ideal world, would not be on the program at all.
The role of the teacher in the evaluation is constrained by a number of factors: the uncertain space between an authoritative, directive style and a listening, facilitator approach; the void between students’ characterisation of issues and the teacher’s awareness of their underlying complexities; and the difference between the marketing of the program and what is actually achievable in the classroom. There is a case for institutions supporting such teacher-led evaluations: they have the potential to develop a shared focus among teachers and students on the program and curriculum, and enhance the quality of learning and teaching relationships. They need, however, to be complemented by other procedures, including group procedures facilitated by people other than the teachers of the program, as well as the range of more focused research studies, action research studies and surveys which characterise the developing educational enterprise.
10.6 Summary
In this chapter we have presented a detailed account of the evaluation of an EAP program. The account illustrates important benefits to learning, evident in the fine grain of the classroom interactions and in the sophistication and reflection which characterise students’ participation. The activity does not, however, represent the concerns of all students and may be managed in a way which suppresses issues the program and institution should more actively be engaging with. As an evaluation instrument, the group discussion procedure has distinct advantages, and may be superior to questionnaires in relating evaluation to learning. These benefits, however, are evidenced, not so much in the evaluation report passed on to the committees of the institution as in the detailed ethnographic account of the evaluation. There are two messages here. First, there is a need to research evaluation processes as well as devise policies and procedures and promote their implementation. Second, while there may be real evidence of quality enhancement, the report received by the committees may not correspond, in a context of compliance with mandates, to evidence of the institution’s role in quality assurance and control. It is necessary, therefore, for institutions to consider a diversity of ways of evaluating provision and to structure their use so that there is neither a total reliance on questionnaires, nor a lack of survey data which, for example, accumulates over time and permits instrument validation and increasing understanding of the complex construct that is teaching. In Part 3 we consider ways of researching and understanding what is involved in meeting these different requirements.
11 Evaluating Assessment Standards and Frameworks
11.1 Introduction
The last decade has seen a rise in national and regional/state assessment schemes for use in both English as an Additional and Second Language (EAL/ESL)1 and foreign language (FL) contexts (e.g. North and Schneider 1993; North 2000; Council for Cultural Cooperation 2001). These are commonly referred to, interchangeably, as assessment standards, frameworks or benchmarks. A general overview and critique of outcomes-based assessment – as reflected in different frameworks – has been provided most notably by Brindley (1998, 2001; see also Cumming 2001; McKay et al. 2001a and b). More specifically, McKay (2000) has reported on the development of the NLLIA Bandscales (National Languages and Literacy Institute of Australia (NLLIA) 1994), whilst Scott and Erduran (2004) have provided a critique of two assessment frameworks: the NLLIA Bandscales and the TESOL Standards (TESOL 1997), based on the evaluation study by South et al. (2004; and see section 11.2). These assessment and curriculum frameworks have been documented in various ways and developed to address different needs. Some have received considerable financial support, whilst others have not. The considerable diversity across frameworks is evidenced in the evaluation survey of six prominent assessment schemes by South et al. (2004),2 in which they identified a range of supporting material available to facilitate teacher implementation of such schemes. This is demonstrated in Data 11.1, pp. 180–2.
In Data 11.1 a distinction has been made between core and supporting materials. From this analysis, it becomes clear that where there has been not only significant financial underpinning but also major infrastructure support and political will, very interesting and high-quality developmental guidance is available for teacher and school support. Given the complexity of such curriculum and assessment schemes, effective implementation cannot be achieved by teachers without some form of induction and training. It therefore becomes relevant to ask of such documentation:
Table 11.1 Guiding assessment questions
• What view of assessment is conveyed?
• Is there an appropriate balance between summative and formative assessment in relation to the context of use?
• Is the pedagogic face of classroom formative assessment evident?
• What kinds of guidance are teachers provided with to enhance their knowledge about assessment generally and classroom language assessment in particular?
• How is the complexity of teacher assessment managed and conveyed?
• What specific tools are provided within the framework to assist teacher implementation of assessment?
1 The term English as an Additional Language (EAL) is widely used in the UK context, whereas English as a Second Language (ESL) is used in Australia and North America; both terms refer to learners who are using English as the medium of instruction in school contexts but who are not English first-language (EL1) speakers. In addition, the TESOL task force distinguished between ESL, the field of English as a Second Language, and ESOL (English to Speakers of Other Languages), which ‘refers to the learners who are identified as still in the process of acquiring English as an additional language’ (TESOL 1997: 1).
What also begins to emerge from such an evaluation is the overall orientation of the frameworks. Some, for example, incorporate a strong pedagogic focus alongside the assessment standards. This is observed in the NLLIA Bandscales (1994) and the TESOL Standards (1997), whereas there is less evidence of this in A Language in Common (QCA 2000). This is already apparent from a glance at the available accompanying materials (Data 11.1). Assessment framework implementation practice, too, has been diverse, with some frameworks more free-standing than others. In the case of the NLLIA, for example, the bandscales themselves were developed on the basis of extensive consultation with teachers, and there have been structured programs of in-service provision to support their implementation in schools. By way of contrast, A Language in Common has the status of a largely stand-alone document, through which a view of assessment emerges as a largely summative undertaking disembedded from routine classroom practice.
2 The research team comprises Hugh South (NALDIC), Constant Leung (King’s College London), Pauline Rea-Dickins (University of Bristol), Sibel Erduran (University of Bristol) and Catriona Scott (University of Bristol).
Data 11.1 Assessment frameworks: supporting materials
ESL Development: Language & Literacy in Schools
• Developed by: National Languages & Literacy Institute of Australia (NLLIA)
• Core documentation: Vol. 1: Teachers’ Manual (270 pages); Vol. 2: Documents on Bandscales Development & Language Acquisition (259 pages)
• Editions: 1st edition 1993; 2nd edition 1994
Canadian Language Benchmarks 2000: English as a Second Language – for adults
• Developed by: Centre for Canadian Language Benchmarks
• Core documentation: CLB 2000: benchmarks for basic, intermediate and advanced proficiency
• Supplementary documentation: CLB 2000: Theoretical Framework (2002); CLB 2000: Additional Sample Task Ideas (2002); CLB for ESL Literacy Learners; CLB 2000: A Guide to Implementation
South Australian Curriculum, Standards and Accountability Framework: English as a Second Language
• Developed by: South Australian Curriculum, Standards & Accountability Task Force, DETE
• Core documentation: Scope and Scales for each band
• Supplementary documentation: website – SACSA Framework; Assessment pages; Theoretical underpinnings
A Language in Common
• Developed by: Qualifications & Curriculum Authority (QCA), London
• Core documentation: A Language in Common: Assessing English as an additional language (44 pages)
• Supplementary documentation: not included in the booklet but available on the website are 2 sample profile templates (‘Initial pupil profile sample’ and ‘Subject profile sample’): www.qca.org.uk/ca/subjects/english/eal_addition.rtf
TESOL ESL Standards for Pre-K–12 Students
• Developed by: TESOL Task Force
• Core documentation: ESL Standards for Pre-K–12 Students (166 pages)
• Supplementary documentation: Integrating the ESL Standards into Classroom Practice (Grades Pre-K–2; Grades 3–5; Grades 6–8; Grades 9–12); Managing the Assessment Process: A Framework for Measuring Student Attainment of the ESL Standards; Parent Guide to the ESL Standards for Pre-K–12 Students; Scenarios for ESL Standards-Based Assessment; School Administrators’ Guide to the ESL Standards; Training Others to Use the ESL Standards: A Professional Development Manual
Victoria ESL Companion to the English CSF
• Developed by: Victoria Board of Studies
• Core documentation: ESL Companion to the English CSF (Curriculum & Standards Framework II) (2nd edition, Board of Studies, 2000) (152 pages)
• Supplementary documentation: Introducing the CSF II Schools Information Kit; Promising Futures
Overall, however, there are few evaluation studies relating to assessment frameworks in the public domain, although there is a slowly growing research literature in this field (see section 11.4 below). In this chapter, we report on two studies, neither of which is in the public domain. The first represents a documentary evaluation approach and focuses on six different assessment schemes;3 the second is an empirically driven evaluation linked to the Canadian Language Benchmarks and the work of the Centre for Canadian Language Benchmarks.4
11.2 Evaluation study 1: Evaluation of national and state frameworks for the assessment of learners of English as an additional language
Context and aims
As mentioned above, extensive development work in the area of assessment systems and practices, with specific reference to learners with ESL/EAL in mainstream classroom contexts in English-speaking countries, has been undertaken worldwide, most notably in Canada, the USA and Australia.
Experience in those countries which have developed such systems is that a comprehensive curriculum and assessment framework supports and advances the practice of educators working with school-aged children with EAL and enhances the progress of the pupils. (South 2003: 2)
By way of contrast, however, there has been little coordinated, assessment-focused development work for the UK context where, by and large, assessment of learners with EAL is little differentiated beyond the initial stage from that of their monolingual (English L1) peers, with their English language development based on band descriptors developed for English L1 speakers. The aim of this survey was to carry out an evaluation critique of ESL/EAL curriculum and assessment frameworks from different contexts of application – Canada, the USA and Australia – with a view to highlighting dimensions of these frameworks which might have relevance for the UK context. The scope and purpose of the evaluation is explained as follows:
3 We are indebted to Hugh South, as coordinator of this project, for permission to cite from this work in progress and to the other team members (see note 2 above).
4 We are indebted to Alister Cumming for making available the relevant documentation. Any infelicities or errors in relation to this evaluation study are entirely our own.
Data 11.2 Scope and aims of the evaluation
To provide comparative data on assessment systems which would be informative in the UK context. To produce valuable information and insights for educators and policy-makers concerned with the provision of EAL, and with the quality of education for linguistic minority pupils. (South 2003, Proposal: 2)
Key questions to ask of assessment frameworks are shown in Data 11.3.
Data 11.3 Evaluation Questions to Ask of Assessment Frameworks
To what extent do they demonstrate language assessment as part of mainstream classroom instruction?
To what extent do they cater for different phases of education?
What is the level of teacher expertise required to engage with and use these frameworks?
What is the theoretical underpinning of these frameworks in relation to, as examples, theories of second language learning, formative classroom language assessment, language proficiency?
Is there an orientation towards process and/or knowledge?
Is there an orientation towards formative and/or summative assessment?
What is the degree of empirical ‘fit’ with classroom reality? Is there any evidence of empirical validation?
(from South 2003, Proposal: 3)
The challenge for the evaluation team was to unpick each of the above questions and to identify the salient facets of these key dimensions so that they could be applied to each of the assessment frameworks. It was also recognised that further focusing questions would emerge during the data analysis and interpretation process. In this respect the evaluation was both reflexive and iterative. Funding for this work came from The Paul Hamlyn Foundation and NALDIC (the National Association for Language Development in the Curriculum; http://www.naldic.org.uk).
Data collection and analysis
The six frameworks chosen for the evaluation study are shown in Table 11.2. Each of the chosen frameworks was analysed with reference to the same set of evaluative criteria, developed on the basis of: (1) discussion within the evaluation team; (2) reference to the assessment frameworks as well as the supplementary material that was available for some of them; and (3) contacts with professionals who had worked with or had been involved in some way in the development of the frameworks.
Table 11.2 The assessment frameworks evaluated
1. ESL Development: language and literacy in schools (NLLIA 1994; http://www.)
2. TESOL ESL Standards for pre-K–12 Students (TESOL 1997; http://www.tesol.org/assoc/k12standards/index.html).
3. A Language in Common: Assessing English as an Additional Language (Qualifications and Curriculum Authority 2000; http://www.qca.org.uk/ca/subjects/english/eal_addition.rtf; www.qca.org.uk/ca/5-14/eal5-14.asp).
4. Canadian Language Benchmarks 2000: English as a Second Language for Adults (Centre for Canadian Language Benchmarks, 2000; http://www.language.ca/).
5. ESL Companion to the English CSF (Board of Studies 2000; http://www).
6. South Australian Curriculum, Standards and Accountability Framework: English as a Second Language (DETE 2002; http://www.sacsa.sa.edu.au).
Data 11.4 lists the central analytic categories developed for the evaluation of the documentation.
Data 11.4 Central analytic categories
1. Construction of the framework
1.1 Context and development of the framework
1.2 Aims of the framework
1.3 Description of the framework
2. Orientation of the framework
2.1 Theoretical orientation
2.2 Constructs of language
2.3 Pedagogical orientation
2.4 Process/knowledge orientation
2.5 Sensitivity to students’ cultural/social experience
2.6 Sensitivity to pragmatics of classroom activities/culture for the EAL learner
3. Use of the framework
3.1 Use as training document
3.2 Use as assessment of learning
3.3 User-friendliness
3.4 Use within the wider system
3.5 Evidence of use of the framework (e.g. state-level implementation? e.g. school-level implementation?)
Some findings
The evaluation highlighted a range of issues in relation to the assessment of learners with EAL. We look at two findings here. The first has to do with the extent to which assessment is both age-related and specific to the different
Data 11.5 Assessment framework structure: contexts of use (criterial features of the assessment frameworks)
ESL Development: Language & Literacy in Schools
• Context of use: all school phases; cross-curricular language use; complements other documents, e.g. the ESL Scales
• Multiple entry points: Junior Primary; Middle/Upper Primary; Secondary
Canadian Language Benchmarks 2000: English as a Second Language – for adults
• Context of use: adult migrants; communicative proficiency; stands alone
• Multiple entry points: I Basic proficiency; II Intermediate proficiency; III Advanced proficiency
South Australian Curriculum, Standards and Accountability Framework: English as a Second Language
• Context of use: all school phases; for use alongside other State assessments and curriculum documents
• Multiple entry points: Early Years; Primary Years; Middle Years; Senior Years
A Language in Common
• Context of use: all school phases; stands alone
• Multiple entry points: no – assessment of early progress in EAL across all school phases
TESOL ESL Standards for Pre-K–12 Students
• Context of use: all school phases; for use alongside other State assessment and curriculum documents
• Multiple entry points: Elementary (Pre-K–3); Middle (4–8); Secondary (9–12)
Victoria ESL Companion to the English CSF
• Context of use: all school phases; as an adjunct to the English CSF, which will ultimately replace this ESL Companion
• Multiple entry points: Lower primary; Middle/upper primary; Secondary
Users: across the frameworks, intended users range from ESL teachers only, to both ESL and mainstream class teachers, to ESL, bilingual and mainstream teachers.
phases of schooling. Key issues in the assessment of learners with EAL in this respect are shown in Table 11.3.
Table 11.3 Questions about assessment frameworks: context of use
• Which teachers are expected to work with the assessment frameworks: EAL teachers, mainstream teachers or both?
• Is there evidence of assessing cross-curricular language use?
• Is the framework relevant for all school phases?
• Does the framework cater for multiple entry points?
• Is the framework a stand-alone document or intended for use alongside other (state-mandated) assessment and curriculum documents?
Data 11.5 (above) provides a comparative analysis of the six frameworks with specific reference to their context of use. From the evaluation survey of assessment frameworks shown in Data 11.5, we observe the following:

Table 11.4  What is learned from the evaluation: contexts of use
• Some assessment schemes are developed for use by EAL teachers only, whereas assessment of learners with EAL is also a concern for all mainstream teachers.
• There is provision for the assessment of EAL across all school phases in the majority of frameworks.
• In our global world the importance of assessing early language learning progress is uncontested. Yet one assessment framework makes no provision for this in respect of older learners who are in the early stages of English language learning.
• Several of the frameworks are intended for use alongside other curriculum documents, such that assessment of EAL is contextualised within a broader curricular frame.
The second set of findings presented here takes up the crucial issue of preparing and supporting teachers in their language assessment practices, as exemplified in the summary in Data 11.6 below. Again, we observe diversity across the assessment frameworks, as exemplified in Table 11.5.

Table 11.5  What is learned from the evaluation: teacher guidance and support
• Some frameworks have a significant array of contextualised assessment activities for the teacher; others have none.
• Some include annotated language samples, but not all.
• In some cases the exemplars are specified for each of the different school phases and cater for the different language levels of learners within these phases.
• Some include specific teacher reporting formats; others have none.
Data 11.6
Reporting formats and teacher instructions
[A comparative table, not fully recoverable here, recording for each of the six frameworks whether it provides (a) reporting formats, (b) teacher's instructions and (c) general guidance (√ = provided; X = none). Recoverable entries range from detailed provision (e.g. two reporting formats: a summary ESL Profile and a record of ESL development in schools (i) during a school year and (ii) at the end of a school year; a single format giving ratings; guidance on how to use the different features of the band scales, also integrated within the band descriptors; guidelines for users giving steps for consulting the Scope before the Scales sections; guidance and a glossary in introductory sections; some general discussion of profiling, seen as complementing other school assessment and monitoring information) to no provision at all.]
Throughout Part 2 (e.g. chapters 5, 6 and 8), we have argued and demonstrated the need for multifaceted approaches to evaluation. This last example is no exception. We have illustrated only a small number of the much wider set of useful insights gained from the comparative analysis of these frameworks about their strengths and weaknesses, and the implications for assessment practices. Nonetheless, the need remains to 'go one step further' and gather evaluative data about how teachers' assessment practices are actually informed and supported by such schemes, and the extent to which teachers are implementing the schemes in the ways intended (see section 11.4 below).
11.3 Evaluation study 2: The Centre for Canadian Language Benchmarks4

Context
As observed above, there is scant evaluative critique available in relation to language testing and assessment frameworks and standards. An exception is the external evaluation by Lam, Cumming and Lang (2001) of the Centre for Canadian Language Benchmarks (CCLB), which provides the focus for this section. We consider in particular the nature of data collection through electronic means and the structuring of this phase in the evaluation study, as well as sampling and analysis considerations.

Evaluation scope and purpose
We have seen in several of the case studies in Part 2 (see chapters 5, 6 and 8) that evaluations may serve both accountability and development goals and that the two are not mutually exclusive. This is further illustrated by the evaluation of the CCLB, undertaken by three external evaluators. They frame the purposes of the evaluation as follows:
Data 11.7
Scope and aims of the evaluation
This external evaluation, commissioned by the CCLB, is intended to determine the extent to which the CCLB has attained . . . two goals since its inception two years ago. Hence, we have conceptualised the evaluation project as comprising two components that correspond to the two goals. These two components are: (1) evaluation of the attainment of the ‘Standards Goal’, which involves an examination of the CCLB’s effectiveness in implementing those activities specified or implied in the ‘standards’ objectives, and the impact of these activities; and (2) evaluation of the attainment of the ‘Organisational Development Goal’, which involves an examination of the efficiency, effectiveness and inclusiveness of the CCLB’s organisational structures and processes to provide the aforementioned CCLB activities. The purposes of conducting this external evaluation of the CCLB
is both for accountability reasons (‘to secure continued funding’) and for development reasons (to identify areas for improvements and ‘to provide useful tools to the CCLB for ongoing self-evaluation’). (Lam et al. 2001: 4)
In the external evaluation of the Science Across Europe initiative (chapter 5) and in the evaluation of the primary modern language project (chapter 8), we observed the concern on the part of the evaluation commissioners for data that would also be actionable in terms of further program development. As observed in Data 11.7, the evaluation of the CCLB is no exception: in addition to impact data, the evaluators sought data to identify where improvements could be made and how the CCLB itself could engage, subsequently, in self-evaluation.
Evaluation design and procedures

Overview
The evaluation was undertaken in January and February 2001 in the form of a survey of three categories of respondent, defined as follows:
Data 11.8
Participant groups
Field (or user group) refers to those who provide ESL services such as teaching or administering ESL and to assessors of ESL students across Canada. It also refers to any newcomers, e.g. ESL assessment centres, colleges or universities, boards of education, community immigrant groups, and corporations who employ newcomers. Stakeholders refer to individuals and organisations directly supportive of the CCLB in its role of facilitating the language education of immigrants . . . Clients refer to immigrants and refugees who are in need of language training. Although we deliberated seriously over how we might, we were not able to include this group in our evaluation for several reasons: the time and budgetary constraints . . . (Lam et al. 2001: 7)
It is particularly interesting to note – as elsewhere in this volume (see chapters 5, 6 and 13) – how budgetary considerations are very much to the fore in commissioned evaluation studies. These may be seen to 'enforce' a compromise in terms of 'loyalty to the discipline' (McDonald 1976) and of the very expertise for which the evaluators are invited to undertake the evaluation in the first place. Pragmatic considerations may become prioritised in the planning stages of many evaluations.
Data collection
Data collection was achieved through five main procedures. Data 11.9 summarises these evaluation procedures with reference to the respondent groups involved.
Data 11.9
Data collection methods for target groups
Target Group             Electronic Survey   Phone Interviews   Face-to-Face Interviews   Focus Group Interviews
Field Representatives    (1)*                –                  –                         (2)*
CCLB Board Members       –                   (3)*               –                         –
CCLB Staff               –                   –                  (4)*                      (5)*
Former CCLB Staff        –                   (3)*               –                         –
Previous Contractors     –                   (3)*               –                         –

* Instruments:
(1) CCLB Questionnaire (web-based and by fax and email)
(2) Field Focus Group Interview Schedule
(3) Individual Telephone Interview Guide
(4) Individual Face-to-Face Interview Guide with CCLB Staff
(5) Group Interview Guide with CCLB Staff
(Lam et al. 2001: 8)
As with many of the evaluation case studies reported in this volume, this evaluation sought both qualitative and quantitative data. A feature in common with the evaluation of the Science Across Europe program (chapter 5) is the electronic survey dimension, as explained in Data 11.10.
Data 11.10
Data collection procedures
The software used to create the web-based questionnaire was custom cgi script written in Perl. It was hosted by a Unix server running FreeBSD. The questionnaire was designed using Microsoft Frontpage. The survey software saved each response into a comma delimited file, and each response was added to the main file with a time stamp. The file was then loaded into Excel and then to SPSS for analysis. (Lam et al. 2001: 8)
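The evaluators' own tooling was a Perl CGI script feeding a comma-delimited file; the sketch below is not that implementation but a minimal Python illustration of the same pattern (appending each timestamped response to a single delimited file and reading it back for analysis), with the file name and field values invented for the example.

```python
# Minimal sketch of the pattern described in Data 11.10 (not the evaluators'
# Perl CGI script): append each incoming survey response, with a timestamp,
# to one comma-delimited file, then read the file back for analysis.
import csv
from datetime import datetime

RESPONSES_FILE = "cclb_survey_responses.csv"  # hypothetical file name

def save_response(answers):
    """Append one respondent's answers, prefixed with a timestamp."""
    with open(RESPONSES_FILE, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat()] + list(answers))

def load_responses():
    """Read all responses back as a list of rows for further analysis."""
    with open(RESPONSES_FILE, newline="") as f:
        return list(csv.reader(f))

# Illustrative use: record one response, then count what has arrived so far.
save_response(["ESL teacher", "Ontario", "agree", "4"])
print(len(load_responses()), "responses received")
```

One advantage of an append-only, timestamped file of this kind is that the arrival of responses can later be checked against the staged mail-outs described in Data 11.12.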
Contacting respondent groups
The telephone is a relatively recent mode of data collection in language program evaluation, used in this study and in the survey evaluation of PRINCE (see chapter 13). In both studies, the practice was for an initial contact to be made and a mutually agreed
time arranged for a structured interview. One of the challenges faced by an evaluation survey approach is how to establish contact with respondents. This is summarised by Lam et al. as follows:
Data 11.11
Survey response formats
Survey (web-based/fax/email)
Respondents were contacted by one of two methods – email or fax (either a direct fax to the respondent's own fax machine or receipt of a photocopy of a fax). Both the e-mail messages and the faxes contained copies of the survey questionnaire. Recipients of e-mails had four options for responding to the survey. They were able to:
1. click on a link in the email which takes them to the web-based survey, or,
2. fill out the survey in the body of the email and email it to the address given to them, or,
3. fill out the questionnaire in the body of the email, print the completed questionnaire (or print up the survey and fill it out by hand) and phone in the answers using a 1–800 number given to them in the email, or,
4. fill out the survey in the body of the email, print the completed questionnaire (or print up the survey and fill it out by hand) and then fax in the answers using a 1–800 number given to them in the email. (Lam et al. 2001: 9)
The evaluation report also provides details of the response options for recipients of faxes or photocopies. Electronic communication opens up new possibilities for data gathering within the context of an evaluation but, as shown in Data 11.10, the mechanisms need to be thought through carefully, with procedures made clear and trialled. In chapter 5, we mentioned that in the transmission of the survey questionnaires the formatting was lost in a number of cases. Interestingly, in the CCLB evaluation, precautionary steps were taken to guard against such errors, as explained in Data 11.12.
Data 11.12
Contacting respondent groups
We sent faxes and emails in stages just in case something might go wrong with one or more of our methods of receiving the completed questionnaires. By pacing the transmissions we allowed ourselves the opportunity to discover and fix problems early in the receiving process. The following was our schedule for sending out the questionnaires (note: fax refers to fax survey; email refers to email survey):
• January 19: TESL Canada representatives were sent emails and faxes, and the two toll-free lines and the web-based survey were operational
• January 21: Faxes and emails were sent to members at large; web survey link information was sent to provincial associations; and faxes were sent to provincial associations.
• January 22: Emails were sent to professional associations. • January 23: Faxes were sent to assessment centres. • January 24: Emails and faxes were sent to TESL Canada Journal editors, officers and provincial Newsletter editors; emails were sent to TESL Canada 2002 Conference co-chairs. Not all people in each category could be reached on the day their group was first contacted, so we sent follow-up emails and faxes at later dates. Furthermore, not all members of each group had either fax numbers or email addresses. Some attempts were made to obtain these, but due to time constraints, that was often not possible. We posted a deadline of January 29, 2001 for the receipt of all surveys on the emails, faxes, and on the cover-page of the web-based survey. Several responses came in after this date; however, we could not include many of these in the data set analysed. (Lam et al. 2001: 12–13)
How survey participants actually chose to respond is explained next.

Some findings

How respondents received and chose to respond to the survey
Given the relative novelty in evaluation studies of using a variety of electronic formats for data gathering, we report in Data 11.13 a summary of the format through which the participants received the survey.
Data 11.13
How respondents received the survey
[Bar chart, reproduced here as figures only: Photocopied sheet 62 (29%); Fax 45 (21%); Email from OISE/UT 57 (27%); Email from other 25 (12%); Clicked on a web site link 15 (7%); No response 7 (3%).]
(Lam et al. 2001: 21)
Data 11.14 shows the participants’ chosen response mode.
Data 11.14
How respondents submitted their survey responses
[Bar chart, reproduced here as figures only: Web 111 (53%); Fax 84 (40%); Email 6 (3%); Voice 8 (4%); Missing 2 (1%).]
(Lam et al. 2001: 21)
We observe from the data that whilst the majority of participants received the survey through more conventional means, i.e. photocopied sheet or fax (50%), 56% chose to submit their responses via the web link or email. We suggest that the clarity of procedures established and, we assume, the friendly interface for receiving data electronically, were influential in the use of this particular format. Issues around easily accessible and useable links and the piloting of these electronic response mechanisms take on considerable importance in any evaluation that depends on data elicited in this form.
Suggestions for improving the CLB
In an open-ended question of the survey, participants were asked for suggestions for improving the services of the CCLB and these were analysed quantitatively, as summarised in Data 11.15.
Data 11.15
Suggestions for improving CCLB services (number of responses)
• Need for support/resources/workshops for teachers: 26 (12.3%)
• Need for standardization of assessment: 17 (8%)
• The CLB is a poor quality document: 15 (7.1%)
• Need for more information and promotion of services: 14 (6.6%)
• More services should be provided to support the use of CLB: 13 (6.2%)
• The CLB document should be more accessible: 12 (5.7%)
• The CCLB is a poor quality centre: 7 (3.3%)
• CCLB has no impact or a negative impact: 7 (3.3%)
• The ESL placement test should be shorter: 7 (3.3%)
• Need for more information and promotion of document: 6 (2.8%)
• There should be more concrete examples in the CLB document: 6 (2.8%)
• CCLB is important: 3 (1.4%)
• Need for support/resources/workshops for administrators: 2 (0.9%)
• CCLB is not important: 2 (0.9%)
• The CLB is a good quality document: 2 (0.9%)
• The CCLB is a good quality centre: 2 (0.9%)
• CCLB has a positive impact: 1 (0.5%)
• There should be more suggestions for implementation in the CLB document: 1 (0.5%)
(Lam et al. 2001: 29)
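The counts in Data 11.15 come from coding free-text suggestions into categories and tallying them. The sketch below is a minimal, illustrative version of that step; the coded suggestions and the denominator are invented, and the evaluators' own procedure is not documented at this level of detail.

```python
# Minimal sketch (not the evaluators' procedure): tally open-ended suggestions
# once they have been coded into categories, and report each category's count
# and its share of all respondents. All values here are invented.
from collections import Counter

TOTAL_RESPONDENTS = 211  # illustrative denominator only

coded_suggestions = [
    "need for support/resources/workshops for teachers",
    "need for standardization of assessment",
    "need for support/resources/workshops for teachers",
    "more concrete examples in the CLB document",
]

for category, n in Counter(coded_suggestions).most_common():
    print(f"{category}: {n} ({100 * n / TOTAL_RESPONDENTS:.1f}%)")
```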
It will be recalled that the target participants for the evaluation comprised different groups and, thus, a further analysis using Analysis of Variance (ANOVA) investigated whether perceptions of the CCLB differed across types of user.
Data 11.16
Constraints for Statistical Analysis
Using our data set, we compared and determined the statistically significant differences in the overall familiarity, quality and impact ratings (average of individual averages) between the various types of respondents as defined by the background variables (e.g. respondents from different provinces). We could only use five background variables for this analysis because they are the only ones that have a roughly equal number of respondents in different collapsed categories of the variables (e.g. ESL teachers and other ESL providers). These professional variables include professional role, residence, highest education degree, how questionnaires were received, and how completed questionnaires were returned. Categories for all these variables were collapsed to provide a more even distribution of respondents across the reduced categories. (Lam et al. 2001: 26)
Through this analysis the evaluators were able to show, for example, that respondents with either a Master's degree or a doctorate tended to be more familiar with the work of the CCLB than other respondents, and that familiarity with the work of the CCLB differed across geographical regions (see discussion in section 11.4 below). Respondents in this evaluation case study were also provided with the opportunity to access the final report by contacting the CCLB. This goes some way to extending the involvement of the participants beyond the data collection phase.
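As a minimal illustration of the comparison described in Data 11.16, the sketch below runs a one-way ANOVA on familiarity ratings grouped by one collapsed background variable. The ratings and group labels are invented, and this is not the evaluators' analysis (which, per Data 11.10, was run in SPSS); it simply shows the shape of the test.

```python
# Minimal sketch: one-way ANOVA comparing mean familiarity ratings across
# collapsed categories of a single background variable. Data are invented.
from scipy import stats

ratings_by_group = {
    "ESL teachers":        [3.2, 2.8, 3.5, 3.0, 2.6],
    "Other ESL providers": [2.1, 2.9, 2.4, 3.1, 2.2],
}

f_stat, p_value = stats.f_oneway(*ratings_by_group.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Mean familiarity ratings differ significantly across these groups.")
```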
11.4 Implications for evaluation practice
Evaluation approach and sampling
As noted across several of the previously reported case studies in Part 2, achieving target samples frequently proves problematic. The evaluation of the CCLB is no exception. In the evaluation report the evaluators provide a summary profile of the 221 respondents to the survey, which they suggest represents 'a certain level of representativeness' as the respondents 'match the general profile of the ESL service-provider population' (p. 14). They continue:

However, due to the weak sampling design and the potential problems in the data collection procedure . . . as well as measurement errors associated with self-reporting instruments, the results presented below should be interpreted with caution and only the obvious findings should be noted.
We note from Data 11.14 that, had there been a greater number of respondents, further and more detailed analyses would have been possible. However, examining the role of surveys more generally, we would assert that the role of a survey in evaluation research is frequently to illuminate trends in the data, and that this in itself is potentially of enormous value. In other words, the value of surveys may reside rather more in the way that they highlight issues which can then be verified or further investigated through other, more appropriate approaches, such as a small number of in-depth case studies. Additionally, they have the potential to 'sharpen up' the focus for subsequent case study work. This was the case with the evaluation studies reported in chapters 5 and 6. This perspective thus runs somewhat counter to the familiar claim that the strength of surveys lies in their capacity to provide data for generalisation purposes.

Tests and assessment frameworks as an evaluation tool
In chapters 5 and 8, we provided examples of the use of tests in language program evaluation (see also sections 2.3, p. 19 and 4.3, p. 59). In both cases, these measures of language proficiency were used in conjunction with other evaluation procedures and not as the sole means of gauging impact. Unlike the view associated with the earlier objectives-driven evaluations of the 1960s and 1970s, of which the Bangalore evaluation study is an example (see section 2.3, p. 19), data from tests can only ever provide partial information on facets of program implementation. They may inform on outcome levels, thus providing an important contribution in relation to indicators of achievement and performance levels. They are, however, limited in terms of identifying which specific developments could be recommended and actioned within a program, and shed no light on how to prioritise future developmental activity. In the words of Eisner (1985: 131): 'Simply knowing the final score of the game after it is over is not very useful. What we need is a vivid rendering of how that game is played.' It seems obvious: evidence for quality program implementation cannot be gained from tests; other evaluation approaches are required (see, for example, section 15.4, p. 26). In like manner, assessment frameworks and the data that they may generate can be seen as useful input to an evaluation study, but as one type of data – necessary but not sufficient. However, pressures in the domain of commissioned evaluation studies for experimental or quasi-experimental designs still remain.

Links between evaluation and research
In her state-of-the-art survey of 'evaluation and ELT', Rea-Dickins (1994: 71) pointed out that one of the distinctions to be made between evaluation and research is that the former is frequently constrained by the demands of policymakers for 'answers' and 'results'. In fact, it is frequently the case that the answers sought within an evaluation cannot be 'found', on account of (1)
constraints sometimes imposed by budgets and time, as seen above, or (2) the evaluation approach taken, for example, a survey or 'evaluation of outcomes' design. Further, it may be the case that in order to interpret the data from an evaluation fully (this depends on the evaluation study and context, of course) it may be useful to seek insights from relevant research studies. With specific reference to our focus in this chapter on assessment frameworks and criteria, these have been investigated empirically by only a small number of researchers, namely Breen et al. (1997), Arkoudis and O'Loughlin (2004) and Davison (2004) (see also Peirce and Stewart 1997, and Watt and Lake 2004 – these last two connect with the CLB). To take a case in point: (a) it was highly relevant for evaluation study 1 (section 11.2) to look for evidence of teacher engagement with the assessment frameworks evaluated; and (b) since impact of the CCLB was an evaluation focus of study 2 (section 11.3), it would be relevant to look for evidence of impact more broadly. Thus, there is potential synergy between the two domains – the evaluation of assessment frameworks on the one hand, and research into the use by teachers of assessment frameworks on the other – but this potential requires more careful scrutiny. In the case of assessment frameworks, the synergy appears both limited and limiting: limited in terms of 'amount', given the constraints specified by the researchers (see Data 11.8), and limiting in terms of the explanatory power that might be claimed by evaluation studies with reference to other, more detailed studies of the constructs involved. We suggest that research studies have the potential to increase the explanatory power of evaluations and that the two – evaluation and research – should not be viewed as completely separate. This may be especially so at the interpretation stage of an evaluation.
11.5 Summary
In this chapter we have presented accounts of two different approaches to evaluation with specific reference to assessment frameworks. The first study (South et al.) exemplifies a documentary evaluation through which key constructs underlying assessment frameworks are identified. The second, a survey study, provides a lens through which to understand key processes in the implementation of an evaluation using (largely) electronic communication. Both studies highlight the relevance of in-depth case studies for capturing further insights about the assessment frameworks, in terms of both perceptions and use.
12 Stakeholding in Evaluation1
12.1 Introduction
The nature of stakeholder involvement is an issue for all program evaluations. In an evaluation that adopts a ‘stakeholder as informant’ position, decisions need to be taken about which individual(s) or group(s) will be approached and how their responses will be captured as part of the data collection. An approach which adopts a ‘stakeholder as participant’ perspective will, on the other hand, involve additional considerations as to how relevant individuals or groups may become active participants with opportunities to make distinctive contributions to the evaluation flow (see section 12.3), as opposed to being only on the receiving end of a questionnaire, a one-to-one interview, a classroom observation or the administration of a test. The concept of stakeholder participation in evaluation is not new. Murphy and Rea-Dickins (DfID Report Series 35: 90) cite Norris (1990: 131), who explains that Tyler (1950, see Concept 2.1, p. 20) ‘regarded evaluations as a tool to help the teacher in planning the curriculum and making instructional decisions’. This formative- and developmentally-oriented view is also a feature of the Stenhousian tradition (Stenhouse, 1975: 145).
Quote 12.1
Stenhouse on curriculum development
. . . the betterment of schools through the improvement of teaching and learning and that the only way of closing the gap between aspiration/curriculum intentions and practice is by involving the teacher in the renewal process, i.e. whether alone or in a group of cooperating teachers.
1 We are indebted to the work and ideas of Dermot Murphy in this chapter.
Here, teachers are highlighted as key players in the curriculum development and renewal process, of which evaluation is an implicit element. Our own experience of evaluation in language education suggests that there is not always a shared awareness of the pivotal role of the 'teacher as stakeholder' in curriculum development and evaluation, as envisaged by Stenhouse in the late 1970s (but see Potter and Rea-Dickins 1994; Kiely 1994; Kiely and Komorowska 1998). In Part 1, we identified the move towards evaluation as 'engagement with the learning milieu' (section 2.1, p. 17) through the work of Parlett and Hamilton (1972) (Quote 2.6), as a means to generate dialogue with and between evaluation participants to take thinking, and subsequent action, forward. Pennington (1998: 205), too, not only focused our attention on the multifaceted nature of evaluation, but explicitly highlighted interactional and participatory dimensions so as to facilitate, through 'gaining increased understandings', opportunities for change, thereby enhancing program effectiveness. Notwithstanding these moves, the potential benefits of increasing stakeholder participation in evaluations have not always been viewed through a positive lens in the educational evaluation literature. For example, Weiss cautioned:
Quote 12.2 Weiss on the limits on stakeholder evaluation approaches The stakeholder approach to evaluation holds modest promise. It can improve the fairness of the evaluation, democratise access to data, and equalise whatever power knowledge provides. However, it will not ensure that appropriate relevant information is collected nor increase the use of evaluation results. (1986: 144)
The prime motivation, therefore, for extending the involvement of communities of practice beyond those represented by the inner sanctum of evaluators and those who commission evaluations is to enhance evaluation utilisation (Concept Box 3.1, p. 38). For this reason, it becomes important to develop greater insights not only about the roles that stakeholders may play in evaluations, but also how learning may take place, as a consequence, as a basis for change within a given evaluation setting. This is an increasingly emergent theme in the evaluation literature, to a lesser extent in language programme evaluation, but more visible in evaluation in the social sciences more generally (e.g. van der Meer 1999; Maynard 2000; Thoenig 2000). In analysing stakeholding in this chapter, a concept we consider central to effective program evaluation, we depart from our previous practice of exemplification via a particular case study. We first examine ways in which stakeholders
and stakes have been defined and then apply this analysis to some of the case studies presented in chapters 5–11. This is followed by examples of ways in which stakeholders may be given a voice in evaluation studies. In the final section of this chapter, we examine the locus of power within evaluation practice.
12.2 Understanding stakeholding
In this section, we explore a range of understandings in relation to stakeholding as part of the democratisation of evaluation processes and the positioning of the evaluation context as a participatory domain. Consider the following questions:

Table 12.1  Questions about stakeholding
• Who are stakeholders?
• What are the stakes?
• What does it mean to participate in an evaluation? To what extent is participation envisaged beyond the role of an individual or group(s) being consulted as provider(s) of information as respondent?
• To what extent is stakeholder participation meaningful? Is there a danger in only paying lip service to stakeholder involvement? For example, in what ways are evaluations that carry the label of 'joint' or team evaluations actually collaborative and participatory?
• Who controls the 'power' in an evaluation? Who makes the decisions?
• Does participation actually lead to programme enhancements (e.g. greater democracy in the working environment)?
• Does learning take place as a consequence of an evaluation?
Below, we explore some of the issues raised by these questions.

Who is a stakeholder?
There are various views of stakeholders in the literature. Rea-Dickins provides an example of a broad definition in the context of language testing and assessment. She defines stakeholders as: 'those who make decisions and those who are affected by those decisions' (1997: 304). Murphy and Rea-Dickins (1999: 91–2), referring to the educational evaluation literature, note that stakeholders are often defined by their working role(s) within a program or by their contributions to one.
Data 12.1
Stakeholder roles
Policy-makers and decision makers . . .
Program sponsors . . .
Evaluation sponsors . . .
Target participants . . .
Program management . . .
Program staff . . .
Evaluators . . .
Program competitors . . .
Contextual stakeholders . . .
Evaluation community . . .
(Rossi and Freeman 1993: 408)
Murphy and Rea-Dickins suggest, however, that this way of defining stakeholders remains largely ambiguous, as it is usually unclear whether the definition refers to their place in a project or program or only to their association with the evaluation (1999: 91). The roles identified above are not intended as exhaustive, and several of them are not so much involved in the evaluation proper as constituting the potential audience for any evaluation findings. Other writers have attempted to categorise groups of stakeholders, as opposed to merely listing them as potential participants. Two examples are illustrated below:
Data 12.2
Classifying stakeholder groups
Aspinwall et al. (1992: 84–5):
• Clients: those who are intended to benefit from the project
• Suppliers: those who implement or provide resources for the project
• Competitors, collaborators: usually other organisations
• Regulators: any agency which directly or indirectly regulates the project

Guba and Lincoln (1989: 40–1):
• Agents: those who conduct and use the evaluation
• Beneficiaries: those who gain from the evaluation
• Victims: those who are negatively affected by the evaluation
From Data 12.2, we observe that Aspinwall et al. (1992: 84–5) propose four broad groupings. It remains unclear, however, whether the stake lies within the program or project being evaluated or in the evaluation itself. Guba and Lincoln’s classification, by way of contrast, allows us to position the stakeholders with reference to the evaluation itself. At this point, it is interesting to reflect that a slightly different approach has been taken in most of the English language teaching evaluation literature. First, stakeholder issues have largely, although not exclusively, arisen in response to such questions as: Who conducts the evaluation? or Where does the evaluation expertise lie? This has led to discussions about the role and/or need for an external evaluator and how they will involve program participants in an evaluation (see section 12.3). Second, and related to this earlier point, an ‘outsider–insider’
binary division has been used to label evaluation stakeholders (e.g. Alderson and Scott 1992; see also section 12.2). Next, we evaluate the extent to which classifications, such as those exemplified above, are useful in further clarifying not only the roles but also the nature of stakeholder participation in evaluation.
Applying stakeholder classifications
With some classifications or binary divides, we observe that evaluation participants may fall within two or more categories. For example, teachers in the SAE program (chapter 5) could be identified as both 'supplier' and 'client' according to Aspinwall et al.'s definitions. Equally, applying Guba and Lincoln's categories (see Data 12.2) to three of our case studies is not particularly helpful.
Table 12.2  Applying stakeholder classifications

Evaluation of the Science Across Europe program (chapter 5)
  Agents: Program sponsor
  Beneficiaries: Teachers, Students
  Victims: Teachers, Students

Evaluation of teachers' English language competence (chapter 6)
  Agents: Development Agency, Government Ministry
  Beneficiaries: Long-term: teachers and their students and, ultimately, education standards
  Victims: Teachers

Evaluation of quality management (chapter 9)
  Agents: Institution, Teachers
  Beneficiaries: Institution, Subject department, Teachers, Students
  Victims: Subject department, Teachers, Students
Table 12.2 reveals that participants may fall discretely into one category but, more often than not, fall into two or even all three. If we take the higher education evaluation example (chapter 9), the teachers in the program who are responsible for both the teaching and the evaluation of the course are, potentially, agents, beneficiaries and victims! Nor does such a classification clarify stakeholder roles and relationships during the implementation of an evaluation, which may in any case change during the course of an evaluation study, let alone shed light on the ultimate goal of evaluation utilisation. A more revealing analysis is provided by Alderson and Scott (1992: 42; see also Celani et al. 1988), who use the insider–outsider dichotomy to summarise the nature of the participation of program stakeholders in the evaluation that they report.
An ‘outsider–insider’ dichotomy per se also has its limitations in terms of gauging the involvement of relevant stakeholder groups: we may know that an individual or group is involved in some way, but not how they are actually contributing to the evaluation. Alderson and Scott have gone beyond this and analysed the role of the individuals within the evaluation as a whole. Importantly, they have attempted to identify the extent of the individual or group participation, indicated by the asterisks in Data 12.3, with specific reference to tasks across different phases of an evaluation. The final row – ‘reading and learning from it’ [the evaluation] – is an interesting one from two perspectives: first, it assumes that learning is part of evaluation purpose; and second, it implies an agenda for action. In an ideal world we should, perhaps, have asterisks in all the boxes in this row when one of the stated evaluation aims is that of program development, thus demonstrating the value of the evaluation for the diverse stakeholder groups.
Data 12.3
Participant participation – an example
[A matrix, not fully recoverable here, recording the approximate degree of involvement (indicated by asterisks) of each participant group – insiders: project teachers, university coordinator, research assistant; outsiders: sponsors, external consultant – in each phase of the evaluation: designing the whole evaluation; constructing the instruments; testing them out; collecting data; data analysis; drafting the report; and reading and learning from it. Key: * = approximate degree of involvement; ? = no evidence available.]
12.3 Modes of stakeholder involvement
Validating the evaluation focus
An example of a logframe that identified the immediate objectives for an evaluation was provided in chapter 4 (Concept 4.4, p. 67 above). Such frames are one means through which the objectives for an evaluation are presented to an evaluator. The example in Data 12.4 is taken from the PRINCE (Pre-, In-Service and Continuing Education) evaluation (Rea-Dickins, Reid and Karavas-Doukas 1996), a single-country, multi-site evaluation of developments in a national teacher education renewal process within teacher training colleges and their regional clusters. One approach to working with evaluation Terms of Reference (TOR) is for the evaluator to interpret the TOR independently of the pool of potential stakeholders and to develop an evaluation design in relative isolation, or with a closed and small coterie of other evaluators or 'key' players. A second approach is to seek to understand the TOR in fuller detail by including members of different stakeholder groups, an approach used by Rea-Dickins in several large-scale evaluations (e.g. 1994; see also Rea-Dickins et al. 1997) which involved developing a grid, as in Data 12.4.
Data 12.4
Evaluation TOR, the PRINCE Evaluation Study
Terms of Reference: TOR 1.1 To what extent are objectives in the Project Framework being achieved?
Issues/Questions: (left blank for completion)
Stakeholders/Information Sources: (left blank for completion)
This grid, with blank second and third columns, was circulated to relevant program stakeholders for completion. Data 12.5 provides the completed response from the program manager. To illustrate the importance of securing stakeholder perspectives that reflect the diversity of interests and stakes within a given evaluation context, Data 12.6, completed by one of the regional teacher trainers centrally involved in the program, exemplifies a contrasting perspective to that of the program manager; the comparison is taken up after the two boxes below.
Data 12.5
Program manager’s stakes
Terms of Reference:
TOR 1.1 To what extent are objectives in the Project Framework (PF) being achieved?

Issues/Questions:
• Are the objectives ambitious enough for 1994–8: most outputs on PF already achieved?
• Are there other achievements as a result of PRINCE input not in our objectives?
• To what extent is there a match between PRINCE objectives + TTC objectives?
• How can a reformulation of our objectives take into account needs of colleges/Polish socio-economic conditions/MEN objectives?
• Where is PRINCE input making most impact? Where can it make greatest impact in next 2–3 years?
N.B. For all TORs please consider: what are the differences between regional/cluster colleges? Between clusters? Across whole college network?
Issue: This evaluation focuses on PRESETT. However, PRESETT is increasingly linked to INSET, e.g. training school mentors, INSET for college graduates, four college directors now INSETT leaders, university extra-mural course for TTC grads mushrooming. How can this evaluation, given its constraints, take into account this increasing integration?

Stakeholders/Information Sources:
Information Sources: Project Framework 1991, 1994; MEN reports and correspondence (MEN file); RTT reports: job plans 1993–4, individual college visits 1991–3, cluster-wide reports 1994–5 (two per year); Link: original proposals 1992, second proposal, reports 1993–4
Informants: MEN, Prof X, TTC Dirs/Head

Key: MEN = Ministry of National Education; TTC = Teacher Training College; Dirs = Directors
Data 12.6
A teacher trainer’s stakes
Terms of Reference:
TOR 1.1 To what extent are objectives in the Project Framework being achieved?

Issues/Questions:
Professional upgrading of staff
1. Staff development workshops/seminars initiated by trainers at teacher colleges
2. More professional meetings about methodology, teaching practice, diploma project staff taking place because of pressure from regional trainers
3. Workshops held at cluster colleges for students and staff
Further development of three-year teacher training curricula
• Curriculum revision in methodology, changes in dealing with teaching practice: What mechanisms can be put in place to ensure development?
Sustainable structures
• How sustainable are the developments initiated? e.g. how does staff turnover affect sustainability and quality of teaching?

Stakeholders/Information Sources:
• College directors
• Staff
• Students
A comparison of Data 12.5 and Data 12.6 reveals a number of differences. We notice, for example, that the issues identified in Data 12.6 have to do with curriculum, staff development and training, reflecting professional concerns. By way of contrast, the focus that comes across clearly in the data provided by the program manager (Data 12.5) centres on progress towards program objectives and achievements, infrastructure concerns and overall impact. This last point is also raised by the teacher trainer in relation to sustainable structures. These two examples serve to illustrate the importance of seeking the perceptions of stakeholders within a program evaluation context. With the necessary permissions (see section 13.6, p. 240 below), the next step is to aggregate the stakeholder responses and circulate them as the basis for project evaluation discussions. The data thus constitute a working document for the evaluation that is shared among the various stakeholder groups. In this way, not only is the evaluation likely to be more comprehensive and valid, it should also help the different stakeholder groups to 'take on meaning' from the evaluation, which in turn may enhance the possibility of both action-related decision-making and learning taking place.
The above constituted an example of securing stakeholder perceptions at the beginning of an evaluation; the example that follows illustrates how students may be given a strong voice within program evaluation.
From informant to participant
Student questionnaires are a ubiquitous feature of the student course evaluation process but there is a trade-off between so-called ease of application and analysis and the capture of data that will lead to action-related thinking. Among the many criticisms that can be levelled at the over-prominent place of questionnaires in evaluation studies is the limitation that the questions tend to be closed for ease of data analysis, thereby imposing a specific agenda – frequently an accountability one. Our experience suggests that the data provided may not be actionable in any useful way, as opportunities for spontaneous comments in relation to future program development may be minimal on account of the rather limited and under-articulated nature of the data obtained. One approach to overcome these drawbacks is to engage program users in different ways, through alternative or additional means of data capture, depending on the scale of the evaluation (see chapter 13 on large-scale evaluations). We exemplify two approaches below, premised on small group activities which formed part of the evaluation of an Open Learning Environment (OLE, namely Blackboard) used by the MEd TESOL Pathway, University of Bristol (Timmis 2004). This evaluation, commissioned by the MEd TESOL coordinator (Richard Kiely), was primarily developmental in orientation, with the following aims:
Data 12.7
Evaluation of the TESOL Blackboard
Aims
1. To identify strengths and weaknesses in current use of Blackboard.
2. To identify student perspectives on current provision.
3. To identify task types and roles which offer enhanced opportunities for learning.
4. To identify orientation and training needs for enhanced learning with Blackboard.

The following issues were considered in the evaluation activities and discussions:
5. Communication habits and preferences.
6. Perceptions of students' and tutors' roles.
7. Factors affecting online discussion and other Blackboard activities.
8. Activities and tasks that students felt were successful and less successful, and the reason(s) why.
9. Suggestions from students for future development of the use of Blackboard (what would have helped to develop their learning and supported them in working online?).
(Timmis 2004: 3)
High value was placed on this stakeholder engagement: the program virtual learning environment was initiated with a view to promoting student understanding and learning in the substantive areas of study. Thus, in addition to an analysis of the different types of learning activities and use of different Blackboard tools and work areas, a group of students was invited to participate in a focus group, as part of the evaluation process (see Data 12.8 for a summary). The focus group lasted just over two hours. The Nominal Group Technique (NGT; see section 10.3, p. 162; also Kiely 2003) was used to facilitate and scaffold the discussion, focusing initially on the following question:
Data 12.8 Stakeholder engagement using NGT and putative hypotheses

'Think about all the activities and tasks you have done (or have been asked to do) using Blackboard on this MEd programme. Which of these have been helpful or unhelpful to your learning?'

Students were asked to reflect on this individually. This was followed by a plenary session in which all responses were recorded. Students then worked in groups of three or four to consider the list and prioritise their top ten statements and identify action points for the top two in each of the helpful and unhelpful groups, although it should be noted that in some cases the groups merged several items or rephrased the original statements and others came up with less than ten priorities; this was also true when agreeing on their action points. These action points were then fed back to the whole group and discussed. This final plenary was tape recorded and all responses were recorded on flipcharts or pro formas designed for the group work activities (see Appendix 12.1).

In addition to the main activity described above, a series of controversial statements (for example, 'Working with Blackboard is all about working on your own') on small sheets were handed out to each of the students, who were given a few moments to reflect individually. They were then asked to say whether they agreed or disagreed with each statement (using a sheet containing a five-point Likert scale) and why, and these responses were again recorded on the flipchart and tape.

Putative hypotheses
1. 'Online discussions were a good way to learn in this module'.
2. 'I prefer to communicate by email rather than through discussion boards or chat rooms'.
3. 'On this module, I have learnt more from online discussions with fellow students than from the tutor'.
The responses to the three 'putative hypotheses' (see Kiely 2003 for a discussion of this approach to evaluation data gathering) are summarised in Data 12.9.
Data 12.9 Evaluation of assumptions underpinning learning through an OLE

Hypotheses           1    2    3
Strongly agree       0    2    0
Agree                8    2    0
Neutral              4    6    0
Disagree             1    3    11
Strongly disagree    0    0    2
From this it can be seen that the participants' responses were largely positive or neutral. The evaluation report also suggests a feeling of ambivalence from some in the group:

'I think that sometimes the online discussion is good for written language, but I don't think that, to some extent, written language can express the idea very clearly instead of the spoken language, especially for some nonnative speakers of English, so we prefer the spoken discussion rather than the written.' (student response in discussion of hypotheses – from tape)

Further:
Data 12.10 Student program evaluation responses in discussion – from tape

When discussing statement 2 about emails, the views of the group were widely distributed across the scale from strongly agree to disagree, suggesting that a variety of communication mechanisms might be needed to take account of student preferences:
'I think we are used to email.'
'Chat rooms are better than email.'
'Depends on what kind of problems you want to solve.'
What we observe from the above examples are the opportunities provided for the students, as stakeholders, to engage with the focus of the evaluation and to discuss the various views expressed. In this way, we argue, a richer set of evaluative data becomes available than with, say, a structured interview or questionnaire format. In this case, the student data gave rise to a 'student action list', i.e. action-related points generated by the students themselves to lead to developments in the use of Blackboard across the
MEd(TESOL) program (see Appendix 12.2). In Data 12.11 we summarise the recommendations that emerged from the evaluation, which identify the aspects and activities that can be improved and developed in the next phase of enhancing student learning through the OLE.
Data 12.11 Evaluation recommendations and ideas for development

Recommendations
1. Attend to structural and consistency issues, especially developing a whole program structure.
2. Remove unwanted functions or areas and ensure that the sections/areas are used in the same way across the program.
3. Agree with the team how to manage announcements for a more consistent approach. Ensure students are aware of how the announcements will operate.
4. Introduce a more developed induction for Blackboard to include trialling the use of different tools and reviewing sources of help and support.
5. Investigate ways in which Blackboard could support more active discussions, group work, personal development and portfolios.
6. Consider the use of email through Blackboard as a vehicle for responsive communication and private or urgent messages.
7. Consider the use of anonymous postings and, if these are removed, ensure that induction helps students to reflect in advance on the issues of privacy and permanency in an online environment.

Ideas to consider for further development
1. Recorded lectures.
2. Links to/use of authentic data sets.
3. Peer review and commentary.
4. Tutorial practical activities (especially supplementary).
5. Using groups for tasks (research, discussion).
6. Collaborative planning for face-to-face student-led sessions.
7. Forum for invited external experts to interact with group.
Joint evaluations
The Alderson and Scott evaluation (see Data 12.3) clearly illustrates how project members became involved in the different phases of the evaluation, including data analysis and report writing. A slightly different approach was adopted in the evaluation of the large-scale English Language Teaching Secondary project (ELTSP, Rea-Dickins et al. 1997), on account of the timing of the evaluation impact study, for which the ultimate goal was to promote higher English language standards of school age learners. (This is reported on in chapter 13.) The evaluation itself aimed, as far as was feasible, to use existing data sets to inform on project impact and sustainability. In addition, a small number of discrete evaluation studies were prepared by a wider
range of stakeholders who were members of the evaluation team, representing the interests of the inspectorate, the teacher training sector, Form I (secondary school) teachers, those with responsibilities for trainer training, as well as the university sector and the National Examinations Council. Given that this evaluation had the brief to inform on impact after a ten-year period (see section 13.13, p. 231 below), the sheer volume of available data to be processed, the length of time available for the study and the absence of certain key data sets suggested a design that would generate communication across the different stakeholder groups and dialogue about evaluation data, findings and interpretation. One way in which this was managed was to hold a series of workshops with a fairly broad membership including key stakeholders (although arranging a time to secure maximum attendance was problematic) and to commission a series of mini-evaluation studies in each of the major project areas. This is exemplified in Data 12.12.
Data 12.12
Focus and resourcing of evaluation impact studies

1. Standards of English of secondary school learners
   Impact indicators (examples):
   • Quality of Form 4 examination results in English and other subjects (e.g. Biology, History): comparison of pre- and post-project data
   • Samples of students' written work
   Evidence provided by: National Examinations Council

2. INSET (Teacher Training)
   Impact indicators (examples):
   • Improvements in classroom practice in areas covered in Inset
   • Completion rates
   • Teacher attitudes
   Evidence provided by: Centre for Tutor Training

3. INSET (Tutor Training)
   Impact indicators (examples):
   • Quality of tutor reporting, e.g. level of professional insights, nature of reporting
   • Numbers of Inset tutors trained and working in teacher training colleges running Inset courses
   Evidence provided by: Centre for Tutor Training

4. Gender awareness
   Impact indicators (examples):
   • Enhanced teacher awareness of gender issues
   • Gender considerations treated appropriately in program resources, e.g. Inset resource packs
   Evidence provided by: Centre for Tutor Training

5. Other institutions, e.g. the universities
   Impact indicators (examples):
   • Improved performance on university language tests
   Evidence provided by: University staff
Responsibility for collating available evidence, collecting new data and managing relevant data sets was thus largely devolved to representatives of the main stakeholder groups who had been involved in or affected by the program in some way. They were also in a position to contribute to the debate on project findings, interpretation, conclusions and recommendations, e.g. the Inspectorate, tutor and teacher training staff. Effectively, this wider group worked as members of the evaluation team for the duration of the evaluation. In this way, one of the aspirations was to enhance the potential both for learning from the evaluation itself and for utilisation of findings, through the active participation of relevant professional groups who had been given specific responsibilities as evaluation team members. The extent to which such aspirations are ultimately achieved can only be determined post hoc through some form of follow-up activity, which rarely, if ever, gets done, as funding for this type of enquiry is of low priority within wider sponsor or national/regional agendas. Thus, the extent to which evaluations actually enhance learning remains largely unknown, a point also implicit in Data 12.3 (last row). What gets learned and by whom is also related to the locus of power, which we take up next as a final issue in this chapter.
12.4 Stakeholding and the locus of control
Table 12.3  Identifying the locus of control
Who has the power and the control:
• as part of program implementation?
• as part of evaluation processes (e.g. planning, piloting or interpretation phases)?
• as evaluation informants?
• as part of decision-making?
• for acting on evaluation data?
In Part 1 we introduced the notion of loyalties and power relationships within the evaluation context with reference to the political orientation of an evaluation, as viewed through the lens of educational evaluation. In Concept 3.5 we cited the three different roles of an external evaluator, as categorised by McDonald (1976, cited in Hopkins 1992: 32–3):

Table 12.4  Evaluation allegiances and loyalties
• Bureaucratic evaluation: allegiance to and ownership by the bureaucracy
• Autocratic evaluation: allegiance to the discipline and academic values
• Democratic evaluation: recognition of the values and interests of stakeholders and relevant communities of professional practice
In addition, we observed earlier in this chapter that discussions of evaluation stakeholders within language program evaluation have tended to start from the polarities of insiders and outsiders, and in particular from the role of the external evaluator in relation to program staff. The issues have been motivated by questions such as those in Table 12.5.
Table 12.5 Motivations for involving external evaluators
• Who has the evaluation expertise?
• Who is least likely to have a vested interest in the object of the evaluation?
• Who is least likely to be biased towards particular outcomes?
• Who is in a position to offer ‘fresh’ perspectives/new insights – at different stages of the evaluation process?
• Who is able to address issues that ‘insiders’ find uncomfortable or controversial?
• Who has greatest credibility? (and in whose eyes?)
The reasons behind increased stakeholder participation are well rehearsed from some perspectives in the language programme evaluation literature (e.g. Alderson and Beretta 1992; Kiely and Murkowska 1993; Weir 1991; Rea-Dickins and Germaine 1998; Tribble 2000), as summarised in Table 12.6.
Table 12.6 Motivations for involving stakeholders
Programme stakeholders:
• know the situation well and can better explain or offer different interpretations of phenomena;
• can help trace programme or project developments: the collective memory;
• have better recollections of decisions taken;
• understand human relationships and predispositions better: the baggage and the history;
• are more likely to accept the evaluation findings;
• will become more aware and critical of their professional practice;
• will develop an awareness of evaluation practice.
The concept of partnerships in evaluation can also be usefully traced within other ‘literatures’. For example, the supposed benefits are prominent within the domain of international development, with clear linkages here to large-scale English language programme evaluations sponsored by donor agencies such as DfID (Department for International Development) or SIDA (Swedish International Development Agency; see also chapter 13). Although the partnership discourses are powerful, it has been argued that these remain largely at the level of rhetoric:
1. Different categories of stakeholders are typically acknowledged, but in terms of quality and depth the discussion ‘remains on the level of generality and a taxonomy’ (Murphy and Rea-Dickins 1999: 93); the ambiguities in the theory and the praxis remain.
2. ‘The level of rhetoric concerning mutual respect, transparency, trust, dialogue, cooperation, coordination and genuine partnerships cannot eliminate underlying differences and structural relationships between aid providers and aid recipients’ (Buchert 2002: 83, cited in Brown 2004: 3).
3. There is ambiguity over the learning potential from an evaluation, i.e. whether learning has or has not taken place outside of the evaluation ‘senior management team’ (see Data 12.3, last row).
4. The majority of reported evaluations in language education have been externally driven, and the role of stakeholders has been limited to that of ‘respondent’ (with exceptions, e.g. Alderson and Scott 1992; Rea-Dickins et al. 1997).
We therefore have sympathy with the view of Brown (2004):
Quote 12.3
Brown on partnership discourses
Partnership is subject to mixed-interpretations and controversy . . . . However this resultant ambiguity of partnership is a significant reason for its appeal, and its vagueness: ‘can involve a denial of individual identity: we share everything . . . for the donors the great advantage of this model of partnership is legitimation in that it allows them to claim a certain authenticity’ (Stirrat and Henkel 1997: 75). (2004: 2; internal documents, University of Bristol)
The fact that the power base in many evaluation cases resides with the evaluation sponsoring agency, or with the evaluator/evaluation team, leads us to question the extent to which an evaluation can be democratic and whether it is in fact possible to ‘equalise whatever power knowledge provides’ (Weiss 1986: 144). Of central importance here is the nature of evaluation expertise, a facet that is notably understated in the stakeholding literature and is exemplified by Murphy and Rea-Dickins (1999: 95) as follows.
Data 12.13 Identifying stakeholder knowledge, power, interest and evaluation expertise
• Knowledge: about the project and about project evaluation.
• Expertise: relevant to the project and to evaluation.
• Control: power to initiate or stop action and participation.
• Budget control: power to take decisions about spending.
• Responsibility: recognition of the individual’s/group’s power and its potential to affect others.
• Benefits: as symbols of individual power and as potential to advance (an increase in one’s own knowledge and skills, for example).
• Loyalty: individuals may have more than one loyalty, but the direction of loyalty may change (as when, for example, one becomes integrated into a team). Loyalty in groups also has the potential to influence outcomes.
• Status: position within a hierarchy, or origin of a group or an individual.
• Distance: degree of acceptance of another’s right to take decisions or benefit personally.
Murphy and Rea-Dickins (1999) have also suggested that it is possible to test three propositions about stakeholder perspectives with reference to programme and evaluation implementation processes. These are:
Data 12.14
Hypotheses about stakeholding
1. Stakeholder perspectives defined by power relations offer more insights into evaluations than definitions based on job or position.
2. Stakeholder perspectives defined by power relations will have greater explanatory potential than considerations of cross-cultural differences when examining and understanding reactions to evaluation or an evaluation.
3. Understanding stakeholder perspectives will enable us to plan and organise evaluations more effectively, and to promote a greater and better use of their findings.
In this chapter, we have attempted to demonstrate the relevance of proposition 1 above. The second proposition draws on the work of Murphy (1997) which suggests that the existence of an evaluation culture reveals more about the utilisation of evaluation than attempts to explain utilisation through cross-cultural difference. And as Murphy and Rea-Dickins (1999: 95) assert: ‘The third proposition follows from the first two and would,
therefore, be true for any approach.’ What we need is a theory that explains the locus of power in relation to stakeholders, their stakes and their evaluation expertise (see Data 12.13). In other words, we need to achieve better explanatory power in relation to stakeholding in evaluation studies to assure: (1) comprehensive coverage of evaluation issues and focal points; (2) the validity of the data obtained and interpretations drawn; (3) the inclusion of different voices such that the weight of the data and interpretation is not balanced in favour of those who ‘shout loudest’; and, importantly, (4) uptake from the evaluation. Further, such a theory will have the potential ‘of making sure that educational change processes are appropriate to the contexts in which they are to be carried out’ (Tribble 2000: 327). To achieve these aims, we refer once more to the centrality of dialogue and debate within evaluation contexts in relation to professional practice, resonant with the now much earlier work of evaluation and curriculum theorists such as Stake (1967), Parlett and Hamilton (1972), Stenhouse (1975) and Kemmis (1986).
12.5
Summary
In respect of stakeholding, the focus of this chapter, we believe this theme to have been relatively unproblematised in the language program evaluation literature. We applaud enhanced engagement of different groups affected by program innovation and the elucidation of issues related to stakeholder participation. We have demonstrated ways in which stakeholder participation can be sought in evaluations – going well beyond the position of ‘stakeholder as informant’ – as a means (1) to enhance the validity of the issues that are addressed within a study, (2) to provide opportunities for collaborative engagement of different stakeholder groups, and (3) to learn about program implementation. In our view, utilisation and learning are two major goals for evaluation: why bother to evaluate if there is no evidence of learning or tangible actions as a consequence? As Murphy and Rea-Dickins (1999: 98) have suggested, the following are relevant questions for those planning to engage in ‘participatory evaluation’ studies.
Table 12.7 Planning for participatory evaluation
• How do I plan for open communication?
• How is partnership evaluation defined in this specific context?
• How do I put power and responsibility at the level where decisions will be most effectively taken? Is it possible to do this?
• How can I build in time for ‘learning to evaluate and participate’ in the evaluation? (See chapter 13.)
• How can I identify stakeholder interests?
• How do I manage ‘power and expertise’ in evaluation decision-making?
• How can I identify the different power relationships between stakeholder groups?
(from Murphy and Rea-Dickins 1999: 98)
Appendix 12.1
Student action points
Student Action List
• Some signal on the main page to show new information has been uploaded.
• Establish links between units and simplify operations.
• Establish synchronous chat system.
• Make course information more consistent across units, especially around assignment support, and provide all unit outlines.
• Feedback is often delayed – make it more timely, more frequent and from various tutors.
• FAQ list to help with technical problems.
• Encourage more active involvement from students.
• Enable students to post information.
• More personalised space and ability to save what is useful.
(from Timmis 2004: 15–16)
Appendix 12.2
Focus group instruments
M.Ed. TESOL Programme Evaluation: Nominal Group Technique – Record of Group Responses
Think about all the activities and tasks you have done (or been asked to do) using Blackboard on this MEd program. Which of these have been particularly helpful or unhelpful to your learning?
Helpful: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
Unhelpful: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
(Timmis 2004: 22)
Part 3 Evaluation Practice and Research
Introduction
In Part 3, we set out evaluation design and decision frames to support both readers’ evaluations and research into evaluation processes. Many of the points we make relate to the discussion of purposes and principles of evaluation in Parts 1 and 2. Our purpose in Part 3 is twofold: first, to guide and support evaluation practice, and second, to establish a critical orientation to this practice, so that evaluation is a process of situated enquiry rather than the use of generic instruments. A critical perspective is likely to:
• facilitate research into the evaluation process;
• inform on how evaluation is viewed and used within programs;
• illuminate how the evaluation process itself identifies constructs, roles, and learning within programs.
Chapter 13 takes large-scale evaluations as its focus and examines the design and implementation issues involved. We identify the nature of large-scale evaluations and pose a series of questions to be asked in relation to:
• focusing the evaluations;
• validating evaluation issues and questions;
• developing evaluation procedures and modes of analysis;
• developing evaluation skills;
• defining constraints within program evaluations; and
• developing an ethical stance.
Examples sampled from large-scale evaluations are used to illustrate some of the complexity and the need for a principled and systematic approach to evaluation design and implementation. Chapter 14 looks at teacher-led evaluations. We identify the dimensions of professional practice in language education that might be evaluated, and
set out contextual factors that can be related to the feasibility and usefulness of evaluation. Three sample studies are presented as:
• a means of illustrating design and implementation in practice; and
• a frame for engaging in situated enquiry: an examination of both program context and evaluation options which can guide the elaboration of novel evaluation designs which address program issues and meet expectations in different contexts.
Chapter 15 examines the development of management-led evaluations. The issues and options here share much with those of the preceding chapters: the frameworks for action and the sample studies are presented both to illustrate designs and to explore the complexities of evaluation practice. A key issue here is stakeholder involvement (see also chapter 12). While teacher-led evaluations focus largely (but not exclusively) on the pedagogical responsibilities of the teacher, management-led evaluations work on a bigger canvas. They have an integrating function, mediating an ongoing dialogue between wider discourses in terms of institutional policy and provision of resources, and quality learning experiences within programs. These chapters are set out not as a set of ready-made evaluation designs, but rather as an illustration of the complexities of design and implementation that derive from program contexts and evaluation purpose. We hope the instances and sample cases we present will serve as tools for action and frames for further enquiry into evaluation processes. For this reason, questions are a significant feature of our discourse: the answers which readers and evaluators provide will constitute an analysis of the program in context, focal points for emerging evaluations, and a basis for evaluation research.
13 Large-scale Evaluations
13.1
Introduction
In Part 2 in particular, we provided examples of evaluation studies undertaken for a variety of purposes that (i) reflected approaches and designs using different evaluation paradigms and (ii) drew on different constructs in applied linguistics research to inform the development of evaluation procedures. Several of those presented (e.g. chapters 5, 6, 7 and 8) were in fact examples of large-scale evaluations, but there we did not analyse in any detail the criterial features or key facets of evaluations of this kind. In this chapter, we consider central evaluation decision-making points that may frame approaches and designs of large-scale studies. In particular, we encourage our readers to address the following questions:
1. How do you recognise and define large-scale evaluations?
2. How do you frame evaluation focal points for an evaluation study, i.e. what is to be evaluated?
3. How do you validate evaluation issues and questions? Which are the ‘right’ questions to ask?
4. Which evaluation procedures will you use, and why? How will you analyse your data?
5. Why and how will you develop evaluation skills? (e.g. how can members who are part of a larger evaluation team be inducted into evaluation processes, knowledge and skills?)
6. What are the constraints that you will be working with in your evaluation?
7. What is the role of ethical mindfulness within programme evaluation contexts? (e.g. what specific ethical concerns might be raised?)
13.2
Understanding large-scale evaluations
How do you recognise and define large-scale evaluations? Considerable diversity is evidenced across large-scale evaluations and motivations for them may vary, but they tend to focus on major educational
innovations with significant financial backing (external or internal), with underlying agendas centred on:
1. concerns about ‘value for money’ and resourcing levels;
2. quality assurance;
3. decisions as to whether to continue the funding or axe the programme;
4. an awareness/perception of ‘major’ problems, i.e. chaos in the programme and the ‘call in an evaluator’ syndrome (see section 3.5 above for a discussion of the political dimensions of evaluations); and
5. tensions evidenced between the need to demonstrate accountability and the potential for an evaluation to inform program development.
First, we define some of the diversity in large-scale evaluations.
Table 13.1
Diversity in large-scale evaluations
1. The educational strategy
Large-scale evaluations may link to a national strategy or curriculum reform movement (e.g. the development of on-line teaching and learning, the introduction of a new curriculum or program) centering on a major innovation, or to the need to provide data as the basis for significant strategy developments (as in the case study of chapter 5).
2. The process of innovation
Evaluations may be carried out as part of a pilot study to establish the suitability of a given innovation (that is, to contribute to decision-making and policy formulation) and to inform on how it might be implemented (for example, chapters 7 and 8 above).
3. The level of resourcing for the evaluation
Evaluations typically have an allocation of resources which is a small proportion of the overall investment in the program. There are likely to be questions about how the level of resources influences the design of the evaluation, for example the extent to which it enables a classroom observation component or the inclusion of case studies (see chapters 5 and 6).
4. Stakeholder issues
Large-scale evaluations usually involve layers of different stakeholder groups, generally incorporating sizeable numbers of participants (for example, an external or government agency, teachers, inspectors), as observed in several of the evaluations reported in Part 2 (see also chapter 12). Evaluators may include program managers (as in the case study reported in chapter 7), independent experts, or a combination of both (as in the case study in chapter 5).
5. Sampling issues
Large-scale evaluation may involve multiple sites, for example schools in a particular district or nation-wide (represented in chapters 5, 7 and 8; see also the ELTSP examples below), or several different countries (see chapter 6). An evaluation may take a survey approach across a whole program, or a more focused study of selected sites.
6. Evaluation focus
Large-scale language program evaluation may focus on learners, relating to learning achievements and attitudes, on changes in teachers’ behaviour and attitudes, or on a combination of both (for example, chapters 7 and 8).
7. Audience
The reports of large-scale evaluations may be read by a limited number of decision-makers or disseminated more widely; many may never see the light of day (as noted by Rea-Dickins 1994): they may be commissioned for internal purposes, or be considered too sensitive or too context-specific and therefore be intentionally suppressed or circulated on a restricted basis; alternatively, although available on request, they are not visible in the public domain and so may easily be overlooked.
8. Structure of evaluations
Large-scale evaluation may involve a one-off study (for example those in chapters 5, 6, 7 and 8) or a series of iterated studies (see chapters 9 and 10).
It should be noted that some of these parameters impact directly on evaluation processes and implementation. Who are the commissioners of large-scale evaluations? These are, by and large, powerful players in terms of their potential influences on program or institutional policy, as exemplified in Table 13.2.
Table 13.2 Large-scale evaluations: commissioners and contexts

Major national government initiatives, e.g.
• The Literacy Hour in England and Wales – Department for Education and Skills (DfES, www.dfes.gov.uk)
• Primary Modern Language Project (chapter 8) – Irish Government and European Union (EU) funding
• Evaluation of the Western Isles (Mitchell 1992) – Scottish Education Department
• Evaluation of Modern Languages in the Primary School – Scottish Education Department (Johnstone 1999; 2000)
Major initiatives in development contexts, e.g.
• The Molteno Language and Literacy Project (Namibia and South Africa) – national project in collaboration with government agency (DfID) (Rea-Dickins 1994)
• English Language Teaching Secondary Project (ELTSP), Tanzania (see chapter 13) – Government of Tanzania, DfID, The British Council (Rea-Dickins et al. 1997)
Professional country-wide initiatives, e.g. PROSPER (Romania) (see Bardi et al. 1999; see also chapter 4) and PRINCE (Poland) (see chapter 15)
• Government in collaboration with an external agency
Developmental cross-country initiative, e.g. Project Development Support Scheme (PRODESS) (see also chapter 4)
• Government grant and British Council funding; some ‘internal’ resourcing
Institutional quality assurance, e.g. UK higher education, language schools, teacher training
• Quality Assurance Agency (QAA), Qualifications and Curriculum Authority (QCA), British Council, EQUALS
Aspects of quality development are, however, unlikely to be visible in any prominent way in large-scale evaluations, for several reasons linked to the use and quality of existing data.
Table 13.3
Limitations of large-scale evaluation
1. The timing of the evaluations themselves: rather than being fully integrated within a particular programme or project, as was the case in the PMLP (chapter 8), they are undertaken as an add-on component of the program.
2. Across the lifetime of a single project (e.g. the English Language Teaching Secondary Project (ELTSP); see Rea-Dickins et al. 1997), several evaluation studies may be commissioned and undertaken. However, they have a tendency to be discrete, self-standing and not tightly integrated within routine program operations (see Kiely et al. 1995).
3. Existing data sets may be incomplete, and the kind of data needed may no longer be collectable within the evaluation time-frame – for example, because it relates to classroom processes – or may not be feasible to collect ‘after the event’.
In the context of the ELTSP impact evaluation (Rea-Dickins et al. 1997) there was a concerted effort to make use of existing evaluation data.
Data 13.1
Using existing data sets in large-scale evaluations
A large number of monitoring mechanisms have been set in place for the ELTSP . . . A decision was taken during Visit 1 [as part of the ELTSP Impact Evaluation] to make use of existing data sets wherever possible, for the following reasons:
• the importance of exploiting data already gathered;
• these data have been gathered over time, which would include ‘concurrent’, in addition to ‘retrospective’, perceptions gathered at this end point in time;
• limitations on the usefulness of gathering additional data on account of the large scope of the ELTSP, implemented countrywide;
• additional collection of comprehensive data sets would have significant financial implications;
• time constraints in providing evaluation training for in-country members of the evaluation team;
• the prescribed reporting schedule.
(ELTSP Impact Study, Rea-Dickins 1996: 28)
Taken overall, then, and played out over several years, such evaluations:
1. may lack continuity with reference to each other and for the program as a whole; the fact that both those who lead these studies and those who participate may differ on each occasion is further testament to this inherent weakness;
2. may be limited in relation to the contributions they might make developmentally – over time – to any given project;
3. may be inhibited in terms of evaluation impact through lack of continuity or staff commitment, resulting in loss of uptake from evaluation findings.
How do you frame evaluation focal points for an evaluation study? What aspects of the programme are you going to evaluate? The potential areas on which a large-scale evaluation might focus are numerous, determined by the nature of the programme under review. For example:
Table 13.4 ELTSP evaluation focal points
Context
• The ELTSP – a large-scale (234 private and government schools), national and multi-site example over a ten-year period: 1986–97 (see Rea-Dickins 1996; Rea-Dickins, Mgaya and Karavas-Doukas 1997).
Aims
• To raise levels of school learners’ English language proficiency through innovations in both teacher education and inspection.
The innovations to be evaluated
• Standards of English language proficiency in secondary schools, through the implementation of an effective Reading Improvement Programme and Form 1 Orientation Course, with INSET to support these innovations.
• Teacher performance, through the effective use of a Senior ELT Advisor (project coordinator) and seven English language teaching officers, increased INSET (especially for English teachers in Forms 1 and 2), and a strengthening of the Inspectorate.
• Teacher trainer performance, through INSET Tutor training and strengthened PRESET in Teacher Training Colleges.
• The development of appropriate curricula to support each of the above initiatives.
The wide remit of the ELTSP impact evaluation study – also likely to be the case with other large-scale evaluations – can be explained as follows:
• It was commissioned towards the end of a ten-year period of innovation implemented on a national scale in a large and diverse country.
• It was reasonable to expect that discernible project impact would be possible and could be measured after a decade of project innovation (see Fullan 1998), in clear contrast with expectations of finding evidence of project impact within a much shorter time scale.
There is a tension, however (recalling Nuttall 1991, Concept 4.2, p. 65 above), between the potential of an evaluation to inform:
• on the one hand, on the impact of specific and discrete project activities (implicit in the itemised list in Table 13.4), and,
• on the other hand, on the wider success of the government’s English language teaching development strategy in relation to enhanced educational or economic developments.
In other words:
1. The potential of large-scale evaluations to yield impact evidence for the former, i.e. accountability data in terms of how program inputs have been used, is greater than their potential to yield impact evidence about program developments, i.e. what made the program components work.
2. The capacity of an evaluation to inform on overall impact, i.e. what overall changes the whole program has brought about, in relation to the achievement of wider strategic objectives is even more difficult to evidence.
13.3 How will you validate the evaluation issues and questions?
In Table 13.5 we set out an approach with reference to evaluation Terms of Reference (TOR) and questions.
Table 13.5 Critiquing terms of reference
Terms of Reference (TOR)
• Do you take evaluation TOR at their face value? (i.e. as they have been developed from the perspective of the commissioners of the research – with the evaluation design developed with little further discussion.)
• Do you attempt to broaden out the interpretation of the TOR? (e.g. to engage with the evaluation context through the participating stakeholders.)
• How do you assure that the ‘right’ questions are being asked? (e.g. are they all present in the specified evaluation TOR?)
• How do you elicit feedback from different programme players in various ways and at different points in the evaluation process?
• Why is it important to gain a broader perspective? (e.g. to identify a comprehensive set of themes and questions, and to ensure better coverage of pertinent issues, questions and facets of program implementation.)
• Are there opportunities for learning through stakeholder engagement with TOR, as well as a window to gain insights about program constructs, conditions, constraints and implementation?
• Does dialogue with stakeholder groups enhance validity? (e.g. through the development of valid data collection procedures and questions.)
The reader is also referred to section 12.3, p. 206 above, which focuses specifically on modes of stakeholder engagement in evaluation processes. What are the ‘right’ questions to ask? Data 13.2 presents a number of questions relating to the evaluation of a national network of resource centres (Rea-Dickins 1991; Hedge 1992; Reid 1995; see also chapter 15 below).
• How appropriate are these questions?
• What orientation do they have?
• Are they comprehensive enough?
• Do they reflect the concerns of relevant stakeholder groups? (see chapter 12)
• Will they yield the data required from the evaluation?
• Are they ‘workable’ questions?
Data 13.2 Sample evaluation questions
Large-scale evaluation context: National Network of Resource Centres (RC)
Sample evaluation questions:
• How many RC have been set up?
• How many RC staff have been trained?
• How often are the RC used?
• Who uses the resource centres?
• Which materials have been developed?
• How are the materials used?
• What training for RC staff has been provided?
• How are staff resources & expertise used?
• Does the RC have a strategic plan and, if so, how is this implemented?
These questions were fashioned around project objectives, inputs and anticipated processes (see Table 5.1, p. 77 above and Concepts 4.1 and 4.2, pp. 64 and 65 above), and a distinction can be made between those evaluation questions generated in Data 13.2 that focus on:
1. a ‘measured product’, linked usually to a specific ‘performance indicator’, on the one hand, and
2. those which lead to an explanation as to how these measured achievements are actually working and being implemented, on the other.
Taking the resource centre example provided above:
• The fact that the target number of resource centres might have been established and centre managers recruited and trained does not yield adequate data on the basis of which sound judgements can be made about how efficient and effective the implementation of a resource centre network actually is.
• Such questions do not inform on issues of program quality. For this, different kinds of questions need to be asked.
The last question in Data 13.2 on resource centre strategy could be defined more precisely through a series of micro-evaluation questions that have the potential to yield more detailed and action-related evaluation data, such as:
Data 13.3 Exemplar evaluation questions
• What is the range of strategies the resource centre uses to publicise its activities and raise its profile?
• How has the resource centre worked towards establishing the size of its target groups?
• How effective are the strategies the resource centre uses to reach target users?
• Does the resource centre have attractive publicity?
• Does the resource centre have a policy of publicity, promotion and service enhancement which contributes to its wider implementation strategy?
(from Reid 1995)
Readers are encouraged to consider carefully the framing of their evaluation questions in relation to fitness for evaluation purpose, audience, and utilisation intentions.
13.4 Which evaluation procedures will you use, and why, and how will you analyse your data?
A large-scale evaluation:
• may focus on any curriculum aspect;
• may potentially draw from the full range of social science research procedures;
• may use several procedures, rather than one – a particular feature of large-scale evaluations; and, therefore,
• may use a variety of means to analyse the data gathered, which, in turn, has implications for data management (see section 13.5).
In Table 13.6 we provide an overview of some of the procedures that have been successfully used in large evaluations and that, in turn, have implications for evaluation training. Readers are invited to reflect on evaluation studies with which they are familiar and to analyse the evaluation focus, the procedures used, the nature of data capture and the means of data analysis.
Table 13.6 Evaluation context, procedures and data analysis

Teacher development (classroom and teacher training contexts)
• Procedures: systematic observation; ethnographic observation (chapters 9 and 10); self-assessment checklist, e.g. as a basis for discussion with a trainer (Rea-Dickins 1994); self-report (chapter 8) and interviews; questionnaires
• Data collected: count data; field notes and narrative accounts; primarily qualitative data, e.g. autobiography; qualitative data from open-ended questions and data for numerical analysis from closed questions; ethnographic accounts
• Modes of analysis: data-base (e.g. ACCESS) or spreadsheet (e.g. EXCEL); development of categories; narratives; qualitative software, e.g. QSR NUDIST, ETHNOGRAPH
Classroom innovations
• Procedures: documentary analysis, e.g. lesson plans, syllabuses, Head of Department Report (chapter 7); systematic observation (Mitchell 1992; Lawrence 1995); ethnographic observation (chapters 9 and 10)
• Data collected: count data; field notes
• Modes of analysis: numerical analysis using statistical software, e.g. SPSS; qualitative software, e.g. WinMAX (Kuckartz 1998)
Teachers’ language proficiency
• Procedures: observation bandscales in classroom and teacher training contexts (chapter 5); language tests (chapter 5); self-report (chapter 8)
• Data collected: count data; test scores; narrative reports
• Modes of analysis: numerical analysis using statistical software, e.g. SPSS; qualitative software, e.g. WinMAX (Kuckartz 1998)
Students’ language proficiency
• Procedures: self-assessment scales; tests, both pencil-and-paper and performance measures; observation-driven assessment, e.g. language sampling (Gardner and Rea-Dickins 2002)
• Data collected: count data; test scores and language samples on performance tests
• Modes of analysis: statistical software, e.g. SPSS, FACETS, or qualitative analysis
Computer-mediated teaching and learning
• Procedures: student focus groups (e.g. Timmis 2004); nominal group technique (e.g. chapter 10; Timmis 2004; see chapters 12 and 15)
• Data collected: lists, summaries of points and student accounts
• Modes of analysis: development of categories or narrative accounts
Resource centres
• Procedures: documentary analysis (PISET, Rea-Dickins 1992)
• Data collected: corpora
• Modes of analysis: Wordsmith Tools (Scott 1999); WinMAX (Kuckartz 1998)
A particular challenge for large-scale evaluations that involve different layers of data capture in relation to diverse program aims is:
1. managing the data, ranging from the form in which it is prepared and presented for data entry, on the one hand, to
2. how it is all brought together as the basis for making sense of the data through interpretation of the available evidence, on the other.
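As a sketch of what ‘bringing it all together’ can mean in practice, the fragment below merges three hypothetical data sets on a shared school identifier before interpretation begins. The file names and column names are assumptions made for illustration, not those used in any of the evaluations discussed here.

```python
# A minimal sketch using pandas; file and column names are hypothetical.
import pandas as pd

tests = pd.read_csv("teacher_test_scores.csv")        # school_id, teacher_id, score
questionnaire = pd.read_csv("cover_sheet.csv")        # school_id, teacher_id, q1..q13
inspection = pd.read_csv("inspection_summaries.csv")  # school_id, zone, reading_periods

merged = (tests
          .merge(questionnaire, on=["school_id", "teacher_id"], how="outer")
          .merge(inspection, on="school_id", how="left"))

# Basic checks before any interpretation: how complete is each strand of data?
print(merged.isna().mean().round(2))               # proportion missing, per column
print(merged.groupby("zone")["score"].describe())  # score summaries by inspection zone
```

The point of such a step is less the software than the discipline it imposes: gaps and mismatches across data sets become visible before conclusions are drawn.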
13.5
Why and how will you develop evaluation skills?
‘Learning to do’ evaluation
Acquiring the skills to define and prioritise central evaluation issues and questions, as well as to develop valid evaluation designs and procedures, is central to and underpins any quality programme evaluation, of which large-scale evaluations are no exception. It is thus important to consider evaluation ‘skills development’ in order to promote evaluation ‘best practice’.
Table 13.7 Evaluation participant skills
• Who is to undertake the evaluation: a single evaluator, a small team or a larger group of professionals? Why?
• Why use external evaluation consultants?
• What strengths do program participants bring to an evaluation?
• How can ‘novice evaluators’ develop their evaluation skills? What forms could these skills’ development activities take?
• Can evaluation skills development be part of an evaluation study?
We suggest that:
1. Evaluation consultants are very probably experts in the subject-specialist sense, with insights into the particular construct(s) under investigation; they will usually bring with them the skills to develop appropriate methodologies – designs and procedures to research these domains – although this may not always be the case (see also Table 12.5, p. 216 above).
2. The imperative to implement more participatory approaches to program evaluation requires that those who become involved as, say, members of an evaluation team should have the opportunity to develop the research skills necessary for the conduct of the evaluation.
3. The laudable aim in 2 above may be problematic: ‘It is very difficult to conduct participatory evaluation research at the level required by an impact study of the kind required by the TOR where some partners lack basic skills in relevant areas’ (Rea-Dickins 1996: 28).
Murphy (1995; 1996), most notably, writes about ‘learning to do evaluation’ in the apprenticeship sense, where novice evaluators work alongside more experienced staff to develop their skills and to position them for independent evaluation work in the future (Loh 1995). The apprenticeship analogy is also important in that it positions evaluations as socially situated, whereby evaluation participants and/or evaluation team members:
• may contribute to decisions about evaluation focus and the options available (e.g. the extent of piloting within available resources; see chapters 5 and 8 in relation to the construction and piloting of appropriate and valid evaluation procedures);
• may develop their evaluation skills both through the actual trialling of the data collection procedures and through specialist workshops (e.g. in using classroom observation bandscales, chapter 5; and testing procedures, see chapters 5 and 8).
Evaluation training embedded within specific evaluation contexts provides key opportunities for the situated learning of evaluation skills, a point further reinforced in the PRODESS Guidelines for Evaluation (Kiely et al. 1995).
Quote 13.1 Kiely et al. on evaluation skills development as socially situated It is useful to view the period of training as an internship; project members will have tasks to carry out, but they and their supervisors need to remember that learning tasks takes time and patience to develop understanding of criteria, constraints and so on, and, like other kinds of learning, will probably include some errors. (1995: 12)
Data management
1. Within the context of large-scale evaluations, the remit of the program may be:
• large and extremely diverse;
• wide-ranging in terms of questions, data procedures, data sets and analysis.
2. A capacity especially crucial for large-scale evaluations relates to aspects of data handling in particular, e.g. the organisation, cleaning and storage of data sets for easy retrieval.
3. Specific advice on how to collect, code and manage data is essential to the success of any evaluation. This is especially so in the large-scale evaluation context where there are several evaluation team members collecting data across a large number of schools.
Table 13.8 Questions about data management in evaluations
• Why is it important to provide guidance in this area?
• What are the threats to the validity of the data if this part of the evaluation is not systematic?
• What are the threats to data interpretation?
• What are the threats to the impact from and the utilisation of evaluation findings?
• What guidance would you give for coding data in – as one example – an evaluation study which involved 126 diverse and widely dispersed schools in which data was being gathered by different evaluators?
This is how one evaluation handled the last question in Table 13.8. The example below illustrates how novice evaluators – in the evaluation of teachers’ English language proficiency (chapter 5) – were inducted into evaluation processes and procedures, through instructions for coding data (Data 13.4). This, of course, also needs to be reinforced through face-to-face training sessions.
Data 13.4
Instructions for coding the cover sheet questionnaire
1. Here is a coding sheet for the Cover Sheet Questionnaire [see Data 13.5 below].
2. Use a separate coding sheet for each individual school. Each sheet has a capacity for twelve test-takers. Use two (or more) sheets if the number of test-takers exceeds this (e.g. college students).
3. In the space provided, enter relevant information about the test administration and coding:
• name of school
• date of test administration
• name of invigilator
• number of test-takers
• name of coder
• date of coding
4. Enter each test-taker’s ID number in the extreme left-hand column. The ID number must reflect the number that has been given by the evaluation team for this teacher.
5. Transfer all the responses of the first test-taker to the first line of the table provided on the coding sheet, following the instruction below. The numbers 1–13 on the coding sheet refer to the items 1–13 on the questionnaire. Leave the space blank where the test-taker does not provide any answer. If the answer is illegible, leave a blank.
Take the utmost care at this stage to avoid mistakes. After copying the first test-taker’s responses on the sheet, check every entry before proceeding to the next test-taker.
6. Repeat the same procedure (as described in 5 above) for the next test-taker, and for all the other test-takers.
7. When you have completed the coding, put the coding sheet on top of the test-takers’ questionnaires, and secure the documents together with a paper clip.
8. Make photocopies of all the documents and keep them in a safe place.
9. Put all the original coding sheets, along with the filled-in questionnaires, into an envelope, and send to X. Keep the rest of the test booklet in a safe place.
The actual coding sheet on which members of the evaluation team were asked to enter their data is provided in Data 13.5.
Data 13.5
Coding sheet for cover sheet questionnaire
School Name: ———————————— Date of Test Administration: —/—/1999
Number of Test-takers: ———————— Name of Invigilator: ——————
Name of Coder: ———————————— Date of Coding: —/—/1999
Column headings: ID Number | 1 | 2 | 3 | 4 | 5 | 6 | 7a | 7b | 8 | 9 | 10 | 11 | 12 | 13
Rows: 1, 2, 3, 4 . . . (one row per test-taker)
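A data-entry step usually follows paper coding sheets of this kind. The sketch below assumes, purely for illustration, that each school’s sheet is later typed into a CSV file with columns id_number and q1–q13; it flags the entry errors the instructions above are designed to prevent. None of these file or column names come from the original study.

```python
# Hypothetical layout: one CSV file per school, columns id_number, q1 ... q13.
import csv
from pathlib import Path

def check_coding_sheet(path):
    """Return a list of problems found in one school's coding-sheet file."""
    problems, seen_ids = [], set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tid = (row.get("id_number") or "").strip()
            if not tid:
                problems.append("row with missing test-taker ID")
            elif tid in seen_ids:
                problems.append(f"duplicate test-taker ID {tid}")
            seen_ids.add(tid)
            blanks = [k for k in row if k.startswith("q") and not (row[k] or "").strip()]
            if blanks:
                problems.append(f"{tid or '??'}: blank responses for {', '.join(blanks)}")
    return problems

for sheet in sorted(Path("coded_sheets").glob("*.csv")):
    for problem in check_coding_sheet(sheet):
        print(f"{sheet.name}: {problem}")
```

A routine check of this sort is one way of operationalising the ‘clean and accurate records’ referred to below, whatever software the team actually uses.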
The benefits from such training and the reasons behind establishing an infrastructure to cope with multiple data sets gathered by different members of an evaluation team include the following:
• Clean and accurate records will be maintained.
• Poor data management will result in loss of data, which, in turn, represents a potentially serious threat to the integrity of evaluation data and, ultimately, to the overall interpretation of the findings and any conclusions drawn.
It is interesting to note that a central feature of the ELTSP impact evaluation was the integration of a specialist attachment of six months to an evaluation unit, where skills development in managing and analysing data was the main focus for a senior member of the national evaluation team. Above we have provided an example of how evaluation skills can be developed. Ultimately, however, the question remains as to whether the resources are available to induct large numbers of evaluation team members in novel procedures.
13.6 What are the constraints that you will be working with in your evaluation?
There are a number of constraints, or potential inhibitors, that surface with particular reference to large-scale evaluation studies. We suggest that the notion of contextual constraint can be defined in relation to four factors:
Table 13.9 Defining contextual constraints
1. What are the factors affecting the implementation of the programme? (see below)
2. What are the factors affecting evaluation processes? (see below)
3. What are the factors affecting the interpretation of the data and findings? (see below)
4. What are the factors affecting the proposals for action? (see also chapter 14)
Constraints affecting program implementation and interpretation
Rea-Dickins and Germaine (1992) note the need to be alert to contextual constraints. Questions that might highlight factors that can affect program implementation are detailed in Data 13.6 with reference to the ELTSP impact evaluation.
Data 13.6
Identifying constraints: conditions of implementation
To evaluate any program effectively, it is important to analyse the conditions within which the innovation is taking place. The following are illustrative guiding questions:
• How would you define the country’s economic context? Does it have a multi-job economy, such that it affects those who have professional contacts with various aspects of the project/programme evaluation?
• What are the levels of resourcing provided by the Ministry of Education and Culture for schools and for teacher development activities?
• What are the levels of leadership like in schools?
• Is there a trained management team cadre for teacher support within schools?
• What is communication like? Are there transport problems? How big is the country?
• What are the factors that might make project implementation and monitoring tasks more difficult?
However, constraints may also impact on the data interpretation phase of an evaluation study, as explained in the conclusions of the ELTSP impact study final report.
Quote 13.2
Identifying constraints to effective implementation
It is important that the findings presented in this evaluation report are fully considered with reference to the overall context of education provision and English language teaching in the country more specifically. The following comments represent essential background detail against which the ‘success’ of this programme of innovation may be judged . . . and are seen as crucial in coming to conclusions about the impact and effectiveness of the ELTSP. (Rea-Dickins et al. 1997: 222)
Further, field notes provide another means to inform on project implementation processes. Data 13.7 provides data about the ELTSP Reading Improvement Programme.
Data 13.7 Extracts from inspectors’ reports
• Inspectorate concerns about adequate numbers of English teachers: ‘Of the 21 schools inspected/visited, 8 had sufficient teacher’ (Highland zone, 1990).
• Untrained teachers surfaced as a constraint, compounded by issues of teacher transfer: ‘The movement of teachers and the recruitment of untrained teachers in private secondary schools greatly affected the effective implementation of the project in schools’ (Highland zone, January–June 1991).
• The requirement for 6–8 English lessons per week was not implemented across all schools; neither were the basic administrative arrangements set out by the project for the reading program: ‘the schools which offered 6–8 periods did fairly well in the reading programme’ (Eastern zone, January–June 1992); ‘There was very little reading going on. Most of phase one schools visited did not have any reading records, and most of the records recorded an average of three class library books read throughout the year’ (Lake zone, July 1992–February 1993).
(Rea-Dickins et al. 1997: 222–3)
From observations such as these (i.e. as part of the evidence base the ELTSP impact evaluation study was able to draw on), it became possible to summarise factors associated with effective implementation of the reading programme on the one hand, and those associated with less effective implementation on the other, thus identifying specific programme implementation constraints.
Constraints affecting evaluation processes
We exemplify constraints on evaluation processes through the lens of sampling and of time-resourcing issues.
Sampling
In the SAE evaluation (chapter 6), for example, we highlighted some of the problems with our respondent base. Those who did participate were representative of the target groups, but were self-selected and were probably, in the words of Clegg et al. (2000: 82), ‘a reasonably enthusiastic constituency’. As a consequence, it became impossible to achieve a balanced sample in order to make valid comparisons between the participating groups according to the different variables (e.g. equal numbers of teachers new to as well as experienced in using SAE). Other factors affecting sampling decisions are further illustrated in Data 13.8.
Data 13.8 Sampling in large-scale evaluation contexts
• It was not possible to draw a random sample of schools for this evaluation. Apart from the well-rehearsed limitations of experimental design studies and random sampling, the difficulties in selecting schools in this district were compounded by a variety of contextual factors: e.g. access to some schools was not considered desirable, nor in some cases possible, on account of teacher strikes, teacher absences and other forms of disruption.
• As a consequence, the selection of schools, teachers and school principals was based on non-probability sampling through the use of quota sampling: a deliberate selection of samples that reflects the known composition of the whole population.
• Variables such as the following were used to inform the selection: project/non-project schools; rural/urban schools; school location/zone; well-resourced/poorly resourced; size of school; performance in national examinations.
• Based on these variables, schools were selected in consultation with the project team, who also had to take into account the extent to which access to schools was feasible in terms of travelling distance and on grounds of safety. Every attempt was made to capture the diverse conditions prevailing in each district.
(from Rea-Dickins 1994: 29–30, ODA Report 5921)
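A rough sketch of the quota-sampling logic described in Data 13.8 is given below. The quota targets, file name and column names are invented for illustration; in the actual study the final choice within each stratum was negotiated with the project team rather than drawn at random.

```python
# Hypothetical school register with columns: name, project_status, location.
import pandas as pd

schools = pd.read_csv("school_register.csv")

# Quota targets chosen to mirror the known composition of the whole population.
quotas = {("project", "rural"): 4, ("project", "urban"): 4,
          ("non-project", "rural"): 3, ("non-project", "urban"): 3}

selected = []
for (status, location), target in quotas.items():
    stratum = schools[(schools["project_status"] == status) &
                      (schools["location"] == location)]
    # A random draw stands in for the deliberate, negotiated selection used in practice.
    selected.append(stratum.sample(n=min(target, len(stratum)), random_state=7))

sample = pd.concat(selected)
print(sample.groupby(["project_status", "location"]).size())
```

The sketch makes the key property of quota sampling visible: the sample composition is fixed in advance, while the choice of individual schools remains a matter of judgement and feasibility.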
Time
Typically, time constraints are directly linked to the resource allocation available for an evaluation study. Points 2, 3 and 4 in Table 13.10 will be particularly constrained by resourcing levels.
Table 13.10
Evaluation constraints
Constraints
1. In the development of evaluation strategy: e.g. is there a strong preference for, or imposition of, the cheaper, one-shot design rather than one that also engages with actual classroom process through longitudinal data capture, i.e. ‘gathering data over time’, thus reflecting a ‘process’ view of curriculum experiences rather than the evaluation of curriculum outcomes?
2. In the development of the evaluation procedures: e.g. is the strong argument for ‘time well spent’ on the piloting and validation of data collection procedures likely to be accepted?
3. On skills development: e.g. ‘In planning training it is important to relate training needs to project planning so that time and effort are used efficiently’ (Prodess Guidelines, p. 11), but are funds actually released for this?
4. In developing a participatory approach: e.g. is there recognition of the value of time spent engaging in discussion with a range of stakeholder groups, not only as informants but also as participants in data analysis and interpretation processes?
5. In reporting/presenting the final report: e.g. is this expected ‘too soon’, without appropriate dissemination opportunities having been engaged with?
In our experience, constraints such as these:
• may give rise to tensions in relation to evaluation planning, implementation and reporting processes;
• may result in a trade-off between validity on the one hand and practicability and feasibility on the other.
With reference to point 4 in Table 13.10 in particular, loss of opportunities for dialogue and ‘critical debate’ (see chapter 4) between program participants and an evaluation team is likely to impact negatively on uptake of evaluation recommendations, decision-making and subsequent action.
13.7
Developing ethical mindfulness
All evaluations raise ethical issues (see chapters 14 and 15; see also Lynch 2002). In Table 13.11 we identify five dilemmas that have particular resonance for large-scale evaluations.
Table 13.11 Ethical dilemmas
1. Respect for individual identity and anonymity: e.g. how can this be assured if working with, say, case studies in different distinctive educational zones or a number of small states, the data from which will reveal individual identities, schools or members of senior management teams?
2. Informed consent from all participating informants: e.g. how can this be achieved prior to data collection on a country-wide or multinational basis? How is it actually feasible to meet with and distribute consent forms to parents who don’t accompany their children to school, whose homes are inaccessible by road and whose children might walk 2–3 miles to school?
3. Use of data: e.g. consent forms will explain how data are to be used in both the short and long term. But what leverage does an external evaluator have on how data collected or data included in a report will be used? What is the shelf-life of data?
4. The ‘confidential annex’: there may be certain information arising in the course of an evaluation that should not be made available more widely. Should an annex of this sort be used for whistle-blowing?
5. Evaluator loyalty: e.g. is this articulated and clarified? Is loyalty owed to the evaluation sponsors or to the profession in which the evaluation is embedded?
Evaluators need to be aware of:
• a rising tide of ethical governance and charters within Europe;
• the ethical guidance available from the full range of learned societies and associations (e.g. ILTA, BAAL, BERA, BPS, Prodess Guidelines; see chapter 16 below).
Evaluators need to ask:
• To what extent is such guidance appropriate for evaluation studies, given that it is frequently offered in relatively unproblematised ways from a European or North American perspective?
• Are there ‘other appropriate ways’ of engaging in research and evaluation studies?
Quote 13.3 International students’ comments on ethical guidelines and practice Very often, we accept the good practice codes [of various professional organisations] e.g. BAAL’s Recommendations on Good Practice in Applied Linguistics http://www.baal.org.uk/ goodprac.htm)...this is however, based on the assumption that we human beings have the same understandings of good practices. It is, however, not always the case. Different cultures have different interpretations of the same ‘good’ practice at a particular point of time... Senior authorities in [country X] make the decision whether an individual can go to an educational centre and do research. The process begins with a letter from the university or the place where the researcher works/studies to introduce the researcher to the senior educational administrators in the area; they, in turn, write a letter introducing the researcher to the specific school/s where he/she needs to investigate. All other people, including the head teacher, and the class instructors, have to cooperate thereafter. In addition, we do not have a data protection act as you have here. Everything happens as an unwritten agreement between the researcher and the participants; they are told that their information will not be disclosed to anyone . . . The type of agreement is, at least so far, unsystematic. (International PhD students, University of Bristol, September 2004)
In summary, what might appear to be reasonably straightforward advice to assure an ethical approach for an evaluation study may be neither appropriate nor feasible in another cultural context. There are time implications for a discussion of these issues and/or dilemmas.
13.8
Summary
In this chapter we have drawn together various strands in the implementation of large-scale evaluations. These are situated in the evaluations described in Part 2 and in other studies. The issues are in many instances presented as sets of questions for evaluators to address in context as part of the process of evaluation design and implementation. We emphasise three aspects of large-scale evaluations, all of which relate to validity: (1) understanding the evaluation construct, terms of reference and evaluation questions; (2) data management – sampling, data gathering and analysis; (3) participants in the evaluation process and the report, particularly in relation to evaluation use, contextual constraints and ethical requirements.
14 Teacher-led Evaluations
14.1
Introduction
In this chapter we set out some evaluation projects which teachers can carry out in their own teaching contexts. These correspond in some ways with action research and reflective practice. They build on notions of professional practice as enquiry, professional development through enquiry, and the centrality of contextual understanding in solving curricular problems and enhancing opportunities for learning. Teacher-led evaluations therefore are opportunities to evaluate curricular resources, that is, learning materials and classroom tasks, resources such as information technology and libraries, and aspects of interaction in the teaching–learning process. In addition, they serve a professional development function, whether related to formal appraisal or performance management processes, or to more personal aspects of professional learning. The findings of such evaluations contribute to the management task in language programs, whether within the school or institution, or related to the operation of mandates from external stakeholders. Teacher-led evaluations relate to innovation in two ways:
1. They are likely to work best where there is a culture of innovation: where teachers are encouraged to experiment with materials, tasks and activities as part of their role in facilitating language learning.
2. The innovative aspects of the curriculum (including, perhaps, the practice of evaluation) require a management-of-change dimension to the evaluation. This may mean enquiry into a given resource – a new course book or a computer in the classroom – as a change of practice as well as a set of curricular practices in its own right.
14.2 The scope of teacher-led evaluations
Teacher-led evaluations take aspects of the planned or intended curriculum as focal points. Reasons for a particular focus may derive from concerns about effectiveness or efficiency, that is, the extent to which aspects of the program promote learning as intended, or represent a good use of resources. A list of possible foci includes:

1. Exploring students' needs and wants.
2. Evaluating course books and materials.
3. Designing and evaluating task types.
4. Utilising resources including IT.
5. Developing an evidential base for a particular view of program quality.
6. Exploring teaching–learning interactions.
7. Designing and evaluating assessment formats and processes.
8. Learning through self evaluation.
The case studies examined in Part 2 addressed several of these focal points; for example, the evaluation in chapter 6 considered the use of tasks, and those in chapters 7 and 8 looked at teaching–learning interactions. In these evaluations, the lack of a classroom observation component and the limited involvement of teachers in documenting their practices made it difficult to draw firm conclusions. Teacher-led evaluations addressing these focal points can be developed along two lines:

1. They can be focused on one particular area, or a combination of areas.
2. They can be framed as one-off studies or, more usually, as ongoing studies shaped by and adapting to the evolution of practice.

Teacher-led evaluations can be carried out in at least three ways:

1. Teachers acting alone.
2. Teachers working in pairs or small groups.
3. Teachers and students working together.

There is also a role for teachers working with administrators to develop databases over time, particularly on entry profiles, exit profiles, attendance records, routine evaluation findings and self-access centre use.
14.3 Evaluation projects
Getting started

The first step is to determine the potential for evaluation in the language program context: the extent to which innovation and evaluation are valued. This point is important for two reasons: from an ethical perspective, it is important to work with the consent of all program participants and stakeholders; and from a pragmatic perspective, evaluation is unlikely to lead to program improvement if key people are not engaged or are unhappy with it. Some or all of the characteristics of the language program context listed in the first column of Table 14.1 may benefit from exploration as the first stage of the evaluation; these characteristics relate differently to the evaluation focal points, as set out in the table.
Table 14.1 Context factors in teacher-led evaluations

Characteristics which influence readiness for evaluation → Evaluation focal points (likely to be a positive characteristic for):

• Teachers' sense of opportunity to change practice, such as availability of new resources → Evaluating course books and materials; Utilising resources including IT
• Teachers' perceptions of need for change → Exploring students' needs and wants; Designing and evaluating task types; Designing and evaluating assessment formats and processes
• Teachers' perception of evaluation as a contribution to improved practice → Learning through self evaluation
• Sufficient time for carrying out the evaluation → Exploring teaching–learning interactions
• Teacher involvement in quality management → Developing an evidential base for a particular view of program quality
Where there is a sense at the end of this initial assessment of readiness for and appropriateness of evaluation, the next stage is to decide on specific evaluation questions. We identify three types of evaluation question which are particularly relevant to practice in this context:

• To what extent is [curricular component] effective in the manner intended?
• In what ways is [curricular component] effective?
• Are there unintended effects of the [curricular component]?

In each case two further questions are likely to be of interest. First, an explanation question:

• Why is it effective in these ways?

And second, an improvement question:

• How can it be improved?

Accommodating ethical concerns

There are three concerns that we wish to highlight here:
1. The need to protect students and preserve the integrity of the program, so that maximum effort on the part of the teacher is directed to furthering the students' learning of the language, and so that this effort is not limited in any way by the teacher's concurrent involvement in evaluation.
2. The need to protect teachers from inappropriate use of evaluation findings, for example in promotion or contract renewal decisions, where the evaluation provides a critical or otherwise unwelcome perspective on institutional policy.
3. The problem of dealing with data which are outside the scope of the evaluation, but which uncover issues which it may be illegal or inappropriate to ignore.

Gathering and analysing data

In Table 14.2 we set out a framework of data types particularly relevant to teacher-led evaluations. The framework of approaches to data collection and analysis set out in chapter 13 for large-scale evaluations is more detailed and may be of use where teachers are working with large data sets. A key distinction is made between data which represent behaviour and data which represent attitudes. Reported behaviour – for example, what program participants state in interviews or in journals – differs from observed behaviour, and may reflect attitudes to program activities as much as actual behaviour.

Table 14.2 Data types (means of collection and the types of data they capture)

• Interview/Questionnaire: behaviours ✓ (reported); attitudes ✓
• Group discussions: behaviours ✓ (reported); attitudes ✓
• Fieldnotes: behaviours ✓; attitudes ✓
• Diaries, journals: behaviours ✓ (reported); attitudes ✓
• Audio/video recording: behaviours ✓; attitudes ✓
• Classroom observation schedules: behaviours ✓
• Documents – students' work, tests, coursebooks, teaching materials, syllabuses: behaviours ✓
Table 14.3 lists published resources for the development of instruments for teacher-led evaluations. In the context of teacher-led evaluation, three practices are likely to prove conducive to effective enquiry and appropriate action:

1. Focusing on methods which promote learning. Group discussions and diary-keeping are likely to have beneficial pedagogical impact.
2. Involving program participants as data gatherers and analysts as well as informants, and considering training for these roles as both a curriculum and an evaluation resource. (Note: this is particularly relevant for students on language courses and teacher development programs.)
3. Considering meetings and workshops as ways of reporting evaluations, rather than written reports only.
Table 14.3 Resources for data gathering in teacher-led evaluations (data types and published resources on instrumentation)

• Questionnaire: Dornyei (2003); Weir and Roberts (1994)
• Interview: Drever (1995); Kvale (1996)
• Group discussions: Kreuger (1994); Wilcox (1992)
• Fieldnotes: Stake (1995)
• Diaries, journals: Nunan (1989)
• Audio/video recording: Nunan (1992a and b)
• Classroom observation schedules: Simpson and Tuson (1995); Nunan (1992a and b)
• Documents – students' work, tests, course books, teaching materials, syllabuses: Blaxter, Hughes and Tight (1996); McNamara (2000) (tests); Tomlinson (2003) (materials)
14.4 Sample projects
In what follows we set out some sample projects that you can undertake in your own contexts and which all draw on the principles and practices we have discussed and illustrated in Parts 1 and 2 of the book.

Course book evaluation

Context
(This is adapted from an unpublished evaluation carried out by one of the authors (Kiely) in a private sector language school.) A group of teachers on an English language teaching program work with a course book purchased by students. The overall aims of the program promote communication and interaction. The program requires all six class groups at this level to use the same course book, both to simplify purchasing and to facilitate movement between classes by students.

Evaluation questions
• To what extent does course book X facilitate interaction and communication in lessons?
• In what ways do teachers supplement course book materials?
• What is the best way to make improvements – change the course book, or revise the way it is used?
• Why do teachers supplement the course book activities?

Data collection and analysis
Each teacher maintained a log, documenting for each lesson the time (in minutes) spent on course book activities and on supplementary materials. For each activity where interaction and communication (student–student; teacher–students) were key elements, an impressionistic judgement was noted (buzz checks: ✓✓✓ a real buzz; ✓✓ some engagement; ✓ flat, laboured). Each teacher maintained a table for each unit (approximately two weeks of class), and at the end of this period the teachers met to tally and compare.
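Where teachers keep these logs electronically, the end-of-unit tally can be automated. The following is a minimal, illustrative sketch only (in Python): the log format, the field names and the numerical buzz scale (3 = a real buzz, 2 = some engagement, 1 = flat) are our assumptions for the example, not part of the original evaluation.

from collections import defaultdict

# Each log entry records one activity in one lesson: the teacher, the unit,
# whether the activity came from the course book or from supplementary
# materials, the minutes spent, and an impressionistic buzz score (1-3).
log = [
    {"teacher": "A", "unit": 1, "source": "coursebook",    "minutes": 35, "buzz": 2},
    {"teacher": "A", "unit": 1, "source": "supplementary", "minutes": 20, "buzz": 3},
    {"teacher": "B", "unit": 1, "source": "coursebook",    "minutes": 50, "buzz": 1},
]

def tally(entries):
    """Total minutes and mean buzz score per unit and activity source."""
    minutes = defaultdict(int)
    buzz = defaultdict(list)
    for entry in entries:
        key = (entry["unit"], entry["source"])
        minutes[key] += entry["minutes"]
        buzz[key].append(entry["buzz"])
    for unit, source in sorted(minutes):
        scores = buzz[(unit, source)]
        mean_buzz = sum(scores) / len(scores)
        print(f"Unit {unit} | {source:13} | {minutes[(unit, source)]:4} min | mean buzz {mean_buzz:.1f}")

tally(log)

Comparing the totals per unit shows the balance of course book and supplementary time, and the mean buzz scores give the teachers a concrete starting point for their end-of-unit discussion.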
Evaluation issues
This evaluation project provides an opportunity to engage with the following issues:
• Developing a professional discourse to articulate teachers' constructions of materials.
• Initiating a descriptive account of what teachers actually use materials for, and raising awareness of personal variation in this respect.
• Documenting the sources of supplementary materials, and promoting a discussion of the reasons for their use, such as student preferences, teacher beliefs, cultural value, topicality and humour.
• Engaging with the informal judgements teachers necessarily make of classroom processes, and leading teachers to examine the basis for the validity of these judgements.
• Providing a context for teachers to observe the practices of others (albeit indirectly) and thus stimulate reflection and professional learning.

Evaluation of a curriculum innovation

Context
(This is adapted from an evaluation carried out by one of the authors (Kiely) and colleagues at Thames Valley University, and published as Clark et al. 1996.) A group of four teachers on a university foreign language and area studies program develop an integrated curriculum where language knowledge and skills and area studies are taught and assessed together. There are two two-hour classes per week, each of which includes activities to promote the development of language skills and understanding of the social and cultural issues of the area studied. Assessment is by a group project which involves an oral presentation and a written report.

Evaluation questions
• To what extent does the integrated curriculum provide opportunities for learning in the foreign language and in area studies?
• In what ways do students develop language skills and understanding of the area studied?
• How can the integrated curriculum be improved?
• How does a focus on content stimulate language skills development?
Data collection and analysis
Data collection is in three phases:
1. A questionnaire study to determine interests and learning preferences.
2. A series of interviews with volunteer students (eight students interviewed every two weeks in Weeks 4–10 of the twelve-week program) to document learning activities outside lessons.
3. A group discussion procedure – nominal group technique (NGT) – at the end of the program, after the oral presentations but before completion of the written report (see chapter 10 for a detailed account of the NGT procedure).

Evaluation issues
This evaluation project provides an opportunity to engage with the following issues:
• Providing a context for discussion of the design of this integrated curriculum once the initiative has moved from the design to the implementation stage.
• Generating a focus on materials evaluation – how language tasks facilitate content learning, and how area studies input supports language skills development.
• Generating a similar focus on assessment issues: transparency (for example, students' perceptions of group assessment), practicality (for example, the time demands of assessment by oral presentation) and validity (for example, the extent to which the skills focused on in the program correspond to those required by the assessment performance).
• Establishing a data-based understanding of how students use a range of resources in a university learning context.
• Providing, through the use of the nominal group technique, a shared context for teachers and students to discuss the program.
• Facilitating the development of research projects, an aspect of professional practice which is valued in university teaching contexts.
• Providing evidence, for quality management purposes, of a grounded, learning-centred approach to developing the curriculum.

Evaluation of a research methods course in a teacher education program

Context
(This is adapted from an evaluation carried out by the authors at the University of Bristol which was part of a larger study published as Kiely et al. 2004.) A course coordinator and tutor on a postgraduate teacher education research methods course want to evaluate the various forms of support provided to students to facilitate the writing of dissertations. The forms of support, in addition to the taught course of the program, include:

• A taught research methods course.
• A series of dissertation seminars.
• A series of workshops on data analysis techniques and software.
• Consultations with a dissertation supervisor.
• Talks, seminars and presentations in the research centres of the university.
An initial survey of students' experiences of learning to research focused on the taught research methods provision. This study did not inform significantly on the dissertation experience, which takes place after the taught elements of the program. This evaluation therefore took as its focal point the independent learning leading to the completion of the dissertation.

Evaluation questions
• To what extent does the support for dissertation-writing meet students' needs?
• In what ways do students develop the skills required for writing the dissertation?
• How can the program provision in this area be improved?
• How do learning dispositions (Carr and Claxton 2002) facilitate successful dissertation-writing?

Data collection and analysis
Data collection is in three phases:
1. Case studies of six students, who volunteer to interview each other at three- to five-week intervals over the four months of the dissertation-writing period (an induction to the study and a training workshop in interviewing were provided, and at the end a concluding workshop identified key issues and informed on the role of the evaluation process in writing the dissertation).
2. An email questionnaire filled in by each student on the program at the point of submitting the dissertation.
3. A review of dissertation grades and reports.

Evaluation issues
This evaluation project provides an opportunity to engage with the following issues:
• Informing on program provision through the lens of the student experience, rather than the actual provision – the life of the student in the program, rather than the program in the life of the student (Kushner 1996).
• Generating a focus on the dissertation-writing process – the student interviewing is revealing here, since the questions asked, as well as the answers, reflect actual experience of the focus of the evaluation.
• Providing an opportunity to develop a participatory approach to program evaluation, and thus to an understanding of perceptions of stakes and stakeholding.
• Establishing a data-based understanding of how students engage with a complex problem-solving task such as writing a dissertation, and the range of resources they call on.
• Providing evidence, for quality management purposes, of a capacity to support research-learning in ways students find effective.
14.5 Summary
In this chapter we have explored some issues in the design and implementation of teacher-led evaluations. Our account of purposes and process in such evaluations is exemplified by sample projects, which teachers can use as starting points for initiating evaluations relevant to the pedagogical issues and opportunities in their own contexts. We emphasise three features of teacher-led evaluations. They are likely to be most effective when (1) they are linked to shared teaching and learning concerns; (2) they relate to valued professional learning; and (3) they are constructed as opportunities by teachers rather than as obligations or impositions.
15 Management-led Evaluation Projects
15.1 Introduction
Much of the discussion in Parts 1 and 2 has been about the relationship between program management and evaluation. In this chapter, we set out frameworks and sample evaluation designs which readers can use to embark on evaluations which aim to understand program management issues and improve program management processes. A management lead in designing evaluations is important for four reasons:

1. Current concerns for quality in many program contexts emphasise program ownership at institutional level.
2. Many programs are implemented according to mandates or approval from external bodies, which in turn require programs to meet specified standards and requirements (see chapter 3 above).
3. Knowledge-building and policy development related to teaching and learning are likely to be more effective where there is involvement of program management.
4. Management involvement in evaluation activity may develop and facilitate links between program evaluation and the management processes of performance assessment and professional development.
15.2 The scope of management-led evaluations
We identify four principal evaluation purposes which interface with the program management task. These are set out in Table 15.1, with related evaluation constructs and applications.
15.3 Evaluation projects
Getting started

We set out two approaches to initiating management-led program evaluation: (1) the adaptation of a Management Information System (Patton 1995) to a language program; and (2) a framework for developing the use of course evaluation data.
Table 15.1 Evaluation purpose, construct and use

1. Evaluation purpose: To evidence the quality of the learning experience.
   Evaluation construct: The view of quality from the institution or wider context.
   Evaluation use: Demonstration of compliance with mandates; informing ongoing development of mandate specifics.

2. Evaluation purpose: To facilitate improvement of the course.
   Evaluation construct: The curriculum as a set of learning aims, activities, resources, and innovations, such as materials and ICT.
   Evaluation use: Identification of appropriate changes; identification of areas for further enquiry, for example learning materials, ICT resources, induction and learner training, and assessment formats.

3. Evaluation purpose: To facilitate reflection, professional learning and teacher development.
   Evaluation construct: Opportunities for teachers to learn through reflective practice.
   Evaluation use: Teachers' use of evaluation data to develop individual interpretations and responses in terms of professional action.

4. Evaluation purpose: To inform formal teacher evaluation processes.
   Evaluation construct: Institutional frameworks for promotion, renewal of contracts, and awards of resources for special projects (such as the further enquiry activities listed under evaluation use above).
   Evaluation use: Institutions' use of evaluation data as part of making professional judgements about teachers and other program practitioners.
1 Developing a Management Information System to assist evaluation

Patton's utilisation-focused evaluation sets out a form of Management Information System (MIS) which provides a useful first stage in data-gathering as part of routine program management (see Table 15.2). The growing role of information technology in administrative systems facilitates the gathering and compilation of such data in language programs. For much of the information in Table 15.2, problems in collating and sharing data are unlikely. For example, participation levels may involve class attendance lists, a practice which is increasingly routine in language programs. Caseloads, on the other hand, might require teachers and teacher trainers to provide detailed information on the support they provide to individual students.
Table 15.2 Patton's Management Information System, applied to language program contexts (Patton's Program MIS → data for language programs)

• Client intake → Recruitment patterns: demographic information; language proficiency levels; language learning experience; reasons for participating in the program.
• Participation levels → Participation patterns: attendance in class; use of independent learning facilities; participation in interaction; effort.
• Programme completion rates → Completion (outcomes) patterns: learning outcomes (test results); satisfaction outcomes (evaluation findings).
• Caseloads → Caseload patterns: teacher/student ratio; actual time required to support each student; value for learning of curriculum activities, such as tutorials, language centre management and learner training.
• Client characteristics → Client characteristic patterns: curricular impact of demographic factors; curricular impact of learning culture factors; impact of client characteristics on caseload patterns; impact of marketing activity on client characteristics.
• Programme costs → Cost patterns: financial information; costs other than financial (time, administrative load, etc.); cost–benefit analyses.
This might be viewed as time-consuming and bureaucratic, as not beneficial to current students, and as intrusive in a way that chips away at professional autonomy. The task for management here is to ensure that such information gathering is part of an ongoing process of understanding and improving the program. Program managers can use this information in three ways:

1. Identifying program strengths and weaknesses.
2. Planning and implementing program innovations.
3. Constituting baseline data for further evaluation or action research studies.
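Where administrative systems allow, the categories in Table 15.2 can be brought together in a small program database. The sketch below (in Python, using the standard sqlite3 module) is illustrative only: the table and field names are our assumptions, not a prescribed design, and any real implementation would need to address the data protection, access and ownership questions that follow.

import sqlite3

# Illustrative sketch of a minimal program database along the lines of Table 15.2.
# Table and field names are assumptions for the example only.
conn = sqlite3.connect("program.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS students (
    id INTEGER PRIMARY KEY,
    entry_level TEXT,          -- language proficiency on entry
    prior_learning TEXT,       -- language learning experience
    reason_for_joining TEXT    -- reason for participating in the program
);
CREATE TABLE IF NOT EXISTS attendance (
    student_id INTEGER REFERENCES students(id),
    lesson_date TEXT,
    attended INTEGER           -- 1 = present, 0 = absent
);
CREATE TABLE IF NOT EXISTS outcomes (
    student_id INTEGER REFERENCES students(id),
    test_score REAL,           -- learning outcomes (test results)
    satisfaction INTEGER       -- satisfaction outcomes (e.g. a 1-5 rating)
);
""")

# One pattern a program manager might examine: attendance rates alongside outcomes.
query = """
SELECT s.id,
       AVG(a.attended)   AS attendance_rate,
       AVG(o.test_score) AS mean_score
FROM students s
LEFT JOIN attendance a ON a.student_id = s.id
LEFT JOIN outcomes   o ON o.student_id = s.id
GROUP BY s.id;
"""
for row in conn.execute(query):
    print(row)
conn.close()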
In each context where program teams wish to establish such a database, and to use the information for evaluation and development purposes, some important questions need to be addressed. We see exploration of these questions as a key starting point in evaluation for management purposes. The questions fall into three categories:

Administration
1. Which data are already collected?
2. Are they accessible by administrators, teachers and managers?
3. Are there data protection and data security issues which affect access?

Roles and responsibilities
4. Who collects and analyses these data?
5. Who uses these data for developing programs, and for other purposes such as external presentation of the program?
6. Who maintains the database over time?

Use and benefit
7. What modes of reporting are involved?
8. What opportunities for discussion of data are there?
9. What is the cost–benefit case for maintaining such a program database?

A key feature of a program database is the collection of data over time. As the database grows, provided access to the data is not hampered by its volume, it becomes more useful. For example, if a language program manager carries out a one-off study to find out how one cohort of participants heard about the program, the findings will provide a limited account. If such information is routinely provided in application forms, but not collated, it will be of limited use. However, if the information is routinely entered into the database as participants are admitted, then it is likely to contribute substantially to the development of a marketing strategy.

2 Enhancing course evaluation data

In many program contexts a management perspective on the actual program experience comes through program evaluation findings, such as an end-of-course questionnaire (see chapter 10). Examining such findings and reports in isolation is unlikely to afford a clear picture of program strengths and weaknesses. One way forward is to identify other evaluation data sets which can be developed for a more comprehensive account of the program as a learning experience. Table 15.3, adapted from a framework for evaluation within PRINCE, a British Council program which supported the innovative Polish language teacher education scheme, proposes lines of development to maximise the value of evaluation activity (Kiely 1994).
Table 15.3 Evaluation for improvement in a language teacher education program
Evaluation data sources:
• Students' views: questionnaires; interviews; meetings; journals; virtual learning environments.
• Students' learning outcomes: coursework; tests.
• Colleagues' views.
• Stakeholder views.

Lines of development for evaluation:
• Teaching → Learning
• Individual → Team
• Practice → Syllabus/Method distinction
• Practical issues → Theory
• Attitudes → Synthesis of attitudinal data and performance data
• Outcome → Overall program development

(adapted from Kiely 1994)
This was a framework for managers and teacher trainers in different colleges where the use of course evaluation questionnaires was established. The strategies listed as lines of development can guide the planning of evaluation activity and serve as a checklist for monitoring practice. The five strategies listed are important in the following ways:

Teaching → Learning
It is important to focus on how students or trainees are doing when we are designing evaluation instruments and collecting data. This establishes the criterion of achievement in learning as central when students are considering responses to a questionnaire or in a discussion. Thus, when rating performance, students are considering their own learning rather than the performance of the teacher. (See chapter 10 for a detailed discussion of program evaluation by students.)

Individual → Team
Within programs there may be several strands, units or modules. Where these are evaluated separately, the series of partial accounts of the student learning experience may not be brought together. A team approach benefits
the evaluation instruments and the data: the former are likely to be improved by trialling and extensive use, so that over time the data can have a validating function. Team involvement is also likely to enhance the analysis and interpretation of data, and the use of the findings to improve the program.

Practice → Syllabus/Method distinction
A language program is both a plan – the syllabus – and its implementation – the method. An important aspect of determining strengths and weaknesses is distinguishing between the two. In the context of a curriculum innovation which is proving problematic, it is necessary to understand which aspects of the problems derive from the plan, and which from its implementation. There are many anecdotal accounts in program evaluation of innovative, communicative materials being used for grammar-translation-type classroom practice. Landmark evaluations discussed in the previous chapters, for example the Lawrence evaluation (see section 4.2, p. 56 above and chapter 5 above) and the EELTS evaluation in Hong Kong (see chapter 7), failed to clarify whether it was the planned curriculum or the manner of implementation in classrooms which was inappropriate. We consider that such questions are best explored within programs and institutions, where the complex dynamics of syllabus and method can be engaged with.

Practical issues → Theory
By theory here we mean the rationale for the program as set out initially at the work plan stage. Questions which may need to be addressed are:
• Is the original thinking still valid?
• How can we find a solution which follows this rationale?
• Does the experience of implementation critique the general rationale in a novel way?

Attitudes → Synthesis of attitudinal and other data
Student feedback often has a substantial satisfaction element, which relates to the performance of the teacher (see chapter 10 for a discussion of these issues in the evaluation research of Marsh (1987) and others in the American higher education system). We propose, as part of the evaluation process, a discussion of what the satisfaction data relate to, and of which other data – such as data on student background and contribution to the program, and test or assessment results – might contribute to understanding the satisfaction construct.
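One practical way to open this discussion is to set the satisfaction ratings alongside other data for the same students. The sketch below (in Python) is purely illustrative: the data are invented, and the simple correlation is offered as one possible starting point for discussion, not a recommended analysis of the satisfaction construct.

# Illustrative sketch: relating end-of-course satisfaction ratings to assessment
# results for the same students. All values are invented for the example.
satisfaction = {"S1": 4, "S2": 2, "S3": 5, "S4": 3, "S5": 4}    # 1-5 ratings
test_scores = {"S1": 68, "S2": 55, "S3": 74, "S4": 61, "S5": 70}

def pearson(xs, ys):
    """Pearson correlation coefficient for two paired lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

students = sorted(satisfaction)
r = pearson([satisfaction[s] for s in students], [test_scores[s] for s in students])
print(f"Satisfaction vs test score: r = {r:.2f} (n = {len(students)})")
# The figure itself explains nothing; it is a prompt for the team discussion of
# what the satisfaction data relate to.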
Accommodating ethical concerns
Management-led evaluations assume teacher and manager collaboration in program evaluation. However, this shared commitment is likely to be held from different perspectives, shaped by differing stakes and interests, and articulated from different levels in the organisational hierarchy. A key ethical requirement, therefore, is to situate evaluation processes so that their outcomes and impact are not skewed by interests other than program improvement. We see this broad ethical requirement as having four dimensions, which can be addressed in the context of evaluation design and practice through four questions:

1. Is program evaluation adequately resourced in terms of time and data management requirements?
2. To what extent and in what ways is the evaluation process owned and shaped by those participants involved in day-to-day program activity, with more remote stakeholders playing a supportive and responsive role?
3. How is the interface managed between program evaluation processes and outcomes on the one hand, and management activities such as promotion of staff and contract renewal on the other, and to what extent is this considered satisfactory by program personnel?
4. Is evaluation activity seen by teachers and students as promoting learning, or as taking time and other resources away from the learning enterprise?

There is inevitably some overlap with the ethical requirements of other forms of evaluation: the discussion in chapter 13, in particular, addresses ethical issues in evaluation which inform the integration of program evaluation and management.
15.4 Sample projects
In this section we set out three evaluation designs which have been undertaken from a program management perspective. The first focuses on the evaluation of an innovation; the second on the evaluation of resource centres; and the third on the evaluation of a Virtual Learning Environment. Each evaluation is presented in summary form, with references to the fuller account. We invite readers who find that these evaluations resonate with their professional responsibilities to use these accounts in any of three ways:

1. To carry out replication studies.
2. To adapt the evaluation strategy to better meet the needs of their context.
3. To enquire into the evaluation process itself.

Evaluating an innovative EAP program

Context
Jacobs (2000) presents an evaluation which focuses on the management of change in a Learning in English for Academic Purposes (LEAP) program in
the South African tertiary education sector. The evaluation had a constructivist orientation, reflected formative, summative and illuminative purposes, and generated a ten-stage procedure for the evaluation of innovative programs (see Table 15.4). Jacobs's analysis of the evaluation identifies three broad purposes: (1) to understand the ways LEAP as an innovation changed the curriculum, teaching and learning; (2) to understand the ways the institutional context (social, political and economic climate) impacted on LEAP; and (3) to identify and involve all LEAP stakeholders in the evaluation.

Table 15.4 Stages in the process of evaluating educational innovation
STAGE 1: Locate the innovation within the context and policy framework of its operation.
STAGE 2: Determine the goals of the evaluation.
STAGE 3: Identify the principal stakeholders from all relevant constituencies.
STAGE 4: Identify the aspects of the innovation to be evaluated.
STAGE 5: Determine criteria for evaluating aspects of the innovation.
STAGE 6: Decide on the best sources of information.
STAGE 7: Decide on evaluation methods to be used.
STAGE 8: Collect data from sources.
STAGE 9: Analyse and interpret the data.
STAGE 10: Disseminate the evaluation findings.

(Reproduced from C. Jacobs (© C. Jacobs 2000) by permission of Sage Publications Ltd.)
Evaluation questions
• To what extent was the LEAP program effective?
• What teaching and learning practices were developed by the LEAP program?
• What were the attitudes of the participants towards LEAP?

Data collection and analysis
Data were collected in six ways:
1. A questionnaire to all students.
2. Interviews with a sample of these students.
3. Observation in LEAP classrooms.
4. Review of documentation.
5. Meetings and discussion fora with students and teachers.
6. Scrutiny of test results.
Evaluation issues
We identify five evaluation issues relating to the use of evaluation to facilitate and understand the management-of-change aspects of complex language programs such as LEAP, and from these we develop questions for evaluators in other contexts to consider.

Table 15.5 Evaluating innovation – issues

Issue 1: In taking a constructivist position on both the purpose and process of the evaluation, Jacobs develops a focus both on the design and outcomes of LEAP, which she views as summative evaluation principally for institutional managers and policy-makers, and on the operation of the program, in terms of the understandings and practices of teachers and students.
Further questions: Is the Jacobs model likely to prove useful for evaluations with both accountability and development purposes? Is it feasible, using the stages of the Jacobs model, to focus on either accountability or development?

Issue 2: The inclusive approach to data-gathering (both qualitative and quantitative data are used) provides different stakeholders with valid, credible and usable accounts. This contrasts with the narrower approach to data gathering in the EAP program evaluations described in chapters 9 and 10 – in that context only limited evaluative accounts filtered through to institutional managers and policy-makers.
Further questions: Is the purpose of your evaluation a one-off study of an innovation, or an evaluation integrated into routine teaching and management tasks? Are the resources available to you sufficient for a two-year study such as Jacobs'? If not, which stages might be skipped? Which stages of the Jacobs model might usefully augment an approach such as that described in chapter 10?
Issue 3: The qualitative data generated findings which informed on the implementation of the LEAP program in terms of student needs, classroom practice and professional development for teachers. Thus, this aspect of the evaluation process constituted a service to management, in ways similar to the ethnographic case study element of the EELTS evaluation described in chapter 7.
Further questions: In what ways might evaluations of these three aspects of the program (student needs, classroom practice and professional development for teachers) assist program development? Are there confidentiality or ethical issues with this use of evaluation data, for example, the use of classroom practice data for teacher professional development?

Issue 4: Two types of quantitative studies were undertaken: (1) a rigorous pre- and post-test design involving experimental and control groups and comparisons on three variables, and (2) an examination of 'patterns of benefit when all the quantitative data from the summative phase were pooled' (Jacobs 2000: 277). The results of the former appeared to be inconclusive, while those of the latter showed a consistent pattern of benefit. Jacobs comments: 'All the summative data, such as LEAP test scores, the LEAP throughput figures, the LEAP pass rates and the independent measures of outcome, showed that the LEAP participants had benefited as a result of their participation. However, the outcome of the LEAP summative evaluation seems to suggest that quantitative measures should be used circumspectly when evaluating educational innovation. Attempts at rigour often create other confounding variables. Researchers should display their awareness of these inherent limitations at the start of the evaluation and reflect the kinds of quantitative data that can be used.' (Jacobs 2000: 277)
Further questions: Does the discourse of experimental rigour prevail in your language program context? Is it possible to work with the 'pooled summative data' approach from the start, that is, identifying the quantitative data sets to be used? Which qualitative data sets might be used to assist analysis of test scores, throughput figures, pass rates and other measures of outcome? Jacobs discusses the limitations of quantitative data for the evaluation of educational innovation. Do qualitative data have limitations that might be problematic in your context?

Issue 5: This evaluation placed emphasis on the contribution of program participants to the evaluation, particularly in relation to its formative purpose. The model facilitated 'shared understandings' and 'cross-pollination of ideas' within the program, and provided a 'space for reflection on macro issues' (institutional policies and discourses) and how these impacted on micro issues (teaching and learning practices) in the LEAP program (Jacobs 2000: 278).
Further questions: Does 'formative' refer to innovation management, or to EAP program practices? Or both? Does Jacobs' account of formative evaluation (shared understandings, cross-pollination of ideas, space for reflection on macro and micro issues) reflect the purpose of formative evaluation in your context? Is Jacobs' account likely to lead to improvement in your program context?
Evaluation of English language teaching resource centres (Reid 1995)

Context
Reid (1995) presents an evaluation which focused on the management and use of resource centres (RCs) in different countries in Eastern and Central Europe as part of the Project Development and Support Scheme (PRODESS; see chapter 4 above). In chapter 13 we discuss this study as a large-scale evaluation, and focus on the operationalisation of the Terms of Reference. The discussion here is on the engagement with resource centre management. The areas of RC management included:

• operating contexts
• staffing
• management
• activities and facilities
• liaison
• publicity
• outreach activities
• user take-up
The evaluation was commissioned by the sponsors (the British Council) to inform on both the effectiveness and patterns of use of the resource centres, and the ways in which the management of RCs could be further developed. Reid developed a case study approach which derived from a theoretical account of what constituted a good resource centre (Little 1989), and examined what this involved in each RC context. Reid's study was framed by three evaluation questions:

1. Which activities should characterise an ELT resource centre?
2. Which services to students, teachers and trainers can be provided by RCs?
3. To what extent are these achieved and achievable in the different RC contexts?
These questions illustrate the management consultancy orientation of the evaluation: in addition to determining what has been achieved (3), the study also addressed what is achievable in each RC context (3), what should be done (1) and how it might be done (2). We consider such a developmental approach to be a key characteristic of management-led evaluations of innovative resources in language program contexts.

Data collection and analysis
Reid took Little's (1989) account of independent learning as her starting point. These principles were used as sentence stems (see Table 15.6) for data collection and analysis. The data were related to three levels of service which RCs might provide:

1. Provision of materials (paper, audio, video, electronic) to support language learning and teaching and teacher training and development.
2. Workshops and training sessions in the RC to develop use of these materials.
3. Outreach services to ELT contexts and institutions to provide context-based support for use of RC materials.

These different levels of activity permitted, on the one hand, a focus on the performance of each RC in its own context and, on the other, sustained attention to the wider agenda in the recommendations outlined for each RC. Reid (1995) used the sentence stems below to frame data collection through interviews, documentation review, observation in RCs, and examination of RC stock and documentation on procedures.
Table 15.6 Framework from Little (1989) for Reid (1995) RC evaluation

A good centre is one . . .
(a) which fully exploits the potential of its context;
(b) where the accommodation available is suitable for the activities carried out and is used to best effect;
(c) where clear membership targets are established, such targets are met or there is evidence that membership is growing satisfactorily towards them;
(d) where the fullest use is made of staff resources and expertise, and training is tailored to individual and centre needs;
(e) where structures exist for relevant stakeholders to make an impact, and where reporting lines are clear;
(f) where there are processes in place to ensure the stock reflects users' needs, that stock is labelled in a user-friendly way, and that all stock 'earns its keep' in terms of usage;
(g) where a range of facilities is offered (audio, video, photocopying, CALL materials) including the facility for staff to record and manipulate data relating to the catalogue, loans and membership with minimum expenditure of staff time;
(h) where all the centre's facilities are used and procedures exist to monitor use effectively;
(i) which provides a program covering a wide range of interests, fully utilises available expertise, and which attracts to its activities a good attendance in terms of the targets aimed for;
(j) where a network of relationships has been established with relevant colleagues and organisations, and evidence of collaboration can be found;
(k) which uses a range of strategies to publicise its activities and raise its profile, has tried to establish the size of its target groups and has effective means of reaching them, has attractive publicity material, and has a policy of publicity, promotion and service enhancement which contributes to a wider business plan;
(l) where the users demonstrate a positive attitude towards it, and are able to identify clearly advantages from using it;
(m) where there is a planned, coherent evaluation strategy, with the findings of any evaluation being used in the formulation of future decision-making and action.
The Reid approach used the Little framework as a model for exploration, rather than as a set of performance indicators. The orientation to future development, and the commitment to evaluating each RC in terms of the activities it was undertaking, countered any rigidity which might derive from the use of fixed criteria or indicators.

Evaluation issues
We identify six evaluation issues for RC evaluation and from these develop questions for evaluators in other contexts to consider. If you are evaluating a resource centre for language teachers, you can replicate the Reid study, or adapt it through analysis of the particular characteristics of your own context.

Table 15.7 Evaluating resource centres – issues
Issue 1: The Reid evaluation was not concerned to make a judgement of worth on the policy of developing English language teaching resource centres. Rather, it was acknowledged that the policy was part of a wider discourse, and the evaluation task was largely about the identification of appropriate strategies.
Further questions: Is this the case with your resource centre evaluation, or is there a particular focus on achievements and value for money? Is the policy likely to prove enduring regardless of the evaluation outcome (as described in chapters 7 and 8 above)?

Issue 2: This evaluation took an established model of a 'high quality' RC as its starting point. Although this proved a useful starting point in this case, largely due to the range of aspects of RC activity addressed by the Little model, use of a 'received' conceptual framework may constitute an imprecise account of the program in context. It is not stated that the Little framework had informed the RC policy, or had informed developmental activity in each RC.
Further questions: Is there an existing account of a 'quality' RC available? Is it feasible to use a grounded approach, i.e. to uncover internal, stakeholder accounts of appropriate and useful RC activity? When a theoretical framework is used to frame data collection and analysis, is it one which has informed policy or practice in that context? Where a theoretical framework, such as Little (1989), is used, is it possible to draw on a self-assessment (such as Reid's levels of service) as well?
Issue 3: The case study approach of the Reid evaluation made a contribution to the management of RCs at two levels: first, at the level of practice, it informed on effectiveness and on opportunities for development in the particular context. Second, it provided an overview of the RC initiative, and guidelines on how policy on this language curriculum resource might be developed.
Further questions: Is it possible in your evaluation to work with these twin purposes? Is the result of the evaluation intended (explicitly or implicitly) as a judgement of worth of the policy (such as the EELTS evaluation in chapter 7), or as recommendations for improving program practice (such as the PMLP evaluation in chapter 8)?

Issue 4: In recognition of these two purposes, reporting was also at two levels: each RC received an evaluation report based on its own activities and development plan, with suggestions for further development which often derived from successful practices in other RCs. An overview report, with separate RC reports appended, was prepared for the British Council.
Further questions: Is such distributed reporting an option in your context? In what ways might such a separate approach to reporting be considered supportive of development in each RC? Would a more transparently comparative approach, where all centres were discussed together, facilitate collaborative sense-making and learning?

Issue 5: The Reid study was an external evaluation for (a) an external audience (the British Council) and (b) an internal audience (RC managers). The role for RC personnel and users was that of informants. Point (m) of the Little framework explores evaluation activity within RCs, but there was no planned use of such evaluation data as part of the Reid study.
Further questions: What opportunities are there in your RC context for involving RC personnel and users in the design, analysis and reporting stages of the evaluation? Is there a possibility of developing the points from Little in greater detail so that they constitute a case study framework which each RC can use to carry out its own evaluation (as, for example, done in Mackay (1994) for language centres in Indonesian universities, and in Crossley and Bennett (1997) in primary schools in Belize)?
Issue 6: This evaluation worked with a series of separate components of RC operations as framed by the Little framework. The analysis and reports focused on performance and opportunities for development in relation to these components, with only limited exploration of the links between them. In Realist terms (see section 3.4 above) this represents a focus on the molecular rather than the molar.
Further questions: Taking Point (m) of the Little framework as an example, how could enquiry into this be related to other points, such as exploiting the context (a), the use of staff expertise (d), and marketing and publicity (k)? Which links might be prioritised in such an approach?
Evaluation of a Virtual Learning Environment (VLE) in a teacher education program (Timmis 2004)

Context
Timmis (2004) presents an evaluation which focused on the effectiveness of a VLE and on strategies for its development. The evaluation was commissioned by the program team to take stock of activity after one year of VLE operation, to establish a direction for VLE activities, and to identify the professional development support required. The evaluation had a two-stage, dialogic structure, designed to maximise opportunities for action and further enquiry (see also chapter 12 above for an exploration of stakeholder issues in this evaluation).

Evaluation questions
• What are the strengths and weaknesses of current use of Blackboard?
• What are students' views of current provision?
• Which VLE task types and roles might offer enhanced opportunities for learning?
• What are the teacher and student induction and training needs for enhanced learning using the VLE?

Data collection and analysis
Stage 1: Data were collected in three ways:
1. Review of electronic data available through VLE functions, such as the materials and task types provided, patterns of access to these, and patterns in the use of the discussion board (one way of summarising such tracking data is sketched after this list).
2. Interviews with tutors using the VLE as part of their teaching.
3. Group discussion of the VLE with a group of students, using Nominal Group Technique.
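Most VLEs can export tracking data of this kind as a simple log. The sketch below (in Python) shows one way such an export might be summarised; the log format, field names and categories are assumptions for the purpose of illustration and do not describe the Blackboard data used in the Timmis evaluation.

from collections import Counter

# Illustrative sketch: each record gives a student id, the area of the VLE
# accessed, and the week of the course (format assumed for the example).
access_log = [
    ("S1", "materials", 1), ("S1", "discussion_board", 1),
    ("S2", "materials", 1), ("S2", "materials", 2),
    ("S3", "discussion_board", 2), ("S1", "materials", 2),
]

hits_by_area = Counter(area for _, area, _ in access_log)
active_students_by_week = {
    week: len({sid for sid, _, w in access_log if w == week})
    for week in sorted({w for _, _, w in access_log})
}

print("Hits per VLE area:", dict(hits_by_area))
print("Active students per week:", active_students_by_week)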
Stage 2: A report on Stage 1 was presented at a program team workshop which had three purposes:
1. To elicit teacher responses to the report on Stage 1.
2. To outline strategies for further development and identify professional development needs.
3. To elaborate a plan for further VLE strategies and for ongoing data gathering and evaluation.

Evaluation issues
We identify four evaluation issues for the use of evaluation to contribute to the management of new technologies as additional opportunities for learning, and from these issues we develop questions for evaluators in other contexts to consider. If you are evaluating a VLE in your own context, you can replicate the Timmis study, or adapt it through analysis of the particular characteristics of your own context.

Table 15.8 Evaluating VLEs – issues
Issue 1: The evaluation used, with appropriate consents, an existing database in the form of the tracking information on access and interactions afforded by the VLE technology. This facility corresponds to Patton's guidelines for the use in evaluation of existing management information (see Table 15.2).
Questions: Does such a database exist on the use of your VLE? Are there ethical or confidentiality issues involved? What is the best way of analysing such data, in terms of categories and of quantitative and qualitative approaches? Are there possibilities of formatting such information so that key features of VLE use are documented and available for use in routine program management processes?

Issue 2: The evaluation emphasised group processes in the data collection. This feature developed stakeholder involvement and provided for collaborative sense-making, as well as a bridge to action in the form of the Stage 2 program team workshop. This approach thus reflected the advantages of the group discussion processes discussed in chapter 10. An audio recording of the discussion and analysis of the transcript facilitated a validity check in terms of group member participation and the facilitator role.
Questions: What alternatives to group discussions such as NGT are feasible for data gathering in your context? To what extent can Information and Communication Technology (ICT) be harnessed to this task? In distance learning contexts, is the Delphi Technique (a variation of NGT – see Wilcox 1992; Weir and Roberts 1994) an option? What sampling issues are involved: invite all interested students, or identify sampling criteria, such as active and less active users, or high achievers and low achievers?
Issue 3: This evaluation combined two forms of expertise: Timmis provided expertise in VLE use and evaluation, while the program team provided expertise on specific learning aims and the program context. This combination reflects a feature of the design of realist evaluations (Pawson and Tilley 1997; and section 3.4, p. 44 above). It also provided, in this evaluation, for professional learning and skills development – the workshop set out student induction strategies, task and activity types, and techniques for 'virtual community' development.
Questions: Is there a resource in your context for 'buying in' expertise? If there is, what contribution should such expertise make? If not, what can be achieved, in terms of understanding VLEs as a learning resource, and in terms of management of the resource? Is it feasible to link evaluation reporting and staff development in your context in the way these were linked in this evaluation? Is such linking likely to reduce or increase resistance to evaluation (see section 3.2, p. 37 above)?

Issue 4: The data and findings of this small-scale evaluation contributed to a wider evaluation of the VLE provision. Thus, in addition to contributing to strategy at program level, it also contributed to an institution-wide review and policy formation.
Questions: Is the VLE strategy in your program context part of a wider institutional policy or discourse? Are there links between program-based evaluations and wider reviews of ICT policy and provision? To what extent are policy decisions on the role of technologies such as VLEs based on learning and teaching experiences within programs?
15.5 Summary
In this chapter we have set out two frameworks for initiating management-led evaluation. The first provides for a program database which, as it grows across programs and over time, provides an opportunity to engage with the complexity of language programs. The second sets out a framework for moving beyond course satisfaction feedback to develop a finer-grained understanding of how the program works. The three sample studies presented illustrate approaches to the evaluation of an innovative EAP program in one institution, the evaluation of resource centres established by the British Council in countries in Eastern and Central Europe, and the evaluation of a Virtual Learning Environment in a British university. In each case the evaluation issues are presented with questions which readers may find useful in using these sample studies as scaffolds for the design and implementation of their own evaluations. These evaluations provided opportunities for learning and development within programs and, in terms of program management, they furthered understanding of the contribution of wider institutional policy on resources and support for learning.
Part 4 Resources
16 Resources for Language Program Evaluation
16.1 Introduction
In this chapter we set out further resources for language program evaluation. These cover the paper and electronic material which program evaluators will find useful for engagement with issues of evaluation theory, purpose and practice, examples of instrumentation and data analysis, and evaluation reports. The inherent interdisciplinarity of language program evaluation means that many of these resources fall into two categories:

1. Resources which focus on language programs, but which provide support for evaluators through theoretical and professional perspectives on such programs.
2. Resources which focus on evaluation more generally, and have the potential to inform evaluation activity in language program evaluation through perspectives and insights from evaluations of other educational and social programs.

We also see these resources as illuminating three other interfaces in understanding and developing language program evaluation:

1. Evaluation and research
2. Evaluation and program management
3. Evaluation and pedagogy

All evaluations negotiate these interfaces, both in terms of the resources drawn on in design and implementation, and in terms of dissemination and use of evaluation findings.
16.2 Books
Alderson, J. C. and A. Beretta (Eds.) 1992. Evaluating Second Language Education. Cambridge: Cambridge University Press.
Alderson, J. C., C. Clapham and D. Wall 1995. Language Test Construction and Evaluation. Cambridge: Cambridge University Press.
Allison, D. 1999. Language Testing and Evaluation: an Introductory Course. Singapore: World Scientific Publishing.
Allwright, D. and K. Bailey 1991. Focus on the Language Classroom. Cambridge: Cambridge University Press.
Anivan, S. (Ed.) 1991. Issues in Language Programme Evaluation in the 1990's. Anthology Series. Singapore: RELC. Available online from ERIC at www.eric.ed.gov.
Aspinwall, K., T. Simkins, J. Wilkinson and M. McAuley 1992. Managing Evaluation in Education: A Developmental Approach. London: Routledge.
Bailey, K. M. and D. Nunan (Eds.) 1996. Voices from the Language Classroom. Cambridge: Cambridge University Press.
Bardi, M., G. Chefneux, D. Comanetchi and T. Magureanu 1999. Innovation in Teaching English for Specific Purposes in Romania: A Study of Impact. Bucharest: British Council/Cavallioti Publishing House.
Bell, J. 1987. Doing Your Own Research Project. Milton Keynes: Open University Press.
Brumfit, C. and R. Mitchell 1989. Research in the Language Classroom. London: Modern English Publications in association with the British Council.
Celani, M. M. A., J. L. Holmes, R. C. G. Ramos and M. R. Scott 1988. The Brazilian ESP Project: An Evaluation. Sao Paulo: Editora de PUC-SP.
Cronbach, L. J. 1982. Designing Evaluations of Educational and Social Programs. San Francisco: Jossey Bass.
Edge, J. and K. Richards 1993. Teachers Develop Teachers Research. Oxford: Heinemann.
Eisner, E. 1985. The Art of Educational Evaluation. London: Falmer Press.
Fullan, M. 1992. Successful School Improvement. Buckingham: Open University Press.
Fullan, M. 1993. Change Forces. London: Falmer Press.
Fullan, M. G. 1991. The New Meaning of Educational Change. London: Cassell.
Gitlin, A. and J. Smyth 1989. Teacher Evaluation: Educative Alternatives. London: Falmer Press.
Graves, K. 1996. Teachers as Course Developers. Cambridge: Cambridge University Press.
Guba, E. and Y. Lincoln 1989. Fourth Generation Evaluation. Newbury Park: Sage.
Herman, J. L., L. Lyons Morris and C. Taylor Fitz-Gibbon 1987. Evaluator's Handbook. Newbury Park: Sage.
Hopkins, D. 1985. A Teacher's Guide to Classroom Research. Milton Keynes: Open University Press.
Hopkins, D. 1989. Evaluation for School Development. Milton Keynes: Open University Press.
House, E. 1993. Professional Evaluation. London: Sage. Johnson, R. K. (Ed.) 1989. The Second Language Curriculum. Cambridge: Cambridge University Press. Joint Committee on Standards for Educational Evaluation 1981. The Standards for Evaluation of Educational Programs, Projects and Materials. New York: McGraw-Hill. Kemmis, S. 1986. Seven principles for programme evaluation in curriculum development and innovation. In E. R. House (Ed.), New Directions in Educational Evaluation. London: Falmer Press. Kiely, R., D. F. Murphy and P. Rea-Dickins (Eds.) 1994. Evaluation in Eastern and Central Europe – Papers of the First PRODESS Colloquium. Manchester: British Council. Kiely, R., D. F. Murphy, P. Rea-Dickins and M. I. Reid (Eds.) 1995. Evaluation in Planning and Managing Language Education Projects – Papers of the Second PRODESS Colloquium. Manchester: British Council. Kiely, R., D. F. Murphy, P. Rea-Dickins and M. I. Reid 1995. PRODESS Evaluation Guidelines. Manchester: British Council. King, J., L. Morris and C. T. FitzGibbon 1987. How to Assess Program Implementation. Newbury Park: Sage. Lacey, C., B. Cooper and H. Torrance 1998. The Evaluation of Large-Scale Development Projects: A Handbook for Evaluators Based on the Evaluation of the Andhra Pradesh Primary Education Project (APPEP). Brighton: University of Sussex Institute of Education. Legutke, M. and H. Thomas 1991. Process and Experience in the Language Classroom. Harlow: Longman. Li, E. and G. James (Eds.) 1998. Testing and Evaluation in Second Language Education. Hong Kong: Language Centre, Hong Kong University of Science and Technology. Lynch, B. 1996. Language Programme Evaluation. Cambridge: Cambridge University Press. Lynch, B. 2003. Language Assessment and Programme Evaluation. Edinburgh: Edinburgh University Press. Markee, N. 1997. Managing Curricular Innovation. Cambridge: Cambridge University Press. McCormick, R. (Ed.) 1984. Calling Education to Account. London: Heinemann Educational Books in association with The Open University Press. McDonough, J. and C. Shaw 1994. Materials and Methods in ELT. Oxford: Blackwell. McGrath, I. 2002. Materials Evaluation and Design for Language Teaching. Edinburgh: Edinburgh University Press. Millman, J. and L. Darling-Hammond (Eds.) 1990. The New Handbook of Teacher Evaluation. California: Corwin Press. Norris, N. 1990. Understanding Educational Evaluation. London: Kogan Page/CARE, University of East Anglia.
Palumbo, D. (Ed.) 1987. The Politics of Program Evaluation. Sage Yearbooks in Politics and Public Policy. London: Sage. Patton, M. Q. 1997. Utilization-Focussed Evaluation. Thousand Oaks: Sage. Pawson, R. and N. Tilley 1997. Realistic Evaluation. London: Sage. Peck, A. and D. Westgate 1994. Language Teaching in the Mirror. London: CILT. Pennington, M. C. (Ed.) 1991. Building Better English Language Programs: Perspectives on Evaluations in ESL. Washington, DC: National Association for Foreign Student Affairs. Rea-Dickins, P. and Germaine, K. 1992. Evaluation. Oxford: Oxford University Press. Rea-Dickins, P. and Germaine, K. (Eds.) 1998. Managing Evaluation and Innovation in Language Teaching. Harlow: Addison-Wesley Longman. Rea-Dickins, P. and A. F. Lwaitama (Eds.) 1995. Evaluation for Development in English Language Teaching: Review of ELT 3(3). Basingstoke: Modern English Publications/British Council. Roberts, C., C. Garnett, S. Kapoor and S. Sarangi 1992. Quality in Teaching and Learning. Sheffield: Training, Enterprise and Education Directorate, Department of Employment. Saunders, M. 1998. RUFDATA: A Practical Framework for Evaluation. London: British Council. Simons, H. (Ed.) 1981. Towards a Science of the Singular. Norwich: Centre for Applied Research in Education, University of East Anglia. Stake, R. 1995. The Art of Case Study Research. Thousand Oaks: Sage. Stenhouse, L. 1975. An Introduction to Curriculum Research and Development. Oxford: Heinemann. Taylor Fitz-Gibbon, C. and L. Lyons Morris 1987. How to Design a Program Evaluation. Thousand Oaks: Sage. Thompson, R. M. 1995. Participatory Project Preparation and Appraisal at the World Bank: The Ugandan Private Sector Competitiveness Project, Team Technologies. http://www.teamusa.com/part.pdf. Wallace, M. 1998. Action Research for Language Teachers. Cambridge: Cambridge University Press. Weir, C. J. and Roberts, J. 1994. Evaluation in ELT. Oxford: Basil Blackwell. White, R. 1988. The ELT Curriculum. Oxford: Blackwell. Woods, D. 1996. Teacher Cognition in Language Teaching. Cambridge: Cambridge University Press.
16.3 Journals
Journals with a language program focus
Annual Review of Applied Linguistics http://www.journals.cambridge.org/journal_AnnualReviewofAppliedLinguistics Applied Linguistics http://applij.oupjournals.org/
Assessing Writing http://www.sciencedirect.com/science/journal/10752935 Computer Assisted Language Learning http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&issn=0958-8221 English for Specific Purposes http://www.sciencedirect.com/science/journal/08894906 ELT Journal http://www3.oup.co.uk/eltj/ International Journal of Applied Linguistics http://www.blackwell-synergy.com/servlet/useragent?func=showIssues&code=ijal International Journal of Language & Communication Disorders http://www.blackwell-synergy.com/servlet/useragent?func=showIssues&code=ijal Journal of English for Academic Purposes http://www.sciencedirect.com/science/journal/14751585 Journal of Language and Social Psychology http://jls.sagepub.com/ Journal of Research in Reading http://www.blackwell-synergy.com/servlet/useragent?func=showIssues&code=jrir Journal of Second Language Writing http://www.sciencedirect.com/science/journal/10603743 Language Assessment Quarterly http://www.leaonline.com/loi/laq Language & Communication http://www.sciencedirect.com/science/journal/02715309 Language and Education http://www.catchword.com/rpsv/cw/mm/09500782/contp1.htm Language Learning http://www.blackwell-synergy.com/servlet/useragent?func=showIssues&code=lang Language Learning and Technology (free online) http://llt.msu.edu/archives/default.html Language Teaching http://www.journals.cambridge.org/jid_LTA
Language Teaching Research http://www.catchword.com/rpsv/cw/arn/13621688/contp1.htm Language Testing http://www.ingentaselect.com/rpsv/cw/arn/02655322/contp1.htm Linguistics and Education http://www.sciencedirect.com/science/journal/08985898 Modern Language Association of America. Proceedings http://uk.jstor.org/journals/15393666.html The Modern Language Journal (recent issues) http://www.blackwell-synergy.com/servlet/useragent?func=showIssues&code=modl The Modern Language Journal (archived back issues) http://uk.jstor.org/journals/00267902.html Reading in a Foreign Language (free online peer reviewed) http://nflrc.hawaii.edu/rfl/ ReCALL http://www.journals.cambridge.org/jid_REC System http://www.sciencedirect.com/science/journal/0346251X Teaching English as a Second Language (free online peer reviewed) http://www.kyoto-su.ac.jp/information/tesl-ej/
Journals with a program evaluation focus
American Educational Research Journal (archived back issues) http://uk.jstor.org/journals/00028312.html The American Journal of Evaluation http://www.sciencedirect.com/science/journal/10982140 Assessment in Education: Principles, Policy & Practice http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1465-329X Assessment & Evaluation in Higher Education http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1469-297X Assessment Update http://www3.interscience.wiley.com/cgi-bin/jtoc?ID=86511121
British Educational Research Journal http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1469-3518 Educational Assessment http://www.leaonline.com/loi/ea Educational Evaluation and Policy Analysis (archived back issues) http://uk.jstor.org/journals/01623737.html Educational Management Administration & Leadership http://ema.sagepub.com/ Education Policy Analysis Archives (free online peer reviewed) http://epaa.asu.edu/epaa/ Educational Research http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1469-5847 Educational Research and Evaluation http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&issn=1380-3611 European Educational Research Journal (free online access to individuals) http://www.wwwords.co.uk/eerj/index.html Evaluation http://evi.sagepub.com/ Evaluation in Education: an International Review series http://www.sciencedirect.com/science/journal/0191765X Evaluation in Education: International Progress http://www.sciencedirect.com/science/journal/01459228 Evaluation and Program Planning http://www.sciencedirect.com/science/journal/01497189 Higher Education Research and Development http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1469-8366 International Journal of Educational Research (formerly known as Evaluation in Education) http://www.sciencedirect.com/science/journal/08830355 Journal of Multidisciplinary Evaluation (launched October 2004, free online) http://evaluation.wmich.edu/jmde/
Journal of Personnel Evaluation in Education http://www.kluweronline.com/issn/0920-525X/ The Journal of Technology, Learning, and Assessment (Boston College, USA) http://www.bc.edu/research/intasc/jtla.html New Directions for Evaluation http://www3.interscience.wiley.com/cgi-bin/jhome/85512890 Practical Assessment, Research & Evaluation (free online, peer-reviewed, strongly recommended) http://pareonline.net/ Research Papers in Education http://taylorandfrancis.metapress.com/openurl.asp?genre=journal&eissn=1470-1146 Studies in Educational Evaluation http://www.sciencedirect.com/science/journal/0191491X Studies in Learning, Evaluation, Innovation and Development (launched October 2004, free online) http://www.sleid.cqu.edu.au/
16.4 Professional associations
Professional associations with a language program focus
BAAL (British Association for Applied Linguistics) http://www.baal.org.uk/ BALEAP (British Association of Lecturers in English for Academic Purposes) http://www.baleap.org.uk/ British Council http://www.britishcouncil.org/ CAL (Center for Applied Linguistics) http://www.cal.org ELTECS (English Language Teaching Contacts Scheme) http://www.britishcouncil.org/eltecs IATEFL (International Association of Teachers of English as a Foreign Language) http://www.iatefl.org TESOL (Teaching English to Speakers of Other Languages) http://www.tesol.org/
Professional associations with a program evaluation focus
American Evaluation Association http://www.eval.org/ Canadian Evaluation Society http://www.evaluationcanada.ca/ City & Guilds http://www.city-and-guilds.co.uk/servlet/page?_pageid=680&_dad=cg2&_schema=PORTAL30 Department for International Development (DfID) http://www.dfid.gov.uk/ European Evaluation Society http://www.europeanevaluation.org/ National Foundation for Educational Research http://www.nfer.ac.uk/ Office for Standards in Education http://www.ofsted.gov.uk/ Qualifications and Curriculum Authority http://www.qca.org.uk/ Quality Assurance Agency for Higher Education http://www.qaa.ac.uk/ Trinity College London http://www.trinitycollege.co.uk/ UK Evaluation Society http://www.evaluation.org.uk/
16.5 Ethical guides and best practice codes
AERA (American Educational Research Association) http://www.aera.net/about/policy/ethics.htm American Psychological Association http://www.apa.org/science/researchethics.html British Association for Applied Linguistics http://www.baal.org.uk/ British Educational Research Association http://www.bera.ac.uk/
British Psychological Society http://www.bps.org.uk/index.cfm International Language Testing Association http://www.dundee.ac.uk/languagestudies/ltest/ilta/ilta.html
16.6 Email lists and bulletin boards
Subscribe to: AERA-D, sponsored by the AERA division that studies educational measurement and research methodology. [Send email to:
[email protected] with message: Subscribe AERA-D yourfirstname yourlastname (omit signature)] Subscribe to ARN-L – Assessment Reform Network, sponsored by FairTest and edresearch.org [Send e-mail to
[email protected] with message: Subscribe ARN-L yourfirstname yourlastname (omit signature)] Subscribe to: ASSESS – discussion on assessment in higher education. [Send e-mail to:
[email protected] with message: Subscribe ASSESS yourfirstname yourlastname (omit signature)] Subscribe to: ASSESS-P – Sponsored by the Psychological Assessment/ Psychometrics Forum at St. John’s University. Topics include clinical and research settings, psychometric theory and application. [Send e-mail to:
[email protected] with message: Subscribe ASSESS-P yourfirstname yourlastname (omit signature)] Subscribe to: EARLI-AE – Listserv of the European Association for Research on Learning & Instruction. EARLI-AE aims to promote and improve empirical and theoretical research regarding processes of learning, development, and instruction. Offers an Assessment and Evaluation SIG. [Send e-mail to:
[email protected] with message: Subscribe EARLI-AE yourfirstname yourlastname (omit signature)] Subscribe to: EVALINFO – General listserv of the American Evaluation Association. Circulates updated job bank information, AEA membership form, AEA meeting information and a list of AEA SIGs. [Send e-mail to:
[email protected] with message: Subscribe EVALINFO yourfirstname yourlastname (omit signature)] Subscribe to EVALTALK – American Evaluation Association (AEA) Discussion List. [Send e-mail to:
[email protected] with message: Subscribe EVALTALK yourfirstname yourlastname (omit signature)]
To unsubscribe from EVALTALK, send e-mail to
[email protected] with only the following in the body: UNSUBSCRIBE EVALTALK To get a summary of commands, send e-mail to
[email protected] with only the following in the body: INFO REFCARD To use the archives, go to: http://bama.ua.edu/archives/evaltalk.html Subscribe to: EVALTEN – Topical Evaluation Network on Evaluation Methodology and Statistics. Provides assistance for experimental design and analysis with a focus on mental health systems evaluation. [Send e-mail to:
[email protected] with message: Subscribe EVALTEN yourfirstname yourlastname (omit signature)] Subscribe to: GOVTEVAL – Regarding government/public sector program evaluation. [Send e-mail to:
[email protected] with message: Subscribe GOVTEVAL yourfirstname yourlastname (omit signature)] Subscribe to: K12ASSESS-L – K12ASSESS-L is a place for local assessment personnel to share and obtain resources, ideas, and support. Visit the K12ASSESS-L Home Page. [Send e-mail to:
[email protected] with message: Subscribe K12ASSESS-L yourfirstname yourlastname (omit signature)] Subscribe to: PSYCHOEDUCATIONAL_ASSESS – For those interested in psychoeducational assessment, especially special education related assessment. [Send e-mail to:
[email protected] with message: Subscribe PSYCHOEDUCATIONAL_ASSESS yourfirstname yourlastname (omit signature)] Subscribe to: VISTA – Topics include Values and Impact in Selection, Testing, and Assessment. Sponsored by the Educational Testing Service. [Send e-mail to:
[email protected] with message: Subscribe VISTA yourfirstname yourlastname (omit signature)]
16.7 Additional internet resources
Assessment and Evaluation on the Internet This site includes an index on Assessment & Evaluation of sources pertaining to educational and psychological testing. http://edres.org/intbod.stm Centre for Program Evaluation, University of Melbourne http://www.edfac.unimelb.edu.au/cpe CREATE (Consortium for Research on Educational Accountability and Teacher Evaluation) http://www.wmich.edu/evalctr/create/
Hosts two journals (Journal of Personnel Evaluation in Education and Studies in Educational Evaluation). CSTEEP (http://wwwcsteep.bc.edu/) Since its inception in 1980, the Center for the Study of Testing, Evaluation, and Educational Policy (CSTEEP) has conducted research on testing, evaluation, and public policy; studies to improve school assessment practices; and international comparative research. Evaluation Cookbook (Learning Technology Dissemination Initiative, Institute for Computer Based Learning, Heriot-Watt University, UK) http://www.icbl.hw.ac.uk/ltdi/cookbook/ A practical guide to evaluation methods for lecturers to evaluate e-learning and other materials. Includes ‘recipes’ for different evaluation methods; useful information drawing on the expertise of a range of practising evaluators; a framework for planning and preparing your evaluation; guidelines for reporting and acting on the results; and short exemplars of evaluation studies using some of the methods described. Evaluation Handbook http://www.ncela.gwu.edu/pubs/eacwest/evalhbk.htm Website includes guidelines on planning, designing, implementing, and reporting evaluations. It has a focus on language programme evaluation, in particular bilingual education programmes. Guidelines for the Evaluation of Ethnographic Visual Media http://etext.lib.virginia.edu/VAR/guide.html Guidelines for the evaluation of ethnographic visual media (specifically film, video, photography, and digital multimedia) for the production and application of anthropological knowledge. Handbook for Mixed Method Evaluations http://www.ehr.nsf.gov/EHR/REC/pubs/NSF97-153/START.HTM Guide to evaluation research using mixed methods. The Handbook seeks to introduce a broad perspective for evaluation research which combines both quantitative and qualitative techniques. The Handbook is divided into four parts: 1) Introduction to Mixed Method Evaluations; 2) Overview of Qualitative Methods and Analytic Techniques; 3) Designing and Reporting Mixed Method Evaluations; and 4) Supplementary Materials containing an annotated bibliography and glossary. International Association for Educational Assessment http://www.iaea.info/ International Test Commission (ITC) http://www.intestcom.org/
National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA http://www.cse.ucla.edu/ For more than 36 years the UCLA Center for the Study of Evaluation (CSE) (see chapter 2) and, more recently, the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) have been at the forefront of efforts to improve the quality of education and learning in America. Extensive collection of research reports. National Council on Open & Distance Learning – Evaluation and Quality Issues http://cedir.uow.edu.au/NCODE/ University of Wollongong (Australia) website with information on evaluation methodology and links to other useful evaluation-related sites. Note on enhancing stakeholder participation in aid activities. London: Department for International Development. www.carryon.oneworld.org/euforic/gb/stake2.htm Richard A. Krueger’s site http://www.tc.umn.edu/~rkrueger/ Website includes brief guides on various evaluation topics, such as strategies and logic models, in addition to guidelines on surveys, interviewing, observations, and focus groups. SearchERIC http://searcheric.org/ or www.eric.ed.gov This website has tools to search abstracts and digests produced by the ERIC system. SOSIG (Social Science Information Gateway) http://www.sosig.ac.uk Teachers’ College Record (the Voice of Scholarship in Education) on Assessment and Evaluation http://www.tcrecord.org/CollectionMain.asp?CollectionID=3 An extensive collection of program evaluation methods, email list discussions, new book announcements, and other research and development centres/organisations in educational evaluation. Register for easy access to resources. The Common European Framework of Reference for Languages: Learning, Teaching, and Assessment http://culture2.coe.int/portfolio//documents/0521803136txt.pdf The Joint Committee on Standards for Educational Evaluation (USA) http://www.wmich.edu/evalctr/jc/ The summaries of the standards for educational evaluation (see chapter 2 above) are listed.
The Western Michigan University Evaluation Center http://www.wmich.edu/evalctr/ Extensive collection of websites with practical guides; evaluation checklists for designing, budgeting, contracting, staffing, managing, and assessing evaluations of programs, personnel, students and other evaluands; collecting, analysing and reporting evaluation information; and determining merit, worth, and significance; evaluation reports and other publications such as occasional papers/presentations, instructional materials, books by the centre staff, and the Journal of Multidisciplinary Evaluation (see section 16.3). United Nations Development Programme, Evaluation Office www.undp.org/undp/eo Website with guidelines for doing programme evaluations, evaluation documents and links to other evaluation sites. W. K. Kellogg Foundation Evaluation Handbook http://www.wkkf.org/Pubs/Tools/Evaluation/Pub770.pdf Primarily for project directors who have direct responsibility for the ongoing evaluation of W. K. Kellogg Foundation funded projects. The handbook gives details for designing and conducting evaluations. Case studies are presented which provide real examples of ways in which evaluation can support projects (pdf format). Web resources for evaluation and innovation, and quantitative and qualitative research methods. http://www.kcl.ac.uk/education/fdtl/ These resources have been developed as part of the DATA Project (Thames Valley University and King’s College London) funded by the Fund for the Development of Teaching and Learning (HEFCE/QAA). World Wide Web Virtual Library: evaluation (also known as Evaluation Virtual Library, EVL) http://www.policy-evaluation.org/ Website has links to evaluation societies, other related organisations, educational material, discussion groups, mailing lists, journals and newsletters.
Postscript
In this volume, we have traced the development of evaluation as a field of both social science enquiry and Applied Linguistics. Throughout, we have drawn attention to the different purposes for evaluation and demonstrated – through the case studies in Part 2 in particular – how these purposes may be articulated and operationalised within the context of evaluation practice. We have also sought to show the value of learning, a dimension of evaluation that has not, perhaps, received as much attention as it deserves. We perceive learning in the evaluation context in two ways: learning through evaluation as well as learning from evaluation. Learning from evaluation findings is, we would argue, what all evaluations should demonstrate. Interestingly, however, we have found few authors in the field of English language education who explicitly signal this dimension (notably Alderson and Scott 1992; see Data 12.3, p. 205). Learning from evaluation findings is also crucial in respect of ‘evaluation leading into decision-making, action and actual utilisation’. As we have indicated elsewhere in this volume, if little or no change follows from an evaluation, then what were the reasons for doing it in the first place? It thus becomes important to ask of evaluations questions such as: What has been learned from this evaluation? What are the implications for practice? What decisions have been taken and which recommendations made? Who is responsible for the recommended action? And, further, what are the effects of any changes made? Evaluations may be typified by a lack of obvious synergy between the research and evaluation communities in relation to the constructs implicit in a given programme and the successful implementation of the innovation; thus, there may be a failure to learn about ‘theory in practice’. But learning through evaluation is, we argue, an important consequence of any evaluative activity (small- or large-scale). Throughout this volume we have drawn attention to ways in which stakeholders may be involved either as informants or as active participants (see section 12.3, p. 206), and have also shown (for example, chapters 5, 13, 14 and 15) how the evaluation context provides a prime
site and opportunity for sharpening evaluation skills, and for the involvement of several program players in shaping and implementing evaluation studies. In addition to these benefits, others should also accrue in the form of heightened awareness and understanding of program practices. We have also seen how compromises may have to be made or, indeed, may be imposed in relation to evaluation design, procedures or implementation. Demonstrating value for money – a key goal for many evaluations – generally implies a more categorical approach with evidence of impact sought through numerical indicators. In turn, this might lead to (sponsor) preferences for one-off survey approaches as opposed to engaging with program implementation processes. As we have illustrated in several chapters, there are limits to survey approaches: they may well identify trends in professional practice or in the implementation of an innovation, but whether they have the capacity to inform in the detail required is uncertain. Compromises may also result from stringent financial constraints or resourcing more generally. Evaluation budgets may be limited and pressure brought to bear on short-circuiting the piloting of evaluation procedures, training processes, or the number of programme staff who are involved. This, in turn, may compromise the integrity of the data collection or analytic procedures used, through an imposed trade-off between concerns for validity and pragmatic considerations. As a consequence of such compromises, opportunities for learning through evaluation may be lost. In terms of more general opportunities for evaluation skills development, these skills – as well as expertise in areas of language testing and assessment – may be acquired through teacher education programmes and as part of professional learning at Masters level. However, few Masters-level programmes have core units in these areas, preferring to go down the optional unit line; few other opportunities for acquiring the specific skills required for planning, designing and implementing evaluation studies seem available. We have positioned evaluation as a socially mediated and contextualised activity, with meanings largely developed within particular communities of practice through stakeholder participation. Evaluation purposes and aims need to be articulated within specific contexts of professional practice; likewise, the evaluation procedures that are developed to achieve these aims should have resonance not simply with the evaluation goals established for a program, but also with the context of the evaluation implementation. Further, we have made explicit connections between evaluation and research and suggest that evaluations may be enriched through drawing on relevant empirical studies. Whilst the imperatives for the two activities (evaluation and research) may have different defining features, we are of the view that evaluations may develop greater explanatory power through greater articulation with the research community. By the same token, the research
community should be receptive to findings from evaluation studies that may, in turn, serve to sharpen the focus of subsequent research. Finally, we underscore our position that evaluation is socially situated and, hence, the importance of dialogue as a means of opening up communication between evaluation players, irrespective of the locus of power (see Data 12.4, p. 206) among the stakeholder groups. Constraints in time – a theme emerging from a number of the studies in Part 2 – may, however, lead to limited opportunities for discussion and debate, thereby limiting the knowledge base of program participants. This will affect the level of subsequent action and, ultimately, program sustainability. On the one hand, marginal participation of stakeholders in evaluations may impact in various ways: (1) on the extent to which data, findings and recommendations will generate discussion, which, in turn, may limit (2) uptake, and (3) ultimate sustainability in terms of program development. On the other hand, engaging with stakeholder perspectives through enhanced opportunities for dialogue throughout evaluation processes carries a number of benefits: enhanced opportunities to hear different voices, enhanced validity of the evaluation through, for example, better content coverage and the framing of the ‘right’ questions, and, above all, enhanced opportunities for learning both from and through evaluation activities. Dialogue and debate will, thus, strongly influence the extent to which evaluation findings will be acted upon and, ultimately, used to secure better professional practices, both within and outside the classroom.
Bibliography
Alderson, J. C. 1992. Guidelines for the evaluation of language education. In J. C. Alderson and A. Beretta (Eds.), pp. 274–304. Alderson, J. C. and A. Beretta (Eds.) 1992. Evaluating Second Language Education. Cambridge: Cambridge University Press. Alderson, J. C. and M. Scott 1992. Insiders, outsiders and participatory evaluation. In J. C. Alderson and A. Beretta (Eds.), pp. 25–58. Arkoudis, S. and K. O’Loughlin 2004. Tensions between validity and outcomes: teachers’ assessment of written work of recently arrived immigrant ESL students. Language Testing 20(3): 284–304. Arva, V. and P. Medgyes 2000. Native and non-native teachers in the classroom. System 28: 355–72. Aspinwall, K., T. Simkins, J. F. Wilkinson and M. J. McAuley 1992. Managing Evaluation in Education. London: Routledge. Association for Science Education. http://www.scienceacross.org. Bailey, K. 1999. Washback in Language Testing (TOEFL monograph series). Educational Testing Service ftp://ftp.ets.org/pub/toefl/Toefl-MS-15.pdf. Bardi, M., G. Chefneux, D. Comanetchi, and T. Magureanu 1999. The PROSPER Project – Innovation in Teaching English for Specific Purposes in Romania – A Study of Impact. Bucharest: The British Council/Cavalliotti. Bell, J. S. 2002. Narrative research in TESOL. TESOL Quarterly 36(2): 207–12. Bennett, S. N. 1975. Weighing the evidence: A review of Primary French in the balance. British Journal of Educational Psychology 45: 337–40. Benson, P. 2001. Autonomy in Language Learning. Harlow: Pearson Education. Beretta, A. 1986a. Towards a methodology of ESL programme evaluation. TESOL Quarterly 20(1): 144–55. Beretta, A. 1986b. Programme-fair language teaching programme evaluation. TESOL Quarterly 20(3): 431–44. Beretta, A. 1986c. A case for field experimentation in programme evaluation. Language Learning 36(3): 295–309. Beretta, A. 1987. Evaluation of a language-teaching project in South India. PhD thesis, University of Edinburgh. Beretta, A. 1989a. Attention to form or meaning? Error treatment in the Bangalore Project. TESOL Quarterly 23(2): 283–303. Beretta, A. 1989b. Who should evaluate L2 programmes? In C. J. Brumfit and R. Mitchell (Eds.), pp. 155–60. Beretta, A. 1990. Implementation of the Bangalore Project. Applied Linguistics 11(1): 1–15. Beretta, A. 1992a. Evaluation of language education: an overview. In J. C. Alderson and A. Beretta (Eds.), pp. 5–24. Beretta, A. 1992b. What can be learnt from the Bangalore evaluation? In J. C. Alderson and A. Beretta (Eds.), pp. 250–71. Beretta, A. 1992c. Editor’s postscript to Alderson and Scott 1992. In J. C. Alderson and A. Beretta (Eds.), pp. 58–61. Beretta, A. and A. Davies 1985. Evaluation of the Bangalore Project. English Language Teaching Journal 39(2): 121–7.
Biggs, J. 2001. The reflective institution: Assuring and enhancing the quality of teaching and learning. Higher Education 41: 221–38. Blaxter, L., C. Hughes and M. Tight 1996. How to Research. Buckingham: Open University Press. Block, D. 1998. Tale of a language learner. Language Teaching Research 2(2): 148–76. Block, D. 2003. The Social Turn in Second Language Acquisition. Edinburgh: Edinburgh University Press. Blue, G. and P. Grundy 1996. Team evaluation of language teaching and language courses. English Language Teaching Journal 50(3): 244–53. Board of Studies 2000. ESL Companion to the English CSF. Carlton: Board of Studies. Boyle, R. 1996. Modelling oral presentations. English Language Teaching Journal 50(2): 115–26. Bowers, R. 1983. Project planning and performance. ELT Docs 116: 90–120. Braine, G. (Ed.) 1999. Non-Native Educators in English Language Teaching. New Jersey: Lawrence Erlbaum Associates. Breen, M. 1985. The social context of language learning – a neglected situation? Studies in Second Language Acquisition 7(2): 135–58. Breen, M. P., C. Barratt-Pugh, B. Derewianka, H. House, C. Hudson, T. Lumley and M. Rohl 1997. How Teachers Interpret and Use National and State Assessment Frameworks Vols 1–3. Canberra: Department of Employment, Education, Training and Youth Affairs. Breen, M. P. and C. N. Candlin 1980. The essentials of a communicative curriculum in language teaching. Applied Linguistics 1(2): 89–112. Brindley, G. 1989. The role of needs analysis in adult ESL program design. In R. K. Johnson (Ed.), The Second Language Curriculum, pp. 63–78. Brindley, G. 1998. Outcomes-based assessment and reporting in language learning programmes: a review of the issues. Language Testing 15(1): 45–85. Brindley, G. 2001. Outcomes-based assessment in practice: some examples and emerging insights. Language Testing 18(4): 393–408. British Association for Applied Linguistics (BAAL) 1994. Recommendations on good practice in Applied Linguistics. http://www.baal.org.uk/goodprac.htm. British Educational Research Association (BERA) 2004. Revised Ethical Guidelines for Educational Research. Southwell: BERA. http://www.bera.ac.uk. Brivati, B. 2000. Don’s diary. The Times Higher Education Supplement 4 February: 8. Brown, J. D. 1989. Language programme evaluation: a synthesis of existing possibilities. In R. K. Johnson (Ed.), pp. 242–61. Brown, K. and M. Brown 1996. New Contexts for Modern Language Learning: Crosscurricular Approaches. London: CILT. Brown, K. and M. Brown 1998. Changing Places: Cross-curricular Approaches to Teaching Languages. London: CILT. Brumfit, C. J. 1984. The Bangalore procedural syllabus. English Language Teaching Journal 38(4): 233–41. Brumfit, C. J. and R. Mitchell 1989. Research in the Language Classroom. ELT Docs 133. Modern English Publications. Buckby, M. 1976. Is primary French really in the balance? Audio Visual Language Journal 14(1): 15–21. Burstall, C., M. Jamieson, S. Cohen, and M. Hargreaves 1974. Primary French in the Balance. Slough: National Foundation for Educational Research. Cameron, L. 2001. Teaching Languages to Young Learners. Cambridge: Cambridge University Press.
Candlin, C. N. 2004. Constructing concordance: Discursive and ethical aspects of compliance in medical settings. Plenary address to COMET 2004, Stockholm. Candlin, C. N. and D. F. Murphy (Eds.) 1987. Language Learning Tasks. Lancaster Practical Papers in English Language Education Vol. 7. Englewood Cliffs, NJ: Prentice-Hall International. Carr, M. and G. Claxton 2002. Tracking the development of learning dispositions. Assessment in Education 9(1): 9–37. Celani, M. M. A., J. L. Holmes, R. C. G. Ramos and M. R. Scott 1988. The Brazilian ESP Project: An Evaluation. Sao Paulo: Editora de PUC-SP. Centre for Canadian Language Benchmarks 2000. Canadian Language Benchmarks: English as a Second Language for Adults. Ottawa: Centre for Canadian Language Benchmarks. http://www.language.ca/. Chapelle, C. 2001. Computer Applications in Second Language Acquisition: Foundations for Teaching, Testing and Research. Cambridge: Cambridge University Press. Chamot, A. and M. O’Malley 1992. The cognitive academic language learning approach: a bridge to the mainstream. In Richard-Amato and M. A. Snow (Eds.), The Multicultural Classroom: Reading for Content Area Teachers. New York: Longman, pp. 39–57. Chamot, A. and M. O’Malley 1994. The CALLA Handbook. Reading, MA: Addison-Wesley. Chapelle, C. 2003. English Language and Learning Technologies: Lectures on Applied Linguistics in the Age of Communication Technology. Amsterdam: John Benjamins. Chater, M. F. T. 1998. The gods of mismanagement: a mythological exploration of our management predicament. School Leadership and Management 18(2): 231–8. Checkland, P. and J. Scholes 1999. Soft Systems Methodology in Action. Chichester: Wiley. Clandinin, D. J. and F. M. Connelly 2000. Narrative Enquiry: Experience and Story in Qualitative Research. San Francisco: Jossey-Bass. Clark, R., R. Kiely, A. Fortune and J. O’Regan 1996. Integrated EFL and British Studies – Students’ Perceptions in SIGMA. Newsletter of the Cultural Studies and Literature Special Interest Group of IATEFL, pp. 6–16. Clegg, J. C., P. Rea-Dickins and M. Kobayashi 1999. Report on the Phase I Visit to the English Language Teacher Development Project. University of Warwick: Language Testing and Evaluation Unit. Clegg, J., M. Kobayashi and P. Rea-Dickins 2000. Language Research for the Science Across Europe Programme. Final Report, Parts I and II. University of Warwick: Language Testing and Evaluation Unit. Coelho, E. 1992. Co-operative learning: foundation for a communicative curriculum. In C. Kessler (Ed.), Cooperative Language Learning. Englewood Cliffs, NJ: Prentice Hall, pp. 31–49. Cohen, L., L. Manion and K. Morrison 2001. Research Methods in Education. London: Routledge. Coleman, H. 1992. Moving the goalposts: project evaluation in practice. In Alderson and Beretta (Eds.), pp. 222–46. Cook, G. 2000. Language Play, Language Learning. Oxford: Oxford University Press. Cook, G. and B. Seidlhofer (Eds.) 1995. For H. G. Widdowson: Principles and Practice in the Study of Language. Oxford: Oxford University Press. Council for Cultural Cooperation 2001. Common European Framework of Reference for Languages: Learning, Teaching and Assessment. Cambridge: Cambridge University Press. Council of Europe 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.
Crabbe, D. 2003. The Quality of Language Learning Opportunities. TESOL Quarterly 37(1): 9–34. Crandall, J. 1987. ESL through Content-Area Instruction. Englewood Cliffs, NJ: Prentice-Hall Regents. Cronbach, L. J., S. R. Ambron, S. M. Dornbush, R. D. Hess, R. C. Hornik, D. C. Phillips, D. F. Walker and S. S. Weiner 1980. Towards Reform in Program Evaluation. San Francisco: Jossey-Bass. Crookes, G. 1993. Action research for second language teachers: going beyond teacher research. Applied Linguistics 14(2): 130–44. Crossley, M. and J. A. Bennett 1997. Planning for case-study evaluation in Belize, Central America. In Crossley and Vulliamy (Eds.), pp. 221–43. Crossley, M. and G. Vulliamy (Eds.) 1997. Qualitative Educational Research in Developing Countries. New York: Garland Publishing. Csikszentmihalyi, M. 1997. Finding Flow. New York: Basic Books. Cumming, A. 2001. The difficulty of standards, for example in L2 writing. In T. Silva and P. Matsuda (Eds.), On Second Language Writing. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 209–29. Cunningsworth, A. 1995. Choosing Your Course Book. Second edition. Oxford: Heinemann. Dam, L. 1995. Autonomy: From Theory to Classroom Practice. Dublin: Authentik Language Learning Resources. Dam, L. (Ed.) 2001. Learner Autonomy: New Insights. AILA Review 15. West Yorkshire: The English Company (UK). Davies, A. 1991. The Native Speaker in Applied Linguistics. Edinburgh: Edinburgh University Press. Davies, A. 2003. The Native Speaker: Myth and Reality. Clevedon: Multilingual Matters. Davison, C. 2004. The contradictory culture of teacher-based assessment: ESL teacher assessment practices in Australian and Hong Kong secondary schools. Language Testing 21(3): 305–34. DeKeyser, R. 1997. Beyond explicit rule learning: Automatizing second language morphosyntax. Studies in Second Language Acquisition 19: 195–221. Delbecq, A. 1975. Group Techniques for Programme Planning: A Guide to Nominal Group and Delphi Processes. Glenview, IL: Scott Foresman. Department for Education and Employment (DfEE) 1997. Preparing for the Information Age: Synoptic Report of the Education Departments’ Superhighways Initiative. London: HMSO. Department for Education and Skills (DfES) 1998. Evaluation of the National Literacy Project. Slough: National Foundation for Educational Research (NFER). http://www.dfes.gov.uk. Department of Education, Training and Employment (DETE) 2002. South Australia Curriculum, Standards and Accountability Framework: English as a Second Language. South Australia: DETE. Dornyei, Z. 2003. Questionnaires in Second Language Research: Construction, Administration, and Processing. Mahwah, NJ: Lawrence Erlbaum Associates. Drever, E. 1995. Using Semi-structured Interviews in Small-scale Research. Edinburgh: Scottish Council for Research in Education. EELTS Handbook 1987. Hong Kong: British Council. Eisner, E. W. 1977. On the uses of educational connoisseurship and criticism for evaluating classroom life. Teachers College Record 78(3): 345–58. Eisner, E. 1985. The Art of Educational Evaluation. London: Falmer Press.
Elliott, J. 2001. Making evidence-based practice educational. British Educational Research Journal 27(5): 555–74. Ellis, R. 2003. Task-based Language Learning and Teaching. Oxford: Oxford University Press. Ellis, G. and B. Sinclair 1989. Learning to Learn English. Cambridge: Cambridge University Press. EAQUALS (European Association for Quality Language Services) http://www.eaquals.org/. Fairclough, N. 1995. Critical Discourse Analysis: the Critical Study of Language. Harlow: Longman. Fayol, H. 1952. General and Industrial Management. London: Pitman. Ferguson, G. and S. Donno 2003. One-month teacher training courses: time for a change? English Language Teaching Journal 27(1): 26–33. Fetterman, D. M. 1988. Qualitative Approaches in Evaluation in Education: The Silent Revolution. New York: Praeger. Fortune, A. and D. Thorp 2001. Knotted and entangled: new light on the identification, classification and value of language related episodes in collaborative output tasks. Language Awareness 10(2/3): 143–60. Foster, P. and P. Skehan 1996. The influence of planning on performance in task-based learning. Studies in Second Language Acquisition 18: 299–324. Fresko, B. 2002. Faculty views of student evaluation of college teaching. Assessment and Evaluation in Higher Education 27(2): 187–203. Fresko, B. and F. Nasser 2001. Interpreting student ratings: consultation, instructional modification, and attitudes towards course evaluation. Studies in Educational Evaluation 27: 291–305. Frolich, M., N. Spada, and P. Allen 1985. Differences in the communicative orientation of L2 classrooms. TESOL Quarterly 19(1): 27–57. Fullan, M. 1991. The New Meaning of Educational Change. London: Cassell. Fullan, M. 1998. Linking change and assessment. In P. Rea-Dickins and K. Germaine (Eds.), pp. 253–62. Gardner, R. C., R. Clément, P. C. Smythe and C. L. Smythe 1979. The Attitude/Motivation Test Battery – revised manual. Research Bulletin 15. London, Canada: Language Research Group, University of Western Ontario. Gardner, S. and P. Rea-Dickins 2002. Focus on Language Sampling: A Key Issue in EAL Assessment. London: NALDIC Publishing Group. Gass, S. and C. Madden (Eds.) 1985. Input and Second Language Acquisition. Rowley, MA: Newbury House. Germaine, K. and P. Rea-Dickins 1998. Some relationships between management, evaluation and development. In P. Rea-Dickins and K. Germaine (Eds.), pp. 157–74. Gibbs, G. (Ed.) 1994. Improving Student Learning – Theory and Practice. Oxford: Oxford Centre for Staff Development. Gibbs, G. (Ed.) 1995. Improving Student Learning through Assessment and Evaluation. Oxford: Oxford Centre for Staff Development. Giddens, A. 1998. The Third Way – a Renewal of Social Democracy. Cambridge: Polity Press. Gitlin, A. and J. Smyth 1989. Teacher Evaluation: Educative Alternatives. Lewes: Falmer Press. Goodfellow, R. 2003. Online Literacies and Learning – Cultural and Critical Dimensions in a Virtual Power Struggle. Presentation at Multiliteracies Conference, University of Ghent. http://iet.open.ac.uk/pp/r.goodfellow/research.htm. Goodfellow, R. 2004. Online literacies and learning: operational, cultural and critical dimensions. Language and Education. http://iet.open.ac.uk/pp/r.goodfellow/research.htm.
Goodfellow, R., M. Morgan, M. Lea and J. Pettit 2004. Students’ writing in the virtual university: An investigation into the relation between online discussion and writing for assessment on two Master’s courses. In I. Snyder and C. Beavis (Eds.), Doing Literacy Online: Teaching, Learning and Playing in an Electronic World. New Jersey: Hampton Press. http://iet.open.ac.uk/pp/r.goodfellow/research.htm. Graves, K. (Ed.) 1996. Teachers as Course Developers. Cambridge: Cambridge University Press. Green, D. (Ed.) 1994. What is Quality in Higher Education? Milton Keynes: Open University Press/The Society for Research into Higher Education. Greenwood, J. 1985. Bangalore revisited: a reluctant complaint. English Language Teaching Journal 39(4): 268–73. Gregory, R. D., G. Harland, and L. Thorley 1995. Using the student experience questionnaire for improving teaching and learning. In G. Gibbs (Ed.), Improving Student Learning through Assessment and Evaluation. Oxford: Oxford Centre for Staff Development, pp. 210–16. Grey, D. 1999. The Internet in School. London and New York: Cassell. Guba, E. (Ed.) 1990. The Paradigm Dialog. Newbury Park, CA: Sage. Guba, E. and Y. Lincoln 1989. Fourth Generation Evaluation. Newbury Park: Sage. Hall, D. R. and A. Hewings (Eds.) 2001. Innovation in English Language Teaching: A Reader. London: Routledge. Halliday, M. A. K. 1974. Explorations in the Functions of Language. London: Edward Arnold. Hargreaves, D. 1996. Teaching as a research-based profession: possibilities and prospects. Teacher Training Agency Annual Lecture. London: Teacher Training Agency. Hargreaves, D. 1997. In defence of research for evidence-based teaching: a rejoinder to Martyn Hammersley. British Educational Research Journal 23(4): 405–19. Hargreaves, D. 1999. Revitalising educational research: lessons from the past and proposals for the future. Cambridge Journal of Education 29(3): 239–49. Harklau, L. 1994. ESL versus mainstream classes: contrasting L2 learning environments. TESOL Quarterly 28(2): 241–72. Harlen, W. and J. Elliott 1982. A checklist for planning or reviewing an evaluation. In R. McCormick et al. (Eds.), pp. 296–304. Harris, J. 1990. The second language program-evaluation literature: accommodating experimental and multi-faceted approaches. Language, Culture and Curriculum 3(1): 83–92. Harris, J. and M. Conway 2002. Modern Languages in Irish Primary Schools – An Evaluation of the National Pilot Primary Project. Dublin: Institiúid Teangeolaíochta Éireann. Harris, J. and M. Murtagh 1999. Teaching and Learning Irish in Primary School. A Review of Research and Development. Dublin: Institiúid Teangeolaíochta Éireann. Harris, J. and D. O’Leary 2002. National Survey of Principals of Schools Involved in the Modern Languages in Primary Schools Initiative in Ireland. Dublin: Institiúid Teangeolaíochta Éireann. Haselgrove, S. (Ed.) 1994. The Student Experience. Milton Keynes: Open University Press/The Society for Research into Higher Education. Haynes, R. B., D. L. Sackett, J. A. Muir Gray, D. L. Cook and G. H. Gyatt 1977. Transferring evidence from research into practice. Evidence-based Practice 2: 4–6. Hedge, T. 1992. ELT Resources Centres in Hungary. London: British Council. Hedge, T. 2000. Teaching and Learning in the Language Classroom. Oxford: Oxford University Press.
Henry, C. 1994. An ethical perspective. In S. Haselgrove (Ed.), pp. 108–16. Holliday, A. R. 1992. Tissue rejection and informal orders in ELT projects: collecting the right information. Applied Linguistics 13(4): 403–24. Holliday, A. R. 1998. Evaluating the discourse: the role of applied linguistics in the management of evaluation and innovation. In Rea-Dickins and Germaine (Eds.), pp. 195–219. Holliday, A. R. 2002. Doing and Writing Qualitative Research. London: Sage. Hopkins, D. 1989. Evaluation for School Development. Milton Keynes: Open University Press. House, E. R. (Ed.) 1986. New Directions in Educational Evaluation. Lewes: Falmer Press. International Language Testing Association (ILTA). Code of Ethics in Language Testing. http://www.dundee.ac.uk/languagestudies/ltest/ilta/code.pdf. Jacobs, C. 2000. The evaluation of educational innovation. Evaluation 6(3): 261–80. Johnson, R. K. (Ed.) 1989. The Second Language Curriculum. Cambridge: Cambridge University Press. Johnstone, R. 1999. A research agenda for modern languages in the primary school. In P. Driscoll and D. Frost (Eds.), The Teaching of Modern Foreign Languages in the Primary School. London: Routledge, pp. 197–209. Johnstone, R. 2000. Context-sensitive assessment of modern languages in primary (elementary) school and early secondary education: Scotland and the European experience. Language Testing 17(2): 123–43. Joint Committee on Standards for Educational Evaluation 1981. Standards for Evaluation of Educational Programs, Projects and Materials. New York: McGraw Hill. Kember, D., D. Y. P. Leung and K. P. Kwan 2002. Does the Use of Student Feedback Questionnaires Improve the Overall Quality of Teaching? Assessment and Evaluation in Higher Education 27(5): 411–25. Kemmis, S. 1986. Seven principles for programme evaluation in curriculum development and innovation. In House (Ed.), pp. 117–40. Kennedy, C. 1988. Evaluation of the management of change in ELT projects. Applied Linguistics 9(4): 329–42. Kiely, R. 1994. Teacher education curriculum development – options for action. In R. Kiely (Ed.), Proceedings of PACE Curriculum Development Conference, Kazimierz Dolny: Thames Valley University/University of Warsaw. Kiely, R. 1998. Programme evaluation by teachers: issues of policy and practice. In Rea-Dickins and Germaine (Eds.), pp. 78–104. Kiely, R. 1999. Evaluation by teachers. IATEFL 147: 14–16. Kiely, R. 2000. Program evaluation by teachers: an observational study. Unpublished PhD thesis, University of Warwick. Kiely, R. 2001. Classroom evaluation – values, interests and teacher development. Language Teaching Research 5(3): 241–61. Kiely, R. 2003. What works for you? A group discussion approach to programme evaluation. Studies in Educational Evaluation 29(4): 293–314. Kiely, R. 2004. Learning to critique in EAP. Journal of English for Academic Purposes 3(3): 211–27. Kiely, R., G. Clibbon, P. Rea-Dickins, C. Walter and H. Woodfield 2004. Teachers into Researchers. London: CILT. Kiely, R. and H. Komorowska 1998. Quality and impact: The evaluation agenda. Opening Plenary at conference organised by the British Council and Polish Ministry of Education. Published as Quality and impact: The evaluation agenda. In J. P. Melia (Ed.), Innovations and Outcomes in English Language Teacher
Education – Proceedings of the PRINCE Conference, Popowo, Poland. Warsaw: British Council, pp. 15–39. Kiely, R. and A. Murkowska 1993. Initiating evaluation activity: Insider and outsider perspectives. In Kiely et al. (Eds.), pp. 60–72. Kiely, R., D. F. Murphy, P. Rea-Dickins and M. I. Reid (Eds.) 1993. Evaluation in Planning and Managing Language Education Projects – Papers of the First PRODESS Colloquium. Manchester: British Council. Kiely, R., D. F. Murphy, P. Rea-Dickins and M. I. Reid (Eds.) 1995a. Evaluation in Planning and Managing Language Education Projects – Papers of the Second PRODESS Colloquium. Manchester: British Council. Kiely, R., D. F. Murphy, P. Rea-Dickins and M. I. Reid 1995b. PRODESS Evaluation Guidelines. Manchester: British Council. Kiely, R. and M. I. Reid 1996. Evaluation in Projects – The PRODESS (Project Development Support Scheme) experience. In R. Webber (Ed.), The Dunford Seminar Report, pp. 102–9. Kiernan, P. 2004. Cultural imperialism or a bridge to the outside world? The native speaker in EFL in Japan today. Paper presented at Annual Meeting of the British Association for Applied Linguistics, King’s College London. Kirton, M. A., B. Davies, R. K. Ip, K. S. Johnson, A. Lee, J. Poon, J. Shillaw and G. Tang 1989. Final Evaluation Report: Expatriate English Language Teachers Pilot Scheme. Hong Kong: British Council. Kramsch, C. (Ed.) 2002. Language Acquisition and Language Socialization. London: Continuum. Krueger, R. 1994. Focus Groups: A Practical Guide. London: Sage. Kubota, R. 1999. Japanese culture constructed by discourses: Implications for Applied Linguistics research and ELT. TESOL Quarterly 33(1): 9–35. Kubota, R. 2003. The author responds: (Un)raveling racism in a nice field like TESOL. TESOL Quarterly 37(1): 84–92. Kuckartz, U. 1998. Scientific Text Analysis for the Social Sciences: User’s Guide. London: Scolari Sage Publications. http://www.scolari.co.uk. Kushner, S. 1996. The limits of constructivism. Evaluation 2(2): 189–200. Kvale, S. 1996. InterViews: An Introduction to Qualitative Research Interviewing. London: Sage. Lai, Ju-Fang 2003. The native English speaker teacher scheme in Taiwan: A study of teacher and student attitudes. Unpublished MEd dissertation, University of Bristol. Lam, T., A. Cumming and D. Lang 2001. External Evaluation of the Centre for Canadian Language Benchmarks. Toronto: Ontario Institute for Studies in Education, The University of Toronto. Lantolf, J. 1996. Second language theory building: letting all the flowers bloom! Language Learning 46: 713–49. Lantolf, J. 2000. Second language learning as a mediated process. State of the art article. Language Teaching 33(2): 79–96. Lantolf, J. and M. Poehner 2004. Dynamic assessment of L2 development: Bringing the past into the future. Journal of Applied Linguistics 1(1): 49–72. Laurillard, D. M. 1994. Rethinking University Teaching: A Framework for the Effective Use of Educational Technology. London: Routledge. Lawrence, L. 1995. Using evaluation to improve teacher education programmes. In P. Rea-Dickins and A. F. Lwaitama (Eds.), pp. 75–86. Legutke, M. and H. Thomas 1991. Process and Experience in the Language Classroom. Harlow: Longman.
Li, E. and G. James (Eds.) 1998. Testing and Evaluation in Second Language Education. Hong Kong: Hong Kong University of Science and Technology. Lincoln, Y. 2001. The future of evaluation in a new millennium. Plenary address to AZENet Spring conference. http://ww.aspin.asu.edu.azenet.fall01.html. Little, D. (Ed.) 1989. Self-Access Systems for Language Learning. Dublin: Authentik. Little, D. 1991. Learner Autonomy 1: Definition, Issues and Problems. Dublin: Authentik. Loh, W. K. 1995. Formative evaluation at work in Melaka. In P. Rea-Dickins and A. F. Lwaitama (Eds.), pp. 29–39. Long, M. H. 1984. Process and product in ESL programme evaluation. TESOL Quarterly 18(3): 409–25. Long, M. H. 1985. Input and second language acquisition theory. In S. Gass and C. Madden (Eds.), pp. 377–93. Long, M. H. 1991. Focus on form: A design feature in language teaching methodology. In K. de Bot, R. Ginsberg, and C. Kramsch (Eds.), Foreign Language Research in Cross-cultural Perspective. Amsterdam: John Benjamins, pp. 39–52. Lowe, G. 1995. Answerability in attitude measurement questionnaires: an applied linguistic study of reactions to ‘statement plus rating’ pairs. Unpublished PhD thesis, University of York. Lynch, B. 1996. Language Programme Evaluation. Cambridge: Cambridge University Press. Lynch, B. 2002. Language Assessment and Programme Evaluation. Edinburgh: Edinburgh University Press. MacDonald, B. 1976. Evaluation and control of education. In D. A. Tawney (Ed.), Curriculum Evaluation Today: Trends and Implications. London: Macmillan Education, pp. 125–36. McDonough, J. and C. Shaw 2003. Materials and Methods in ELT. Second edition. Oxford: Blackwell. Mackay, R. 1994. Undertaking ESL/EFL programme review for accountability and improvement. English Language Teaching Journal 48(2): 142–9. Mackay, R., S. Wellesley, and E. Bazergan 1995. Participatory evaluation. English Language Teaching Journal 49(4): 308–17. Mark, M. M., G. T. Henry and G. Julnes 2000. Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs. San Francisco: Jossey-Bass. Markee, N. 1993. The diffusion of innovation in language teaching. Annual Review of Applied Linguistics 13: 229–43. Markee, N. 1997. Managing Curricular Innovation. Cambridge: Cambridge University Press. Marsh, D. and G. Langé 1999. Implementing Content and Language Integrated Learning. Jyväskylä: University of Jyväskylä/TIE/CLIL. Marsh, H. W. 1987. Students’ evaluations of university teaching: Research findings, methodological issues and directions for future research. International Journal of Educational Research 11(3): 257–387. Maynard, R. A. 2000. Whether a sociologist, economist, or simply a skilled evaluator. Evaluation 6(4): 471–80. McCormick, R., J. Bryner, P. Clift, M. James, and C. M. Brown (Eds.) 1982. Calling Education to Account. London: Heinemann/Open University. McKay, P. 2000. On ESL standards for school. Language Testing 17(2): 185–214. McKay, P., P. Coppari, A. Cumming, K. Graves, L. Lopriore and D. Short 2001a. Language standards: an international perspective. Part 1. TESOL Matters 11(2): 1–4.
McKay, P., P. Coppari, A. Cumming, K. Graves, L. Lopriore and D. Short. 2001b. Language standards: an international perspective. Part 2. TESOL Matters 11(3): 11–15.
McKay, P., C. Hudson and M. Sapuppo 1994. ESL Development: Language and Literacy in Schools Project. Canberra: National Languages and Literacy Institute of Australia.
McKay, V. and C. Treffgarne (Eds.) 1999. Evaluating Impact. DfID Education Papers Serial no. 35. London: Department for International Development.
McNamara, T. 2000. Language Testing. Oxford: Oxford University Press.
McPhail, 2001. A nominal group technique: a useful method for working with young people. British Educational Research Journal 27(2): 161–70.
Medgyes, P. 1994. The Non-Native Teacher. London: Macmillan.
Mitchell, R. 1989. Second language learning: Investigating the classroom context. System 17(2): 195–210.
Mitchell, R. 1990. Evaluation of second language teaching projects and programmes. Language, Culture and Curriculum 3(1): 3–17.
Mitchell, R. 1992. The ‘independent’ evaluation of bilingual primary education. In J. C. Alderson and A. Beretta (Eds.), pp. 100–37.
Mitchell, R., B. Parkinson and R. Johnstone 1981. The Foreign Language Classroom: an Observational Study. Stirling Educational Monographs No. 9. Department of Education, University of Stirling.
Mohan, B. 1986. Language and Content. Reading, MA: Addison-Wesley.
Mohan, B., C. Leung and C. Davison 2001. English as a Second Language in the Mainstream. Harlow: Pearson Education.
Moon, J. 2000. Children Learning English. Oxford: Macmillan Heinemann.
Morrison, K. 1998. Editorial. Evaluation and Research in Education 12(2): 59–60.
Munby, J. 1978. Communicative Syllabus Design. Cambridge: Cambridge University Press.
Murphy, D. F. 1995. Developing theory and practice in evaluation. In P. Rea-Dickins and A. F. Lwaitama (Eds.), pp. 10–28.
Murphy, D. F. 1996. The evaluator’s apprentices: learning to do evaluation. Evaluation 2(3): 321–38.
Murphy, D. F. and P. Rea-Dickins. 1999. Identifying stakeholders. In V. McKay and C. Treffgarne (Eds.), Evaluating Impact. Serial no. 35. London: Department for International Development, pp. 89–98.
NALDIC (National Association for Language Development in the Curriculum). http://www.naldic.org.uk/docs/research/assessment.cfm.
Nevo, D. 1986. Conceptualisations of educational evaluation. In E. House (Ed.), New Directions in Educational Evaluation. Lewes: Falmer Press.
NLLIA (National Languages and Literacy Institute of Australia) 1994. ESL Development: Language and Literacy in Schools: Volume 1. 2nd edition. Canberra: Department of Employment, Education and Training: National Languages and Literacy Institute of Australia.
Norris, N. 1990. Understanding Educational Evaluation. London: Kogan Page/CARE, University of East Anglia.
Norris, N. 1998. Curriculum evaluation revisited. Cambridge Journal of Education 28(2): 207–20.
North, B. 2000. The Development of a Common Framework Scale of Language Proficiency. New York: Peter Lang.
North, B. and G. Schneider. 1993. Scaling descriptors for language proficiency scales. Language Testing 15(2): 217–62.
Nunan, D. 1989. Understanding Language Classrooms: a Guide for Teacher-initiated Action. Hemel Hempstead: Prentice Hall International (UK).
Nunan, D. 1992a. Research Methods in Language Learning. Cambridge: Cambridge University Press.
Nunan, D. (Ed.) 1992b. Collaborative Language Learning and Teaching. Cambridge: Cambridge University Press.
Nunan, D. and G. Brindley 1986. A practical framework for learner-centred curriculum development. Paper presented at 20th Annual TESOL Convention, Anaheim, California, March.
Nuttall, D. 1991. Evaluating the effects of ELT. Dunford House Seminar 1991. London: British Council.
Opacic, S. 1994. The student learning experience in the mid-1990s. In S. Haselgrove (Ed.), pp. 157–68.
Paran, A., C. Furneaux and N. Sumner 2004. Computer-mediated communication in distance MA programmes: the student’s perspective. System 32(3): 337–56.
Paran, A. and E. Watts (Eds.) 2004. Story-telling in ELT. Whitstable: IATEFL.
Parkinson, B. 1983. The evaluation of a ‘communicative’ French course. In C. J. Brumfit (Ed.), Learning and Teaching Language for Communication: Applied Linguistic Perspectives. London: CILT, pp. 32–40.
Parlett, M. and D. Hamilton 1972. Evaluation as Illumination: a New Approach to the Study of Innovatory Programmes. Occasional Paper no. 9. Centre for Research in the Educational Sciences, Edinburgh.
Parton, N. 1994. The nature of social work under conditions of (post) modernity. Social Work and Social Sciences Review 5(2): 93–112.
Patton, M. Q. 1982. Practical Evaluation. Thousand Oaks: Sage.
Patton, M. Q. 1997. Utilisation-Focussed Evaluation, 3rd edition. Thousand Oaks: Sage.
Pawson, R. 2002. Evidence-based policy: in search of a method. Evaluation 8(2): 157–81.
Pawson, R. 2003. Evidence-based policy: the promise of realist synthesis. Evaluation 8(3): 340–58.
Pawson, R. and N. Tilley 1997. Realistic Evaluation. London: Sage.
Peirce, B. N. and G. Stewart 1997. The development of the Canadian language benchmarks. TESL Canada Journal/La Revue TESL du Canada 14(2): 17–31.
Pennington, M. 1995. The teacher change cycle. TESOL Quarterly 29(4): 705–30.
Pennington, M. 1998. Designing language programme evaluations for specific purposes. In E. Li and G. James (Eds.), pp. 199–213.
Pennington, M. C. and A. L. Young 1989. Approaches to faculty evaluation for ESL. TESOL Quarterly 23(4): 619–46.
Pennycook, A. 1994. The Cultural Politics of English as an International Language. Harlow: Longman.
Phillipson, R. 1992. Linguistic Imperialism. Oxford: Oxford University Press.
Pletinckx, J. and M. Segers 2001. Programme evaluation as an instrument for quality-assurance in a student-oriented educational system. Studies in Educational Evaluation 27: 355–72.
Poon, A. and T. Higginbottom 2000. NET-Working: Examples of Good Professional Practice within the NET Scheme. Hong Kong: Education Department, The Government of the Hong Kong Special Administrative Region.
Popham, W. J. 1975. Educational Evaluation. Englewood Cliffs, NJ: Prentice Hall.
Popkewitz, T. S. 1990. Whose future? Whose past? Notes on critical theory and methodology. In E. G. Guba (Ed.), pp. 46–66.
Potter, M. and P. Rea-Dickins 1994. Introduction to PRODESS. In R. Kiely, D. Murphy and P. Rea-Dickins (Eds.), Managing Change and Development in ELT and East and Central Europe. Manchester: The British Council, pp. 1–3.
Prabhu, N. S. 1987. Second Language Pedagogy. Oxford: Oxford University Press.
Provus, M. M. 1971. Discrepancy Evaluation. Berkeley, CA: Sage.
Qualifications and Curriculum Authority (QCA). http://www.qca.org.uk/.
QCA (Qualifications and Curriculum Authority) 2000. A Language in Common. London: QCA.
Quality Assurance Agency (QAA). http://www.qaa.ac.uk/.
Rea-Dickins, P. 1991. An Evaluation of the Project for the Improvement of Secondary English Teaching (PISET). London: Overseas Development Administration.
Rea-Dickins, P. 1992. An Evaluation of the Project for the Improvement of Secondary English Teaching (PISET), Thailand. London: ODA.
Rea-Dickins, P. 1994a. Evaluation and English language teaching. Language Teaching 27(2): 71–92.
Rea-Dickins, P. 1994b. The KaNgwane Project Evaluation. Overseas Development Administration Report no. 5921. London: Overseas Development Administration.
Rea-Dickins, P. 1997. So, why do we need relationships with stakeholders? Language Testing 14(3): 304–14.
Rea-Dickins, P. 1999. Evaluation tender document. Unpublished.
Rea-Dickins, P. 2001. Mirror, mirror on the wall: identifying processes of classroom assessment. Language Testing 18(4): 429–62.
Rea-Dickins, P. and F. Caley. 1994. Molteno Primary Education Literacy and Language Development Project. Mid-project Evaluation Report. London: British Council.
Rea-Dickins, P. and K. Germaine 1992. Evaluation. Oxford: Oxford University Press.
Rea-Dickins, P. and K. Germaine (Eds.) 1998. Managing Evaluation and Innovation in Language Teaching. Harlow: Longman.
Rea-Dickins, P. and A. F. Lwaitama (Eds.) 1995. Evaluation for Development in English Language Teaching: Review of ELT 3(3). Basingstoke: Modern English Publications/British Council.
Rea-Dickins, P., M. Reid and K. Karavas-Doukas 1996. Evaluation of PRINCE: October 1995–February 1996. Poland: British Council.
Rea-Dickins, P. (comp.) with M. Mgaya and K. Karavas-Doukas 1997. ELTSP Impact Assessment Study Vols I and II. London: Department for International Development and the British Council.
Reid, M. I. 1995a. Resource centres in Central and Eastern Europe. Main Report. Manchester: British Council/Thames Valley University Evaluation Unit.
Reid, M. I. 1995b. Resource centres evaluation. PRODESS News Issue 9. Manchester: The British Council/Thames Valley University Evaluation Unit.
Richardson, J. T. E. 1994. A British evaluation of the course evaluation questionnaire. Studies in Higher Education 19(1): 59–62.
Richardson, J. T. E. 1995. Using questionnaires to evaluate student learning. In G. Gibbs (Ed.), Improving Student Learning through Assessment and Evaluation. Oxford: Oxford Centre for Staff Development, pp. 499–524.
Rossi, P. H. and H. E. Freeman 1993. Evaluation: a Systematic Approach. Newbury Park, CA: Sage.
Rueda, R., C. Goldenberg and R. Gallimore 1992. Rating Instructional Conversations: a Guide. National Center for Research on Cultural Diversity and Second Language Learning. http://www.ncbe.gwu.edu/miscpubs/ncrcdsll/epr4.html.
Ryan, K. 1998. Advantages and challenges of using inclusive evaluation approaches in evaluation practice. American Journal of Evaluation 19(1): 101–22.
Sarangi, S. and C. Roberts 2002. Discoursal (mis)alignments in professional gatekeeping encounters. In C. Kramsch (Ed.), pp. 197–227.
Saville, N. and R. Hawkey 2004. The IELTS impact study: Investigating washback on teaching materials. In L. Cheng, Y. Watanabe and A. Curtis (Eds.), Washback in Language Testing. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 171–90.
Scherer, G. A. C. and M. Wertheimer 1964. A Psycholinguistic Experiment in Foreign Language Teaching. New York: McGraw-Hill.
Schick, A. 1971. From analysis to evaluation. Annals of the American Academy of Political and Social Science 394: 57–71.
Science Across the World. www.scienceacross.org.
Scott, C. and S. Erduran. 2004. Learning from international frameworks for assessment: EAL descriptors in Australia and the USA. Language Testing 21(3): 409–31.
Scott, M. 1999. Wordsmith Tools. Computer Software, Version 3.0. Oxford: Oxford University Press.
Scriven, M. S. 1967. The methodology of evaluation. In R. W. Tyler, R. M. Gagne and M. Scriven (Eds.), pp. 38–98.
Simon, H. A. 1976. Administrative Behavior. New York: Free Press.
Simon, H. A. 1977. The New Science of Management Decision. Englewood Cliffs, NJ: Prentice-Hall.
Simpson, M. and J. Tuson 1995. Using Observations in Small-scale Research. Edinburgh: Scottish Council for Research in Education.
Skehan, P. 1996. A framework for the implementation of task-based instruction. Applied Linguistics 17(1): 38–62.
Skehan, P. 1998. A Cognitive Approach to Language Learning. Oxford: Oxford University Press.
Skehan, P. and P. Foster 1997. Task type and task processing conditions as influences on foreign language performance. Language Teaching Research 1(3): 185–211.
Slimani, A. 1992. Evaluation of classroom interaction. In J. C. Alderson and A. Beretta (Eds.), pp. 197–220.
Smith, P. D. 1970. A Comparison of the Cognitive and Audio-lingual Approaches to Foreign Language Instruction: the Pennsylvania Foreign Language Project. Philadelphia: The Center for Curriculum Development, Inc.
South, H. 2003. Proposal to The Paul Hamlyn Foundation. Unpublished.
South, H., C. Leung, P. Rea-Dickins, C. Scott and S. Erduran. 2004. Evaluation of Existing National and State Frameworks for the Assessment of Learners with English as an Additional Language. Draft report.
Spada, N. 1987. Relationships between instructional differences and learning outcomes: a process-product study of communicative language teaching. Applied Linguistics 8(2): 137–72.
Stake, R. E. 1967. The countenance of educational evaluation. Teachers’ College Record 68(7): 523–40.
Stake, R. 1995. The Art of Case Study Research. Thousand Oaks: Sage.
Stenhouse, L. 1975. An Introduction to Curriculum Research and Development. London: Heinemann.
Storch, N. 1998. A classroom-based study: Insights from a collaborative text reconstruction task. English Language Teaching Journal 52(4): 291–308.
Storch, N. 2001. How collaborative is pair work? ESL tertiary students composing in pairs. Language Teaching Research 5(1): 29–53.
Stronach, I. 1986. Practical evaluation. In D. Hopkins (Ed.), Evaluating TVEI: Some Methodological Issues. Cambridge: Cambridge Institute of Education/MSC.
Stufflebeam, D. L., W. J. Foley, W. J. Gephart, L. R. Hammond, H. O. Merriman and M. M. Provus 1971. Educational Evaluation and Decision-making in Education. Ithaca, NY: Peacock Press.
Stufflebeam, D. L. and W. J. Webster 1980. An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis 2(3): 89–12.
Swain, M. 1995. Three functions of output in second language learning. In G. Cook and B. Seidlhofer (Eds.), Principles and Practice in the Study of Language. Oxford: Oxford University Press, pp. 125–44.
Swain, M. and S. Lapkin 1982. Evaluating Bilingual Education: A Canadian Case Study. Clevedon: Multilingual Matters.
Swain, M. and S. Lapkin 1995. Problems in output and the cognitive processes they generate: a step towards second language learning. Applied Linguistics 16: 370–91.
Swain, M. and S. Lapkin 1998. Interaction and second language learning: Two adolescent French immersion students working together. Modern Language Journal 83: 320–37.
Swain, M. and S. Lapkin 2000. Task-based second language learning: the use of the first language. Language Teaching Research 4(3): 251–74.
Swales, J. 1989. Service English programme design and opportunity cost. In R. K. Johnson (Ed.), pp. 79–90.
Taba, H. 1962. Curriculum Development: Theory and Practice. New York: Harcourt, Brace and World, Inc.
Taut, S. and D. Brauns 2003. Resistance to evaluation: A psychological perspective. Evaluation 9(3): 247–64.
Tawney, D. A. (Ed.) 1976. Curriculum Evaluation Today: Trends and Implications. London: Macmillan.
Taylor, F. W. 1947. Scientific Management. New York: Harper and Row.
TESOL (Teachers of English to Speakers of Other Languages) 1997. ESL Standards for pre-K-12. Alexandria, VA: Teachers of English to Speakers of Other Languages.
The British Council Accreditation Scheme. http://www.britcoun.org/accreditation.
The Standards for Educational and Psychological Testing. http://www.apa.org/science/standards.html.
Thoenig, J.-C. 2000. Evaluation as usable knowledge for public management reforms. Evaluation 6(2): 217–29.
Thomas, H. 2003. The arguments for and the meaning of quality. English Language Teaching Journal 57(3): 234–41.
Thompson, J. 1993. Language learning and computers: a survey of use in UK higher education. Hull: CTI Centre for Modern Languages.
Timmis, S. 2004. Evaluation of the TESOL Blackboard. University of Bristol: Institute for Learning and Research Technology.
Timmis, S., R. O’Leary, E. Weedon, C. Harrison and K. Martin 2004. Different shoes, same footprints? A cross-disciplinary evaluation of students’ online learning experiences: preliminary findings from the SOLE project. Journal of Interactive Media in Education (Designing and Developing for the Disciplines Special Issue) 13. www-jime.open.ac.uk/2004/13.
Tomlinson, B. 1990. Managing change in Indonesian high schools. English Language Teaching Journal 44(1): 25–37.
Tomlinson, B. (Ed.) 2003. Developing Materials for Language Teaching. London and New York: Continuum.
Tribble, C. 2000. Designing evaluation into educational change processes. English Language Teaching Journal 54(4): 319–27.
Trinity College London. http://www.trinitycollege.co.uk/.
Tyler, R. W. 1950. Basic Principles of Curriculum Development. Chicago: University of Chicago Press.
Tyler, R. W., R. M. Gagne and M. Scriven 1967. Perspectives on Curriculum Evaluation. Chicago: Rand McNally.
Urwick, L. 1952. Notes on the Theory of Organisation. American Management Association.
van de Ven, A. and A. Delbecq 1972. The nominal group as a research tool for exploring health studies. American Journal of Public Health 62(4): 337–42.
van der Meer, F.-B. 1999. Evaluation and the social construction of impacts. Evaluation 5(4): 387–406.
van Lier, L. 2004. The ecology and semiotics of language learning: a sociocultural perspective. Educational Linguistics 3.
Van Patten, B. and T. Cadierno 1993. Explicit instruction and input processing. Studies in Second Language Acquisition 15(2).
Vygotsky, L. 1978. Mind in Society. Cambridge, MA: MIT Press.
Vygotsky, L. 1987. The Collected Works of L. S. Vygotsky. Volume 1: Thinking and Speaking. New York: Plenum Press.
Wachtel, H. K. 1998. Student evaluation of college teaching effectiveness. Assessment and Evaluation in Higher Education 23(2): 191–207.
Wallace, M. J. 1991. Training Foreign Language Teachers. Cambridge: Cambridge University Press.
Wallace, M. 1998. Action Research for Language Teachers. Cambridge: Cambridge University Press.
Warschauer, M. 1997. Computer-mediated collaborative learning: Theory and practice. The Modern Language Journal 81: 470–81.
Watt, L. E. and D. M. Lake 2004. Benchmarking Adult Rates of Second Language Acquisition and Integration: How Long and How Fast? Final Report. Project funded through Alberta Learning – Language Training Programs, and Citizenship and Immigration Canada. Available online at http://language.ca/new.html.
Weir, C. 2004. Language Testing and Validation. Basingstoke: Palgrave Macmillan.
Weir, C. and J. Roberts. 1991. Evaluating a teacher training project in difficult circumstances. In S. Anivan (Ed.), Issues in Language Programme Evaluation in the 1990’s. Singapore: SEAMEO Regional Language Centre, pp. 91–109.
Weir, C. and J. Roberts 1994. Evaluation in ELT. Oxford: Blackwell.
Weiss, C. H. 1986. Towards the future of stakeholder approaches in evaluation. In E. R. House (Ed.), pp. 186–98.
Widdowson, H. 1978. Teaching Language as Communication. Oxford: Oxford University Press.
Wilcox, B. 1992. Time-constrained Evaluation – A Practical Approach for LEAs and Schools. London: Routledge.
Wilkins, D. A. 1976. Notional Syllabuses. Oxford: Oxford University Press.
Wilson, V. 1997. Focus groups: a useful qualitative method for educational research? British Educational Research Journal 23: 209–24.
Wolf, R. L. 1975. Trial by jury: a new evaluation method – 1: the process. Phi Delta Kappa 57(3): 185–7.
Worthen, B. R. and J. R. Sanders 1973. Educational Evaluation: Theory and Practice. Worthington, OH: C. A. Jones Publishing Company.
Index
accountability 7, 63, 141, 152, 190, 226 Action Research 35, 246 adversary approach 29 advocacy 122, 132 agents 203 Alderson, J. 47, 66 analysis of variance (ANOVA) 197 Aspinwall, K. et al. 203 assessment benchmarks see also bandscales 178–99 criteria 143–4 frameworks see assessment benchmarks of performance 83–4, 143–4 standards see assessment benchmarks assimilation stage 40 Attitude/Motivation Test Battery (AMTB) 146 audience 227 audit 11, 15 Australian Migrant English Project (AMEP) 47 autocratic evaluation 48, 50 bandscales 178–99 Centre for Canadian Language Benchmarks (CCLB) 180–2, 190–7 EAL/ESL assessment frameworks 183–90 National Languages and Literacy Institute of Australia (NLLIA) 180–2 Qualifications and Curriculum Authority, England and Wales (QCA) 180–2 South Australia Curriculum, Standards and Accountability Task Force 180–2 TESOL ESL Standards for pre-K12 students 183–90 Bangalore evaluation (CTP) 13, 24, 33, 62, 91, 198 baseline evaluation 57–9, 148, 257
benchmarks 12 beneficiaries 203 Beretta, A. 26, 47 bias 122 bilingual education 149 Bilingual Education Project, Scotland (BEP) 47, 227 Blue, G. 53 Brauns, D. 40 Brazilian ESP Project 31, 204 British Council, The 27, 31, 51, 64, 282 see also PRODESS bureaucratic evaluation 48, 50 Burstall, C. 22 case study 76, 97, 117, 122, 132, 137, 253 CATWOE (SSM) 30 Checkland, J. 30 CIPP 28 classroom interaction 57 classroom language use 86, 129–30 classroom observation bandscales 86, 89–94 classroom observation fieldnotes 94–5 clients 186, 203 collaborative evaluation 202 collaborators 203 Colorado Project 24 COLT 60 Common European Framework 117, 136 communication skills 90 Communicational Teaching Project (CTP) see Bangalore evaluation Communicative Language Teaching 119, 139 communities of practice 201 competitors 203 compliance with mandates 50–4, 150, 246, 255–6 concordance 54 confidential annex 244 consent 244 constructivism 14, 40–4, 57, 263 content analysis 170–1
contextual constraints 240–4 Council of Europe 136–8 countenance model 27–8 coursebook evaluation 46, 250–2 Critical Discourse Analysis (CDA) 70, 71 Cronbach, L. J. 37, 46 curriculum betterment 34–5 curriculum research 33–4 data 14 coding 238 management 237, 264 types 249 database, ACCESS 234 decision-making approaches 23–26 declining standards of English 78, 119–20 Delphi Technique 270 democratic evaluation 48 Department for International Development (DfID) 63–5 diaries 250 dictogloss 63 discovery stage 40 discrepancy model 29 document analysis 234, 249, 270 Donno, S. 54 dynamic assessment 144 EAQUALS 51, 228 ecological approaches 42 effectiveness measures 10 electronic data 269–71 emergent realism 45 engineering models 22, 49 English for Academic Purposes (EAP) 53, 150–77 English as an Additional Language (EAL) 178–99 English Language Teaching Contacts Scheme (ELTECS) 69 English Language Teaching Secondary Project, Tanzania (ELTSP) 213, 227 English as medium of instruction (EMI) 83, 98 ethics guidelines 244–5, 283–4 issues 6, 81, 244–5, 248–9, 260–1 ethnographic evaluation 13, 71, 121, 123–33, 234
evaluation apprenticeship 237 books on 275–8 bulletin boards 284–8 commissioning organisations 227, 231 construct 7, 81, 103–4, 121, 124, 141, 151, 157–8, 256 criteria 13–4, 88, 266 data 15 design 104–11, 124–7, 142–5, 154–6, 266 email lists 284–5 expertise 216 focal points 248 induction and training 97 of innovation 48, 230, 263–4 journals 278–82 of learning materials 46, 103, 250–1; see also coursebook evaluation planning 80, 96, 214–6, 225 policy 152–4 and policy development 49, 131, 147 procedures 83–4; see also evaluation design professional associations 282–3 for program development 256 purpose 8–10, 102, 162, 255–6, 270 questions 231–3, 248, 263, 269; see also evaluation design reporting 153, 243, 268 research 6, 39, 60, 115, 198, 224 sampling 97, 106–7, 140, 197–8, 226, 242–3 standards 30–2 teacher’s role 176 by teachers 57–8, 246–54 of teachers 18–9, 77–98, 152, 256 of teaching 161–77 timing 96, 105, 131, 140 use 15–6, 37–40, 152–4, 187–8, 201, 255–6, 258 evaluator loyalty 82, 191, 215, 244 evidence-based practice 9, 49 existing data sets 213, 229; see also management information systems Expatriate English Language Teacher Scheme (EELTS) 75, 108, 119–34 experimental design 23–4, 124–6, 132, 264 external evaluators 131–2, 141, 191, 215
Ferguson, G. 54 field see user groups field notes 241–2, 249 flow 42 form-focused instruction 63 formative evaluation 58, 59, 264 feedback 63 Fourth Generation evaluation 9, 41
gender 214 Gitlin, A. 18 globalisation 43 goal-free evaluation 27 grammar input strategies 63 group discussion see structured discussion, Nominal Group Technique Grundy, P. 53 Guba, E. 41, 203
Hamilton, D. 32–3 Hargreaves, D. 10 Henry, G. T. 45 Holliday, A. 70 Hong Kong 75, 119–34 Hong Kong Attainment Test (HKAT) 124–5 identity politics 43, 175 illuminative evaluation 32–3 immediate project objectives 65, 266–7 impact 15–6, 133–4, 198, 213–4, 230 Indonesian University language centre evaluation 66–8 Information and Communication Technology (ICT) 99–117, 141, 209–13, 269–71 innovation management 48, 124, 226, 230, 251–2 insider–outsider dichotomy 204 insider’s dilemma 47 inspection 14, 15, 50 institutional self-assessment 50 internal evaluators 131, 150–187, 201 interpersonal factors 133, 174 interpretation 198–9 interpretive turn 43 interviews 192, 249–50, 252, 266 Ireland 75, 136–149
Jacobs, C. 262 Jet-in jet-out expert (JIJOE) 64 Joint Committee 31, 37 joint evaluations 213 Julnes, G. 45
Kemmis, S. 50 Kiely, R. et al. 237
Language Across the Curriculum (LAC) 99–117 Language Centre evaluation 66–8 language learning 34, 59–60, 173 language tests 57, 85–7, 124–5, 143–5, 148, 257 Lantolf, J. 42, 144 large-scale evaluations 223, 225–45 Lawrence, L. 62 learning activities 110–11, 113, 148, 252, 269 from evaluation 12, 116, 134, 153, 160, 246, 289–90 through evaluation 32–6, 152–3, 183–4, 247 to do evaluation 7, 58, 236–7, 247–50, 255–61, 289–90 Learning in English for Academic Purposes, South Africa (LEAP) 261–5 Legutke, M. 34 Lincoln, Y. 41, 43, 203 locus of control 214 logframe see logical framework logical framework 63–6, 77, 206, 207–9
MacDonald, B. 48, 82, 215 Mackay, R. 66–8 Management Information Systems 68, 257 mandates see compliance with mandates Mark, M. M. 45 Marsh, H. W. 162–4 measurement of outcomes 57, 123, 198, 235; see also language tests media accounts of evaluation 120, 133 Mitchell, R. 60–1 mixed method evaluations 121, 132, 263–4 molar relationships 45, 269
Molteno Project, South Africa & Namibia 227 multi-site evaluation 99–101 Murphy, D. F. 219 native-speaker language teachers 119–34, 140 Nominal Group Technique 155–6, 210, 220, 252, 269–70; see also structured discussion Norris, N. 49, 50 Nuttall, D. 64–5, 230 observation classroom 86, 88–96, 249–50 ethnographic 123, 157, 235 in resource centres 266 systematic 234 Open Learning Environment 209 see also Virtual Learning Environments Parlett, M. 32–3 participatory evaluation 66, 201, 217, 243, 264 particularisations 33 partnership 82, 217; see also participatory evaluation, collaboration Patton, M. Q. 31, 38, 257 Pawson, R. 39, 44 peer review 50 Pennington, M. 42 performance indicators 12–14, 50, 257 periodic review 15 pilot studies 84–6 EELTS 119–34 PMLP 136–49 Poland see PRINCE political dimensions 46–50, 82, 227 Popham, W. J. 27 post-colonial critique 43, 70 postmodernism 43, 70 Potter, M. 68 power relations 202–5, 217, 290 Primary French evaluation 22–3 Primary Modern Language Project (PMLP) 75, 130, 136–49, 191, 227–8 PRINCE (Poland) 206–8, 228, 258–60 process-oriented approaches 26–30 process-product studies 60
PRODESS 4, 31, 58, 66, 68–70, 228, 277 PRODESS Evaluation Guidelines 68, 237 professional development 246; see also learning from evaluation, learning through evaluation program development 26–30 fairness 26 fidelity 24 worth 26–30 psychometric approaches 22 putative hypotheses 211 Qualifications and Curriculum Authority (QCA) 51, 228 qualitative data analysis software ETHNOGRAPH 234 NUDIST 234 WINMAX (MAXqda) 234–5 WordSmith Tools 235 quality 11–12, 214, 248, 256, 267 assurance 11–12, 150–77, 226 Quality Assurance Agency, UK (QAA) 228, 283 questionnaires 105, 125–6, 143, 155, 161–7, 238–9, 249–50, 259 Rea-Dickins, P. 68, 69, 219 readiness for evaluation 38, 40, 248 realist evaluation 44–6, 59, 116 regulators 203 resistance to evaluation 39–40, 167, 271 resource centres evaluation 232–3, 265–9 roles in evaluation 202 Romanian ESP Project (PROSPER) 69, 228 Sanders, J. R. 27 Science Across Europe (SAE) 99–117, 191, 204, 242 scientific management 20 Scott, M. 66 Second Language Acquisition 8, 13, 60 self-directed learning 113 self-evaluation 105, 234, 247 Skehan, P. 62 Smyth, J. 18 socio-cultural learning 42 soft-systems methodology (SSM) 29–30, 59, 70, 163
spreadsheet, EXCEL 234 Stake, R. 27–8, 33, 137, 147–8, 163, 250 stakeholder classifications 203 stakeholders 11–12, 132, 151, 173–5, 191, 226, 231 stakeholding 200–20 static characteristics 27 statistical software FACETS 235 SPSS 235 Stenhouse, L. 33–6, 106, 200 Stirling evaluation 60 structured discussion 168–73, 263, 269; see also Nominal Group Technique Student Evaluation of Educational Quality (SEEQ) 162–4 Student On-line Learning Experience (SOLE) 99 student satisfaction 10, 162, 166, 257 Stufflebeam, D. 31 summative evaluation 152, 155 suppliers 203 surveys 97, 104–7, 183 electronic 111, 192–3 Taba, H. 20–2 task-based learning (TBL) 62–3 Taut, S. 40 Teacher development 158–60, 186–7, 246–8, 255 teacher education programs 51–4, 206–9, 252–4 teacher intention 62 teacher-led evaluation 246–54
teacher profiles 111 teachers’ English language competences 77–98 technical rationality 18 Terms of Reference (TOR) 63, 78–9, 102, 206, 231 Third Way 8 Thomas, Helen 11 Thomas, Howard 34 Tilley, N. 44 tissue rejection 48 triangulation 41 Trinity College London (TCL) Cert TESOL 51–2 Tyler, R. 20, 200 unanticipated impacts 21 user groups 152–3, 191 utilisation-focused evaluation 38, 201, 289; see also evaluation, use of validity 96–8, 209, 237–8, 290 threats to 229 value for money (VFM) 11, 122, 148, 225–6, 257 victims 203–4 Virtual Learning Environments 99, 269–71 Weiss, C. H. 12, 201 wider project objectives 64–5, 230 Worthen, B. R. 27
Zambian ESL syllabus evaluation 61–2, 91