Developing Organizational Simulations: A Guide for Practitioners and Students
Dimensions are made concrete by the specification of relevant overt behaviors and thus are easier to observe than intelligence or personality characteristics. Simulation exercises mimic the work environment without replicating it completely. Therefore, simulation exercises are not pure samples of work behavior. A good assessment program often incorporates multiple measurement techniques in order to provide a full picture of the individual being assessed. Different combinations of techniques may be appropriate for different purposes and positions. To compare simulation tests to other major assessment techniques, it is first helpful to review the major features of alternative techniques.
COMPARISONS OF MAJOR ASSESSMENT TECHNIQUES

Paper-and-Pencil Intelligence Tests and Personality Inventories

Paper-and-pencil intelligence and personality tests have been used for nearly 100 years in a variety of organizational settings, including government, military, and private corporations. Intelligence tests are designed to measure cognitive ability—either a general mental ability (commonly called IQ) or specific abilities such as verbal, numerical, or spatial visualization ability. Personality inventories are designed to measure various characteristics such as conscientiousness, extraversion, agreeableness, or emotional stability, which are presumed to be stable within individuals. Intelligence tests and personality inventories are quite difficult and time-consuming to develop. Fortunately, a number of well-developed tests of aptitude (e.g., the Wonderlic Personnel Test, 1992) and personality characteristics (e.g., the NEO PI-R, Costa & McCrae, 1992) can be purchased relatively cheaply, saving the development time. These tests and inventories have a number of benefits for organizations, such as ease and speed of administration and scoring, ability to administer to groups, and established reliability and validity evidence. Moreover, both intelligence and personality (especially the trait of conscientiousness) have been shown to predict the performance of employees in a variety of jobs and at a variety of hierarchical levels within the organization (Barrick & Mount, 1991; Hunter & Hunter, 1984; Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998; Tett, Jackson, & Rothstein, 1991). Finally, these types of assessment tools, especially intelligence tests, also offer considerable financial returns on investments (Hunter & Hunter, 1984; Schmidt & Hunter, 1998). However, these types of tests and inventories also have a number of drawbacks, including:

• They generally cannot be customized to meet the organization's needs.
• Positive responses on paper-and-pencil personality inventories can be faked, and thus these inventories must be used cautiously in selection (Douglas, McDaniel, & Snell, 1996; Zickar & Robie, 1999).
• Intelligence tests often result in adverse impact against some ethnic groups (Hunter & Hunter, 1984; Sackett & Ellingson, 1997), and there is some evidence, albeit limited, that personality inventories may create adverse impact (Fallon, Kudisch, & Fortunato, 2000; Hough, 1998; Ryan, Ployhart, & Friedel, 1998).
• Applicants may dislike paper-and-pencil tests (Thornton & Byham, 1982), and applicants who are dissatisfied with an organization's selection procedures may be less likely to accept an employment offer (Rynes, 1991), may perceive the organization as less attractive, and may be more likely to complain about or legally challenge the selection system (Arvey & Sackett, 1993). There is also evidence that applicant reactions can adversely affect test-taker performance (Chan, Schmitt, Sacco, & DeShon, 1998).
• Some paper-and-pencil tests may not be appropriate for professional development if they measure stable traits that cannot be changed easily.
Interviews

Interviews are undoubtedly the most commonly used assessment method in organizations. Interviews typically have very high face validity for applicants (Thornton & Byham, 1982). That is, applicants find interviews an acceptable way of selecting people for employment or promotions. Interviews may range from very unstructured to highly structured. Unstructured interviews are unstandardized and typically are not valid or reliable. In contrast, structured interviews can exhibit considerable reliability and validity (McDaniel, Whetzel, Schmidt, & Maurer, 1994; Taylor & Small, 2000), and they may offer incremental validity over intelligence tests (Schmidt & Hunter, 1998). Structured interviews may be situational or behavioral background interviews. Situational interviews typically involve questions that present a hypothetical situation and then ask the applicant to describe how he or she would react (e.g., "If you were faced with an angry customer, how would you handle the situation?"). Situational interviews assess the person's statement of intentions to behave, but they do not require the person to actually demonstrate the behavior. Behavioral background interviews involve asking the applicant to describe how he or she has behaved in past situations (e.g., "Tell me about a time when you were faced with an angry customer; how did you handle the situation?"). Structured interviews can be an excellent addition to any assessment program; however, they do have a number of limitations, including:
• Well-developed interview questions and evaluation guidelines take time and expertise to create.
• Background interviews rely on self-reported information, and positive responses may be faked.
• In a situational interview, applicants may be able to articulate what should be done and what they intend to do, but they may lack the skills or willingness to actually do it when placed in an actual or simulated work setting.
• Just as some people may not be able to describe their own personalities accurately, applicants may not be accurate judges of how well they handled past situations in a behavioral background interview.
• Some skills and dimensions, such as leadership and problem solving, are difficult to assess in an interview.
• Because they rely on human judgment, interviews can be biased by interviewer rating errors (e.g., primacy-recency effects, halo error) and interviewee characteristics (Harris, 1989).
• Despite attempts to overcome problems with the interview, users may react negatively to strategies for increasing the structure and standardization of the interview (Campion, Palmer, & Campion, 1997).

Another form of interviewing is to include a simulation activity in a situational or behavioral background interview. For example, after asking the candidate to describe his or her relevant background, examples of behaviors demonstrated in the past, and intentions for handling situations that might be encountered in the future, the interviewer could shift into a one-on-one role-playing activity. At this point the interviewer could say, "Picture yourself in the job setting, and pretend I am a customer in your office. I have made a complaint that one of your staff members was rude to me. Speak to me as you would in the job situation. What would you say to me if I said, 'Miss Franklin said that I should have read the directions more carefully before coming to her'?" At this point in the interview, the candidate must actually demonstrate the behavior of providing good customer service, and not just describe such prior experiences or state what he or she would intend to do in such future situations.
Work Samples

Work samples require an applicant to complete a small portion of the work he or she would actually perform on the job (Thornton & Byham, 1982). For example, an applicant for a secretarial position might be asked to type a memo, or an applicant for a teaching position might be asked to deliver a lecture to a class. The product of the sample of work or the process of producing the work can then be evaluated. Work samples tend to be the most face valid and acceptable assessment technique, because they are clearly job related (Rynes & Connerley, 1993). Also, work samples predict performance above that which is predicted by cognitive ability tests alone (Schmidt & Hunter, 1998). However, work samples are not appropriate for certain types of jobs. For example, it may be difficult to have a true work sample test for a management or executive position, because these positions are extremely varied and multidimensional. It would be nearly impossible to obtain a single sample of work that would capture a large portion of the important aspects of this type of job. Additionally, work samples are only appropriate when you expect the person to already have the skills needed to do the job (Thornton, 1992). Lastly, a given work sample is suitable only for a specific job. Thus, it would be inappropriate to use a work sample in a promotion setting when none of the applicants have relevant experience, or in a setting where people must be evaluated for a broad range of jobs.

WHY USE SIMULATION EXERCISES?

As we show in later chapters, it is somewhat more difficult to develop and implement simulation exercises than it is to simply purchase published, standardized tests or questionnaires. Given the time and expense needed to develop reliable and valid simulations and then to administer and score them, many professionals may question the need to add them to their existing battery of assessment, selection, training, and development tools. However, simulation exercises offer many advantages, such as:

• Some dimensions/competencies, such as communication, problem solving, interpersonal relations, and leadership, are difficult to assess without seeing complex, overt behavior displayed in simulations.
• Simulations typically allow for the simultaneous measurement of multiple behavioral dimensions (that is, they may have more "band width," Thornton & Byham, 1982).
• Well-developed simulations have good psychometric properties (Thornton & Byham, 1982; Thornton & Rupp, 2003).
• Simulations are perceived to be more job-relevant than paper-and-pencil tests and thus may be more acceptable to test takers (Rynes & Connerley, 1993; Thornton, 1992).
• Simulations are harder to fake than personality tests.
• Simulations typically show less adverse impact against legally protected groups than paper-and-pencil tests of cognitive abilities (Hoffman & Thornton, 1997). Lesser adverse impact may result from the use of simulations because they involve two features Pulakos and Schmitt (1996) identified as reducing subgroup differences, namely: they are an alternative measurement method (that is, they are not paper-and-pencil methods), and they provide measures of different characteristics (that is, they measure something beyond cognitive abilities).
• Well-developed simulation exercises are valid for predicting future job performance and promotion (Bray & Grant, 1966; Gaugler, Rosenthal, Bentson, & Thornton, 1987; Tziner, Ronen, & Hacohen, 1993).
• Simulation exercises assess both declarative knowledge (i.e., the ability to state how a task should be done, such as being able to list the steps to facilitate a meeting) and procedural knowledge (i.e., the ability to actually perform a task, such as demonstrating the behaviors to actually facilitate the meeting).
• Simulations are versatile and can be used for other purposes beyond evaluation, such as for training, development, and research (Thornton & Cleveland, 1990).
• Simulations can be tailored to meet the individual organization's needs.
Thus, the question is not whether to use a paper-and-pencil test or a simulation exercise. Rather, the question is: In what circumstances can simulation exercises be used to augment an assessment process, and how can different forms of assessment be integrated? The following two cases illustrate how simulations were used to measure attributes that were of interest to the organization but were not measured adequately with other methods.

EXAMPLE 1: SIMULATION EXERCISES FOR SELECTION

A medium-sized organization wished to enhance its selection system for hiring sales and customer service representatives. The organization already had a good selection system in place that included multiple structured interviews, a personality test, and a thorough reference-checking procedure. The existing selection system resulted in hires who fit the culture of the organization but who did not have optimal sales skills. The historical culture of the organization placed high value on taking time to be friendly, getting to know customers personally, and not pressuring customers to purchase products and services. As the company had grown, it had become more important for sales employees to be willing and able to initiate new sales, accomplish cross-selling, and quickly close sales. Thus, the organization was looking for a way to assess individuals who could balance a strong customer-service focus with the need to increase sales. A job analysis was conducted, and several characteristics emerged as important to the sales and customer service positions, including problem solving, rapid decision making, and persuasion.
Several exercises simulating telephone and face-to-face sales situations, ranging from simple role-plays to complex problem analyses, were developed to assess these characteristics. Each exercise assessed a different set of dimensions, though all of the exercises assessed sales and persuasion ability. By augmenting their existing hiring process with the simulation exercises, the organization was able to hire employees who fit the culture and also had good selling skills. Applicants also felt that they had a fair chance to show their selling skills and were able to learn about the complex job expectations.

EXAMPLE 2: SIMULATION EXERCISES FOR PROFESSIONAL DEVELOPMENT

A city government wished to enhance its supervisory training program. Although the city already had a formal and extensive training program, the human resources department determined that supervisors still seemed to have difficulty in three areas: coaching employees, managing conflict, and handling disciplinary issues. The human resources director requested that an assessment center consisting of multiple simulation exercises be developed to assess these skills and diagnose supervisors' strengths and needs for development. In this project, the challenge was to construct exercises that would be relevant to supervisors from a variety of functional areas, including public safety, utilities, maintenance, and administration. To address this need, several interviews were conducted to better define these dimensions and to collect critical incidents that could be used in constructing the exercises. From the interviews, several common themes emerged that were integrated into the exercises. For example, the supervisors who were interviewed reported similar types of people who were most difficult to deal with (i.e., individuals who were manipulative, did not put forth effort, and would not take responsibility for their actions). The final exercises consisted of two role-play scenarios and one case study analysis that assessed the three dimensions of coaching employees, managing conflict, and handling discipline issues. The assessment was set up and administered as a developmental center for first-level supervisors. After being assessed, supervisors received individualized feedback on their performance along with suggestions for receiving additional training relevant to their developmental needs. In follow-up interviews, the supervisors commented that the exercises provided useful feedback that the existing programs did not. They also felt that they had clearer direction in what actions to take to improve performance even further.

USES OF SIMULATION EXERCISES

These two examples demonstrate the two most common uses of simulation exercises: selection and development. When used for selection (as well as promotion),
a simulation provides the organization an opportunity to see the candidates demonstrate skills and competencies before decisions are made. When an organization uses two or more simulation exercises, the procedure is called an assessment center. An assessment center is a method that uses multiple assessors to observe multiple exercises to evaluate job-related skills. Assessments of performance in the simulations, in conjunction with information from resumes, interviews, and tests, provide a wide variety of information relevant to future job performance.

When used for professional development, simulation exercises are useful for diagnosing strengths and weaknesses among current employees. This information can then form the basis for individual developmental actions on or off the job, or for broader organizational development. Simulations also give the student or trainee a chance to apply concepts learned in lectures or readings. They also provide opportunities for practice and feedback on skills being learned. If a simulation is to be used for training purposes, the practitioner will want to consider whether the participant will benefit from experiencing a simple or complex simulation, and then match the developmental readiness of the trainee with the developmental complexity of the simulation. For example, a relatively unsophisticated supervisor may be able to learn from a simple one-on-one role-play exercise but be completely lost in a complex organizational game. On the other hand, an experienced middle manager may find that a role-play exercise is too simple to provide a growth experience, and may need to be challenged in a complex game. These considerations will help the practitioner decide what type and complexity of simulation exercise to build for training purposes.

Simulations are also used for research purposes. Following the classic experimental paradigm, the researcher can systematically vary one bit of information within the scenario of the simulation and test the effect on the subsequent behavior of the respondent. For example, in the in-basket, a memo from the CEO can ask the respondent whether a given employee should be recommended for a special, highly sought-after international assignment. Throughout the in-basket there can be a variety of information supporting and not supporting this assignment. The experimental manipulation can be the gender of the staff person in question. This simulation provides a subtle way of studying sex discrimination. Complex games can be used to study the decision-making processes of individuals and teams (Thornton & Cleveland, 1990).

In our experience, it is very difficult to use a given simulation exercise for several purposes at the same time. Many features of the exercise itself, and many aspects of the climate surrounding the assessment program, differ markedly from selection and promotional programs to developmental and training programs. For example, whereas a selection or promotional program
must emphasize strict, formal, prescribed, and even somewhat stressful conditions to yield fair and accurate results, a developmental or training program emphasizes open, flexible, comfortable conditions where participants can experiment, practice, and learn new skills. More details on the differences between selection and development programs are described in Thornton (1992). At the very least, the organization has professional, ethical, and legal obligations to be very clear about how the results will be used, and then stick to those pronouncements. Assessment programs using simulations can fail if the organization misuses the results, as described in the following example.

Why Simulation Exercises Fail

Fatal Error 1: Misuse of Results

A manufacturing company developed an assessment program using simulations for the purpose of developing supervisory skills among lower- and middle-level managers. As stated in the program announcements and at the orientation for the program, the information was to be used only for developmental purposes, that is, to write individualized development plans, guide the supervisors and managers into training programs, identify special assignments to close experience gaps, and stimulate self-development activities. By the end of the program's first year, nearly 50 supervisors and managers had been assessed and given feedback. At this time, the organization experienced a serious loss of business and had to lay off large numbers of manufacturing employees and supervisors. Higher-level management used the assessment information to identify the weaker supervisors, who were then laid off. The simulation became known as "the hatchet job," and no one would participate in this "development program."

HOW TO USE THIS BOOK

Because there are unique ideas presented in each chapter, we believe everyone will benefit from reading this entire book. Realistically, we know many people are short of time and may be interested in only certain concepts. The hurried reader may want to focus on certain chapters. We strongly urge everyone to read Chapters 2 and 3, as they provide a conceptual model for constructing simulation exercises as well as specific steps to follow when developing them. If the reader is familiar with job analysis and competency modeling procedures, he or she may wish to skim Chapter 4, which describes analyses used to determine the dimensions to be assessed and the content making up the simulations. In Section II, the reader may then wish to focus only on those chapters describing simulations of interest. Keep in mind that
some exercises build on one another; for example, a presentation (chap. 6) may build on a case study (chap. 5). Finally, the reader will almost surely want to study Chapter 13, Assessor Training, because the observation and rating process is central to all simulations. Chapter 14 contains information on how to meet professional and legal standards for developing simulation exercises. Chapter 15 summarizes key points. The appendices provide a variety of illustrative material that simulation developers may find useful, including: a list of dimensions and definitions, organization charts, lists of questions that can be used after the administration of exercises to obtain input from participants, and examples of several types of exercises described in this book.
Development of Simulations: A Model and Specific Steps
In this chapter we provide a model for the development of simulation exercises and a set of specific steps to follow in their development. The model and steps are designed to help the simulation developer attain the ultimate goal of producing a realistic testing situation that elicits behavior related to dimensions of job and organizational success. The model gives an overall conceptual framework for the exercise construction process. It is quite general, but gives a road map for all subsequent work. The model provides a rationale for:

• steps that are presented later in this chapter,
• issues needing to be considered during exercise construction (chap. 3),
• methods for analyzing jobs and organizations (chap. 4),
• requirements relevant to each specific type of exercise (chap. 5 to 12), and
• suggestions for evaluating the quality of the simulation (chap. 14).

MODEL FOR CONSTRUCTING SIMULATION EXERCISES
Figure 2.1 shows six major phases of the simulation development process across the top, as well as four different types of situation analysis down the left side. The six phases of the simulation construction process show that the actual writing of exercise materials is only one part of a process of integrated activities. The process starts with several analyses that lead to clear specifications of what is to be accomplished. Then, after constructing the materials, the developer will carry out pilot testing to evaluate the materials initially, before application. Following application, the final phase involves an evaluation of the end products.

The four types of situation analysis show that an analysis of the tasks carried out on an existing job is only one consideration when building a simulation exercise. In addition, the more comprehensive analyses include a study of the dimensions underlying effective performance on the job, the competencies needed throughout the organization in the future, critical tasks that are carried out on the job, and the organizational setting in which the job is carried out. The model suggests that the information gathered by these four types of analyses is useful at each subsequent phase of exercise construction and evaluation. Numbers in Fig. 2.1 show the links between initial and subsequent activities in the exercise development process.
Analyses

Dimension analysis and competency analysis provide information about the human attributes needed for effective performance in the target job and the entire organization, and Links 1 and 2 show that these analyses guide the developers in specifying what dimensions will be evaluated. Link 6 shows that after dimensions are specified, rating scales can be constructed for assessors to evaluate performance in the exercise. The competency analysis also gives an indication of the level of performance expected in the future and thus guides the specification of the difficulty built into the exercises (Link 3). The specified difficulty level of the assessment is manifested in:

• the anchors put on the rating scales (Link 7),
• challenges presented by role players and assessors if they interact with assessees (Link 8), and
• the complexity of the stimulus materials in the exercises (Link 9).

As indicated by Link 4, task analysis provides guidance on the type of exercise (e.g., an in-basket exercise may or may not be appropriate for first-line supervisors) and the basic content of the exercise (i.e., the problems that should appear in the in-baskets). Finally, organization analysis guides the specification of the setting in which the exercise will be cast (Link 5). For example, the exercise may depict a government agency, a manufacturing operation, a bank, or a restaurant. Arguments for and against depicting the exact setting of the target job are presented in Chapter 3, which discusses several issues of exercise construction.
FIG. 2.1. Model for Constructing Simulation Exercises.
Specifications

The four situation analyses lead to the preparation of specifications, the second phase of the exercise construction process. An early statement of specifications is especially important. Specifications are affirmative statements about various parameters of the assessment process: the dimensions to assess, the difficulty level of the simulation, the type of simulation to use and its content, and the setting in which to portray the simulation. It is essential that specifications are outlined before beginning the phase of actual construction of the materials, because specifications provide clear communication among the various parties involved in the assessment project. It is helpful to have line managers, human resources managers, consultants, and exercise developers agree on specifications regarding dimensions, type of simulation, setting, and other factors.
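To make the idea of a written specification concrete, the sketch below shows one way the agreed-upon parameters might be recorded in code. It is a minimal illustration in Python, not part of the authors' method; the field names, the example dimensions (paraphrased from the customer-service example later in this chapter), and the one-hour time limit are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SimulationSpecification:
    """Agreed-upon parameters for one simulation exercise.

    Captures the four kinds of specifications named in the text:
    dimensions, difficulty, exercise type/content, and setting.
    """
    dimensions: list[str]            # what assessors will rate (3-5 at most)
    difficulty: str                  # e.g., novice vs. expert performance
    exercise_type: str               # individual/group, oral/written, ...
    problem_content: list[str]       # problems embedded in the materials
    setting: str                     # organization depicted in the exercise
    time_limit_minutes: Optional[int] = None   # None = open-ended work time
    excluded: list[str] = field(default_factory=list)  # explicitly NOT assessed

# Hypothetical example, paraphrasing the customer service representative
# (CSR) specifications given later in this chapter.
csr_spec = SimulationSpecification(
    dimensions=["oral communication", "empathy", "drive", "problem solving"],
    difficulty="level expected at time of entry into the job",
    exercise_type="one-on-one role-play",
    problem_content=["mildly irate complaint", "cross-selling opportunity"],
    setting="retail sales and service company",
    time_limit_minutes=60,
    excluded=["technical knowledge of the specific products sold"],
)
```

A record like this can be circulated to line managers, human resources managers, consultants, and exercise developers for sign-off before any materials are written.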
Construction

The construction phase involves writing the actual exercise materials. The exercise developer is guided by the specifications prepared in the previous phase. The rating scales are built to assess the dimensions specified (Link 6) at the specified level of difficulty (Link 7). Role player and administrator guides will also be shaped by the specifications, as suggested by Link 8. For example, role players and administrators may be instructed to put moderate to strong pressure on a participant to enable assessors to assess perseverance and stress tolerance. The stimulus materials shown to the participant include instructions, case facts and data, information about the organization, and a description of what the participant is expected to accomplish. The problems embedded in the materials and the setting for the simulation are also prescribed by the specifications (Links 10 and 11). Each of these construction activities is described in Chapters 5 to 12, which cover the various types of simulations.
Pilot Testing

At the pilot-testing phase, the developer should try out all the rating scales (Link 12), role player and administrative guides (Link 13), and stimulus materials for the participants (Link 14). Individuals making up the pilot group should be similar to the persons who will be future participants, and should include representatives of special interest groups such as minorities, women, older individuals, and persons with disabilities, if they can be recruited. The purpose of the pilot testing is to obtain feedback from participants, role players, and assessors on several aspects of the simulation materials.
This information contributes to formative evaluation, that is, evaluation that helps form or shape the simulation. Participants can be asked to comment on the clarity of the instructions, the wording of the exercise materials, and the time allowed. Assessors can provide information about the opportunities to make observations about the pilot participants' behavior (e.g., does the exercise elicit a sufficient number of behaviors for each of the dimensions so that assessors indeed have enough information on which to base their ratings?) and the adequacy of the rating scales. Role players can provide information about the adequacy of the guidance given to interact with the participants. Additional evaluations can be collected by asking assessors to judge whether the dimensions and behavioral anchors on the rating scales match job requirements (Link 15) and whether the content of the exercises matches job activities (Link 16). These data are also relevant to summative evaluation, as described next.
Application

Armed with this input from participants, assessors, and role players, the developer can make revisions to the exercise and proceed to the application (Link 17). The application is simply carrying out the exercises for whatever goals or purposes they were designed to meet. Applications include assessing candidates' qualifications, assessing training needs, developing skills, and conducting research experiments.
Summative Evaluation

After the application of the simulations to a group of participants, another kind of evaluation, summative evaluation, can be carried out. This evaluation examines the adequacy of the entire sum of developmental work on the simulation. In addition to querying participants about their reactions to the simulation, psychometric studies can be conducted that assess the reliability and validity of dimensions, exercises, and ratings. Ideally, the developer will be able to examine whether ratings of effectiveness in the exercise are related to some measure of effectiveness on the job. Seldom is it feasible to gather such information in the short run. However, the developer can carry out other evaluation steps by obtaining judgments of the relevance of the dimensions to job requirements (Link 18) and the relevance of the exercise content to job tasks and the organizational setting (Link 19). In addition, analyses of ratings of performance in the exercise can be conducted (Link 20). These various approaches to evaluating reliability and validity are discussed in Chapter 14.
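Because the model ties twenty numbered links to specific artifacts, some developers may find it handy to keep the link structure in machine-readable form. The Python sketch below is merely one possible encoding, with node names paraphrased from the descriptions above; the structure and helper function are illustrative assumptions, not part of the authors' model itself.

```python
# Each numbered link in Fig. 2.1, encoded as (source, target) pairs.
# Node names are paraphrased from the text; the encoding is an assumption.
LINKS: dict[int, tuple[str, str]] = {
    1: ("dimension analysis", "dimension specifications"),
    2: ("competency analysis", "dimension specifications"),
    3: ("competency analysis", "difficulty specifications"),
    4: ("task analysis", "exercise type and content specifications"),
    5: ("organization analysis", "setting specifications"),
    6: ("dimension specifications", "rating scales"),
    7: ("difficulty specifications", "rating scale anchors"),
    8: ("difficulty specifications", "role player and assessor challenges"),
    9: ("difficulty specifications", "stimulus material complexity"),
    10: ("exercise type and content specifications", "stimulus materials"),
    11: ("setting specifications", "stimulus materials"),
    12: ("rating scales", "pilot testing"),
    13: ("role player and administrator guides", "pilot testing"),
    14: ("stimulus materials", "pilot testing"),
    15: ("dimension analysis", "judged match of rating scales to job"),
    16: ("task analysis", "judged match of exercise content to job"),
    17: ("pilot testing", "application"),
    18: ("dimension analysis", "judged relevance in summative evaluation"),
    19: ("task analysis", "judged relevance in summative evaluation"),
    20: ("application", "analysis of exercise ratings"),
}

def feeds_into(artifact: str) -> list[int]:
    """Return the link numbers whose target is the given artifact."""
    return [n for n, (_, target) in LINKS.items() if target == artifact]

print(feeds_into("pilot testing"))  # -> [12, 13, 14]
```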
STEPS IN THE DEVELOPMENT OF SIMULATION EXERCISES

Table 2.1 provides a list of specific steps to follow in developing a simulation exercise. Here we break down each development phase into specific steps. Some steps are described in more detail, whereas other steps are covered only briefly because subsequent chapters go into more detail on these procedures.

TABLE 2.1
Steps in Developing a Simulation Exercise

Phase I. Situation Analysis
1. State the purpose of the simulation and determine the resources available
2. Conduct situation analyses (chap. 4)

Phase II. Specifications for Dimensions and Simulation (chap. 4)
3. Specify the dimensions and difficulty level
4. Specify features of the simulation

Phase III. Exercise Construction (chap. 5 to chap. 12)
5. Prepare the participants' materials
6. Prepare support materials for administrators, assessors, resource persons, and role players

Phase IV. Pilot Testing and Formative Evaluation
7. Train assessors (chap. 13)
8. Administer the simulation to a pilot group

Phase V. Application
9. Use the simulation for selection, promotion, training, organization development, or research

Phase VI. Summative Evaluation
10. Solicit participants' reactions
11. Conduct psychometric evaluation (chap. 14)

PHASE 1: SITUATION ANALYSIS

Step 1. State the Purpose of the Simulation and Determine the Resources Available

The purpose for constructing the simulation should be clearly stated. Simulations can be used for various purposes, including selection of new employees,
identification of long-range potential in personnel to rise to higher levels of management, promotion of current employees from one position to another, diagnosis of individuals' training needs, development of specific skills, identification of training needs across a level of personnel, or research on managerial and leadership proficiencies. The ultimate use of the simulation will determine many of its features. Although all simulations have many features in common, each simulation must be designed somewhat differently depending on the purpose of assessment.

The developer will also want to understand the personnel resources available for constructing and implementing the exercise. One needs to know:

• who will be available as resource persons to gather information for the situation analyses;
• who will be assessed by the simulation, in terms of their background, education, experience, and technical expertise, and how many persons will be assessed over some reasonable period of time;
• information about the observers/assessors: who they will be, how much experience they have, how long they will be available for assessor training, how often they will serve as assessors, and so on; and
• who will be available to serve as role players, if they will be used.

The developer will also want to understand the organizational policies that affect the utilization of information gathered in the simulation exercise. Important questions to ask during this step include:

• How does the simulation fit in with other assessment tools that may be used?
• Will feedback be provided based on the participants' performance in the exercise? If so, what type of feedback (i.e., written, oral), and who will deliver it?
• Who will have access to the findings?
• Can individuals be reassessed? After what period of time and under what conditions?

These and other considerations have a great impact on the types of information the developer will gather in the analysis phase and will lead to clear specifications for the exercise. For example, if the simulation is used as a part of a program to identify persons with long-range, high potential for advancement to higher-level management, and there are substantial resources available to develop and implement the program, the developer will want to select dimensions that reflect basic aptitudes for development over a period of several years, and will develop relatively complex exercises
that require substantial assessor skills and training. In contrast, if the simulation is used to identify persons who currently have specific skills to carry out immediate tasks, and the exercises are scored by persons with limited time to work as assessors, the developer will want to select dimensions that reflect specific abilities, and the administration and scoring will have to be relatively straightforward.
Step 2. Conduct Situation Analyses

The four types of situation analyses described earlier in this chapter are designed to obtain different information that is essential for the specification of features of the simulation exercise, and for the development of the actual content of the exercise materials. The initial situation analyses may provide all the information needed to satisfy both purposes. For example, for a training application, a single set of situational analyses of the target job may provide the information needed to set the specifications for the exercise and to write the content of the exercise materials themselves. On the other hand, if the developer wishes to build a simulation representing a different job and setting from the target job, an additional set of situational analyses will be necessary to gather the information needed to write the exercise materials. The developer may wish to simulate a job and setting different from the target job in order to create a neutral assessment technique that does not favor any group of participants in a selection or promotion application. In either case, the four types of situation analyses provide useful information.

Dimension analysis involves a study of the human attributes contributing to effective performance on the job. Competency modeling is a process of identifying the core or common characteristics that key officials believe are important for the long-range success of all members of the organization in the future. Task analysis is a study of the unique duties and responsibilities that persons on the job are expected to accomplish. Organizational analysis is a study of the setting in which the job is carried out, including the structure and climate of the organization and the industry and market in which the organization operates.

When conducting these various analyses, the developer needs to talk with people who can provide relevant ideas and information. Subject matter experts (SMEs) within the organization where the exercise will be used can provide information needed for specifying the dimensions, the difficulty level of the dimensions and exercises, and other important features of the exercises. In addition, internal SMEs can provide information needed to create the actual exercise materials. At other times, when the exercise is cast in an organizational setting outside the home organization, the developer will want to interview SMEs from other organizations.
To identify internal and external SMEs, the developer might make inquiries with colleagues, individuals known from past consulting projects, contacts at conferences, reports in newspaper or magazine articles, or new contacts in relevant organizations.

Before meeting with SMEs, the developer should prepare an interview guide to ensure one gets the variety of needed information, including various points of view that may bear on the problem (e.g., financial figures, human resource issues, production and process operations, legal issues). It is often helpful to read extant information about the industry, organization, and job to prepare oneself to understand some of the language of the setting that one will hear about. During the initial contact with an SME, the exercise developer needs to establish some ground rules about confidentiality of information, and whether the name of the unit and organization will be identified or disguised.

To build a realistic exercise, the developer may need to interview more than one SME who can provide details about various specific problems encountered in a real organization. The developer can collect extensive written materials from the SMEs, such as prior reports and examples of possible solutions and outcomes. It may help for the SME to talk about a recent problem or critical incident that was worked on and "solved" satisfactorily. More details on the analysis steps are provided in Chapter 4.

The goals of these analyses are twofold: to specify the features of the simulation exercise, and to obtain a variety of information to use throughout the development, implementation, and evaluation of the exercise. Thus, we cannot emphasize strongly enough the importance of carefully carrying out these activities, and thoroughly documenting all the processes and findings.
PHASE 2: SPECIFICATIONS

Step 3. Specify the Dimensions and Difficulty Level

The dimension analysis and competency modeling activities provide the developer the information needed to specify the dimensions that will be assessed by the simulation and the difficulty level to be set for performance on the dimensions. Dimension specifications will cover:

• the number of dimensions to be assessed: seldom can assessors observe and evaluate more than a small number of dimensions, say 3 to 5 per exercise (Gaugler & Thornton, 1989);
• the nature of the dimensions: whether abilities (i.e., "can do" variables) or motivations and interests (i.e., "will do" variables) or both will be assessed;
• whether the dimensions will be basic aptitudes (e.g., judgment) or well-developed skills (e.g., the ability to conduct financial analyses);
• the breadth of the dimensions: whether the dimension is quite broad (e.g., general problem solving) or more narrow (e.g., facets of general problem solving, such as problem analysis, creativity, decision analysis, and judgment);
• the difficulty level of the dimension, that is, whether the skill expected is that of a novice or an expert.

Examples of dimension specifications: "Four attributes which customer service representatives are expected to possess at time of entry into the job, including oral communication, empathy, drive, and problem solving ability, will be assessed." "Technical knowledge of the specific products sold by the Company will not be assessed."
Step 4. Specify Features of the Simulation

The exercise specifications provide useful benchmarks in the project to ensure that all parties agree before spending the time, effort, and money on the next steps. Simulation specifications can cover:

• the type of exercise: individual or group, oral or written, cooperative or competitive;
• the content of the problems in the simulation: technical, structural, financial, marketing, sales, or human resource problems;
• the setting of the organization and job: manufacturing, retail sales, service, or government;
• the job or occupation being simulated: sales person, supervisor, or executive;
• the level of conceptual difficulty of the exercise, ranging from a simple financial analysis or human resource problem to a very complex analysis of a potential reorganization and restructuring of a multinational corporation;
• how "technical" the exercise will be, for example, does one assume participants already know how to conduct a return-on-investment analysis;
• the length of time for the exercise.
Examples of simulation specifications: "The scenarios will depict at least three different problems and opportunities encountered by CSRs when they interact with clients, including (mildly) irate complaints and the challenge of cross-selling." "The exercise must consume no more than one hour of the participants' time."

Alternatives for Setting Time Limits

The developer can set the time for an exercise in three ways: allocate an amount of time that fits with the schedule in the assessment or training program, and then tailor the exercise to that limit; develop the exercise and then derive, via pilot work, the time needed to complete the work; or leave the preparation time open-ended.

The time available for an exercise may be partially determined by the schedule in the assessment or training program. The exercise may have to be limited to 60 to 90 minutes to fit with other program activities. Thus, the exercise materials may need to be limited to an amount that can reasonably be digested in the available time. Only experience can teach one how much material is appropriate for various target groups. Pilot work with initial drafts will indicate to the developer if materials need to be expanded or reduced. Of course, the developer can adjust the scoring standards to take into account the time the participants had to work on the case. Lower expectations for performance can be set if the time limits do not allow the thorough analysis one would ideally expect with more liberal time allowances.

The second method of setting time—basing time limits on the demands of the exercise—is appropriate when the developer has clear performance requirements that must be met. The exercise is constructed and the scoring standards are set on the basis of these performance requirements. Then the pilot work will show the time that must be allocated for the exercise in the assessment or training program. For each type of exercise covered in Chapters 5 to 12, suggested time limits are presented based on past experience, but these may not apply in all cases.

The third approach, allowing open-ended work time or setting no time restrictions, is appropriate if the participants have relatively unlimited and flexible time to work on the exercises. For example, if the assessment program is held at a remote site where the participants are staying overnight, the instructions can be distributed at 4:00 p.m., with the report due at 8:00 a.m. the next morning. Some participants will complete the work in two hours, whereas others may work late into the night.
Even though the timing of the exercise is not standardized, this format may afford unique insights into the participants' personality. Performance differences in the analysis and report may reflect not only ability differences in problem solving but also behavioral differences in work patterns and motivation. Therefore, the assessors may want to ask participants how much time they took to complete the exercise, and the approach they took to solve the problems.

PHASE 3: EXERCISE CONSTRUCTION

Step 5. Prepare the Participants' Materials

The exercise developer may have obtained the information necessary to write the exercise based on the initial situation analyses. In other cases, the developer will want to simulate a job and setting different from the target job. Thus, the developer will need to conduct other situation analyses of the job to be simulated in the exercise.

Several elements will be included in the exercise materials for the participant. Fewer of these elements might be included in shorter exercises, such as the cooperative non-assigned-role leaderless group discussion. Possible exercise elements include:

• basic information on the date of the current situation, the name of the organization, and the names of key persons;
• background information about the organization, for example, products and services, location, size, historical information on sales, profitability, and an organization chart;
• a description of the problem to be addressed;
• ancillary surveys and reports providing details of units produced, prices, a list of complaints from customers, and so forth;
• letters and memos from other parties who are contributing information and recommendations on the problem;
• instructions for what the participant is expected to do, for example, "Provide a written report and prepare a presentation." The instructions may be vague to allow the participant wide latitude (e.g., "The manner in which you conduct the meeting is entirely up to you.") or quite specific to ensure thought processes are clearly displayed (e.g., "Give three alternatives and their benefit/cost consequences");
• the time the participant will have to complete the exercise;
• a concluding sentence, highlighted in bold type, that repeats key information (e.g., "Remember, you are Pat Ryan, a newly hired team coordinator of the Hardware Division of Genesis Computers. You will have 15 minutes to review the enclosed information and then 20 minutes to meet with your subordinate").
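Developers who produce many exercises sometimes generate the boilerplate parts of participant materials from a template so that key facts stay consistent across documents. The following Python sketch is a hypothetical illustration only; the placeholder names and wording are assumptions based on the Pat Ryan example above.

```python
from string import Template

# Template for the bold concluding reminder that repeats key information.
REMINDER = Template(
    "Remember, you are $name, a newly hired $role of the $unit of "
    "$organization. You will have $prep_minutes minutes to review the "
    "enclosed information and then $meet_minutes minutes to meet with "
    "your subordinate."
)

print(REMINDER.substitute(
    name="Pat Ryan",
    role="team coordinator",
    unit="Hardware Division",
    organization="Genesis Computers",
    prep_minutes=15,
    meet_minutes=20,
))
```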
Step 6. Prepare Support Materials for Administrators, Assessors, Resource Persons, and Role Players

The "test" that confronts the participant consists of more than the written materials he or she reads. Many other factors in a simulation exercise will affect the participant's performance, including:

• the instructions provided by the administrator,
• challenges presented by the role player,
• questions posed by assessors, and
• the openness or reticence of the resource person in providing information.
Not all of these persons interact with participants in all types of simulation exercises, but if they do, their behavior is a part of the stimulus package to which participants will react. The scoring guides used by assessors can also influence the final ratings. Therefore, the developer will want to carefully prepare support materials for all these parties. Administrators may give verbal instructions, assessors may introduce an exercise and ask follow-up questions, and role players present questions and comments—all of which are a part of the "test" for the participant. To the extent possible, these stimuli should be standardized for all participants. The developer will want to write out clear instructions for the administrator to read, or prepare an audio or videotape of the verbal instructions. Assessors can be provided scripts for what to say at the beginning of each exercise with each participant. The instructions for role players are particularly important, and are covered in more detail in Chapter 8 on one-on-one interaction simulations. Role players should be provided general guidance on the nature of the character being portrayed in the simulation and detailed information that can be used to answer questions asked by participants.

A critical component of the assessor materials is the list of dimensions to be assessed by each simulation exercise, along with their definitions. The short labels for dimensions often have quite different connotations for different assessors. Leadership means different things to different people. Thus, clear definitions are needed. The meaning of a dimension can be expressed in at least four forms: the label or title, a short definition, a list of key behavioral anchors illustrating performance on the dimension, and an expanded definition. The title is a shorthand label stated in one or two words, for example, "planning and organizing" or "judgment." The short definition includes a phrase or two. For example:

• "Planning and organizing is the ability to efficiently establish an appropriate course of action for self and/or others to accomplish a specific goal; make proper assignments of personnel and other resources."
• "Judgment is the ability to develop alternative solutions to problems, to evaluate courses of action, and to reach logical decisions." Definitions of dimensions from various sources are included in Appendix A. Both the title and short definition are generic and appropriate for any job. By contrast, the list of key behavioral examples is written for the specific exercise and the expanded definition is written for each specific target job. The behavioral examples are a list of actions showing effective and ineffective performance on the dimension in a specific exercise. For example, behavioral examples for judgment in an in-basket might include: • • • •
Asked a subordinate to gather information about the options. Listed three suggestions for the engineer to consider Gave reasons for his recommendation to the boss. Told the secretary to handle the problem without alerting her to potential problems she might encounter • Delegated the task to a new employee who was inadequately trained. The expanded definition describes the specific meaning of the dimension in the target job and the way in which the dimension is carried out in the specific organization and setting. Examples of the expanded definitions for planning and organizing and judgment are included in Appendix B Support materials to help the assessors score the behavior they observe consist of: • • • •
a list of the most important issues embedded in the exercise, guidance on the "expected" or "correct" solution to exercise problems, behavior checklists to highlight effective and ineffective behavior, and behaviorally anchored rating scales to provide examples of behaviors at various levels of performance on the dimensions.
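One convenient way to keep the four forms of a definition and the associated scoring aids together is a small structured record. The Python sketch below is a hypothetical illustration only; the field names and the five-point scale are assumptions, and the example content is paraphrased from the judgment dimension above.

```python
from dataclasses import dataclass

@dataclass
class DimensionDefinition:
    """The four forms of a dimension definition described in the text."""
    title: str                      # shorthand label, one or two words
    short_definition: str           # a phrase or two, generic across jobs
    behavioral_examples: list[str]  # exercise-specific effective/ineffective acts
    expanded_definition: str        # job- and organization-specific meaning

@dataclass
class ScoringAids:
    """Support materials assessors use to score observed behavior."""
    key_issues: list[str]          # most important issues in the exercise
    expected_solution: str         # guidance on the "correct" solution
    behavior_checklist: list[str]  # effective and ineffective behaviors
    bars_anchors: dict[int, str]   # behaviorally anchored rating scale

judgment = DimensionDefinition(
    title="Judgment",
    short_definition=("The ability to develop alternative solutions to "
                      "problems, to evaluate courses of action, and to "
                      "reach logical decisions."),
    behavioral_examples=[
        "Asked a subordinate to gather information about the options.",
        "Delegated the task to a new employee who was inadequately trained.",
    ],
    expanded_definition="(written for the specific target job; see Appendix B)",
)

# Hypothetical 5-point BARS; the anchor wording is illustrative.
judgment_aids = ScoringAids(
    key_issues=["staffing decision embedded in the in-basket"],
    expected_solution="Gather information before committing to a course of action.",
    behavior_checklist=judgment.behavioral_examples,
    bars_anchors={
        5: "Weighed several alternatives and gave reasons for the decision.",
        3: "Reached a decision with little evaluation of alternatives.",
        1: "Committed to action without considering consequences.",
    },
)
```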
These and other support materials are illustrated in the chapters on specific simulations.

PHASE 4: PILOT TESTING

Step 7. Train Assessors

Assessors must be trained to evaluate performance in the simulation exercise. Assessor training typically includes the following topics:

• Descriptions of the target job and its organizational setting.
• Knowledge of the dimensions and their meaning in the target job.
• The content of the simulation exercise.
• The skills of observing behavior, classifying behaviors into their respective dimensions, and rating performance effectiveness.
• How to administer the exercise, if required.
• Interacting with participants to ask questions, if required.
• Preparing reports, if required.
• Giving feedback, if required.
Chapter 13 provides more details on assessor training.

Step 8. Administer the Simulation to a Pilot Group

The pilot group should be similar to the persons who will be examined in the actual application, and should include representatives of special interest groups such as minorities, women, older individuals, and persons with disabilities, if they can be recruited. One difficulty in conducting a good pilot test occurs in selection and promotion applications where security issues may be prominent. For example, in one police jurisdiction, the developers could not pilot test the simulations because there was great suspicion that the pilot group would divulge the content to other individuals seeking promotion. Even though there were safeguards to prevent information from actually leaking out and the pilot group was sworn to secrecy, the perception of favoritism was so prevalent that pilot testing in the organization before exercise administration was precluded. As a substitute, the exercises were pilot tested in a secure university setting.

The purpose of the pilot testing is to obtain feedback on several aspects of the simulation materials. This information contributes to formative evaluation, meaning evaluation that helps form or shape the simulation. Participants can answer several questions, including:

• Were the instructions clear?
• Did you understand the content of the exercise?
• Did you know what your assignment was?
• Did you need any additional information to complete the task?
• Did you have enough time?
• Were there any words you did not understand?
• Was there any offensive, inappropriate, or biased content?
Additionally, the developer can seek feedback from assessors and role players (if used). The assessors can provide information about the adequacy of the rating scales and observations about the pilot participants' behavior. The role player can provide information about the adequacy of the guidance he or she was given on interacting with the participant.
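Pilot feedback like this is easiest to act on when it is tallied item by item. The short Python sketch below is one possible way to flag problem items; the question list, the yes/no coding, and the 80% threshold are all assumptions for illustration.

```python
# Each pilot participant answers yes/no to the formative-evaluation
# questions above; responses[i][q] is True if participant i answered
# "yes" to question q. (Hypothetical data and threshold.)
QUESTIONS = [
    "Were the instructions clear?",
    "Did you have enough time?",
    "Was there any offensive, inappropriate, or biased content?",
]

responses = [
    {QUESTIONS[0]: True,  QUESTIONS[1]: False, QUESTIONS[2]: False},
    {QUESTIONS[0]: True,  QUESTIONS[1]: True,  QUESTIONS[2]: False},
    {QUESTIONS[0]: False, QUESTIONS[1]: False, QUESTIONS[2]: False},
]

def proportion_yes(question: str) -> float:
    """Share of pilot participants answering 'yes' to one question."""
    return sum(r[question] for r in responses) / len(responses)

# Flag items where fewer than 80% found the materials adequate.
for q in QUESTIONS[:2]:  # "yes" is the desirable answer for these items
    p = proportion_yes(q)
    if p < 0.80:
        print(f"Revise materials: only {p:.0%} answered yes to {q!r}")

# For the bias item, any "yes" warrants review.
if proportion_yes(QUESTIONS[2]) > 0:
    print("Review content flagged as offensive, inappropriate, or biased.")
```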
PHASE 5: APPLICATION
Step 9. Use the Simulation for Selection, Promotion, Training, Organization Development, or Research

The simulation should be conducted in conformance with its intended purpose and according to how it was pilot tested and revised. An important step in achieving conformance is the preparation of a manual or set of guidelines for using the simulation. The manual should describe the policies and practices governing various applications of the simulation. This information is important because it ensures that the simulation is used for its intended purposes and minimizes the chances that the simulation will be misused. If the simulation is used for administrative decision making, such as selection or promotion, the developer and users should make sure the simulation is conducted in accordance with the standardized procedures. Failure to follow standardized procedures in the implementation of an assessment technique can destroy its reliability and validity (Schmitt, Schneider, & Cohen, 1990). The manual should include statements about the following matters:

• Purpose of the simulation (for example, selection).
• Instructions for administering the simulation.
• The target participants (for example, applicants for supervisory positions).
• How participants will be informed about participation in the simulation.
• Who the assessors will be and how they will be trained.
• Whether participants will receive feedback, what form feedback will take (e.g., oral, written), and who will deliver feedback.
• Policies regarding reassessment.
• How results of the simulation will be used in conjunction with other information.
• Who will have access to the simulation materials and results.
• What steps will be taken to keep the simulation materials secure.

PHASE 6: SUMMATIVE EVALUATION
Step 10. Solicit Participants' Reactions

It is common practice to solicit reactions to the assessment process from participants. A benchmark study by Kudisch et al. (1999) showed that over 80% of assessment center users who participated in the study reported collecting such data. The majority of the respondents obtained reactions regarding face validity of the exercises (i.e., the extent to which the exercises appeared to be valid), the fairness of the process, and the usefulness of feedback. About half of the respondents examined whether participants perceived the feedback as accurate. Participants' reactions to the assessment process can be very informative for the various applications of simulations. Therefore, the developer will benefit from asking a variety of appropriate questions after the assessment process. Appendix C includes a list of questions that may be useful in different follow-up questionnaires. Questions might include:

• How difficult/stressful was the exercise?
• Was the exercise fair?
• Were you treated fairly by the administrator and assessor?
• Did you have an opportunity to perform at your true level?
• Do you believe the exercise was a valid measure of your abilities?
• Was the feedback you received clear and understandable? Accurate? Useful?
• What do you plan to do with the feedback?
Perceptions of the validity of the technique can affect the actual validity of the scores earned by the participants. If participants believe the diagnostic tool is accurate and they accept the feedback, they are more likely to act on the recommendations for development (Kudisch, Lundquist, & Smith, 2001).
Step 11. Conduct Psychometric Evaluation

Ideally, statistical psychometric evaluations of the simulation should take place before implementation. Unfortunately, in most applied settings the developer of a local simulation is seldom able to collect enough data to conduct extensive analyses of reliability and validity. Typically, not enough individuals are tested with a simulation to provide enough quantitative data to conduct the types of psychometric analyses conducted on paper-and-pencil tests. For example, criterion-related validity evidence is often difficult to obtain because the developer seldom has a large enough sample, and criterion measures of performance for many managerial jobs are difficult to obtain. Also, the dimensions measured are often quite complex, and construct validation studies are difficult to complete. Therefore, the most appropriate method of studying validity is to gather evidence of the content-representativeness of the stimulus materials in relation to the target job and setting. This approach is quite feasible, and some of the methods of gathering such evidence are described in Chapter 14.

Reliability of simulation exercises cannot be studied by some of the traditional methods used with most paper-and-pencil tests that have multiple items. That is, coefficient alpha and other internal consistency methods are not possible because simulations do not have multiple items. Furthermore,
test-retest methods are not appropriate because simulation participants are likely to remember how they responded initially. In addition, simulations are unique experiences, and it is difficult to develop two forms that can be considered parallel. Fortunately, there are feasible methods of studying reliability in the form of inter-rater agreement of the ratings from multiple raters. Some of these methods are discussed in Chapter 14. Steps to evaluate the agreement of ratings given by assessors can be examined in assessor training (as described in chap. 13), and evidence of the content validity of the exercises can be gathered before implementation.

If the sample of individuals participating in the simulation contains enough members of the legally protected classes (i.e., ethnic and racial minorities, women, and older workers), the developer should conduct analyses of performance levels and potential adverse impact. These analyses may involve comparisons between groups of average scores or pass/fail rates assuming some alternative cut-off scores. These analyses may be especially important for those types of exercises that are potentially more highly cognitively loaded than others (e.g., in-baskets or case studies). Differences in subgroup performance on an exercise do not render the exercise unusable; they just impose higher standards for demonstrating validity and job relatedness. A simple computational sketch of such a subgroup comparison follows the box at the end of this section.

An overall evaluation of an assessment or training program includes consideration of whether the costs of the effort required to develop and administer a simulation exercise are outweighed by the value of the additional benefits made to selection/promotion accuracy or developmental gains. Substantial research on past applications shows that simulations provide added value for these purposes (Thornton, 1992; Thornton & Byham, 1982; Thornton & Cleveland, 1990; Thornton & Rupp, 2003). Nevertheless, developers of new simulations in their own organizations must guard against creating a process that is so cumbersome that it collapses under its own weight, as illustrated in the following box.

Why Simulation Exercises Fail
Fatal Error 2: Overkill With a Cumbersome Process

In a selection program using simulations, assessors were asked to evaluate 12 dimensions in each of 6 exercises. The process of taking notes, making ratings, discussing the findings after observation, and writing feedback reports covering the 12 dimensions was too onerous. Notes were sketchy, ratings were done in a perfunctory manner, the assessors could not distinguish among the several dimensions, and the reports were too terse to be useful. Managers, who were given the assessment information, supposedly to aid in selection decisions, did not find it helpful. The program was dropped because managers perceived that it was not adding enough value for the amount of time it required.
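To make the subgroup comparisons described above concrete, here is a minimal sketch in Python. The pass/fail counts are invented, and the 0.80 benchmark is the familiar "four-fifths" rule of thumb from the Uniform Guidelines; this book prescribes no particular statistic, so treat the sketch as one illustrative possibility.

    def pass_rate(n_passed, n_tested):
        """Proportion of a subgroup passing at a given cut-off score."""
        return n_passed / n_tested

    # Hypothetical pass/fail counts at one alternative cut-off score:
    # (number passed, number tested) for each subgroup.
    counts = {
        "group_a": (48, 80),
        "group_b": (28, 60),
    }
    rates = {group: pass_rate(p, n) for group, (p, n) in counts.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / highest
        flag = "  <- possible adverse impact" if impact_ratio < 0.80 else ""
        print(f"{group}: pass rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")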
3 Issues and Specific Tips in Constructing Simulation Exercises
In this chapter we discuss a number of topics that are relevant to the development of all simulation exercises. They range from broad issues, such as how much fidelity should be built into exercises (i.e., how similar the simulation's content should be to the target job) and how much technology should be used in the administration of exercises, to very specific details, such as what names should be used for characters in the exercises and how much time a simulation exercise should take. It is important for the developer of a simulation exercise to understand these issues and make some determination of how he or she will handle them before undertaking any exercise-construction activity. Clarity on these issues helps the simulation developer design a process that fairly elicits behavior relevant to dimensions of job and organizational effectiveness. Clarity is needed before engaging in the process of gathering information for the construction phase. The developer will want to have these issues clearly in mind when interacting with subject matter experts (SMEs) to obtain information in the situational analyses, so that correct and complete information is gathered when doing the fieldwork. An example from a past simulation development project emphasizes this point. In this project, the team did not have the information it needed to construct the exercises because Step 2 (conducting the situation analyses) was not carried out appropriately. If the original field exploration had gathered just a few more bits of information, the exercise construction phase could have proceeded more
smoothly. Unfortunately, the team had to go back to the field to gather the needed information.

GENERAL ISSUES RELATED TO CONSTRUCTION OF SIMULATIONS
Level of Fidelity

As we discussed briefly in Chapter 1, the amount of fidelity in a simulation refers to how much the simulation resembles what actually occurs in the target organization. A simulation can range in fidelity from a near duplication of the actual work setting to merely resembling the actual job. A key question, then, is how similar to the target job the simulation should be. The answer depends on what the simulation is being used for and who the participants will be. If the simulation is used for training purposes, then high levels of fidelity may be appropriate; however, if the simulation is used for selection or promotion purposes, then it may be appropriate to have lower levels of fidelity so that no segment of candidates will have the advantage of familiarity with the simulation setting. Additionally, if the simulation will assess knowledge of the job, industry, or organization, then higher levels of fidelity may be needed, whereas if participants are not expected to have any prior specific experience or knowledge, then lower levels of fidelity are desirable.

We now address the question of "fidelity to what?" Simulation exercises can be built to have various degrees of similarity to the target job along several dimensions:

• The industry presented in the simulation can be the same as or different from the target job. For example, if the target job is a sales job in life insurance, the simulation can portray sales in life insurance or retail sales of clothing.
• The content of the problems presented in the simulation can be the same as, or different from, those in the target job. The simulation of the sales job could include problems of preparing a marketing plan or dealing with irate customers.
• The importance of the tasks to be carried out in the simulation can be matched more or less carefully to the job. For example, the tasks may involve talking with a potential client (highly important) or preparing a call report (relatively unimportant).
• The medium for presenting the stimulus materials in the simulation can be similar to or different from the job. For example, on the job, incumbents may get the most information by talking directly with others, but in the simulation this information may be presented on paper.
• The response mode (i.e., the way the assessee is asked to respond) in the simulation may be similar to, or different from, the job. For example, whereas an actual job incumbent may use a word processor, the assessee may be asked to hand write responses in an in-basket simulation.

A simulation may vary in its degree of fidelity on each of the aforementioned factors. For example, a simulation may have high fidelity with regard to the problems presented (i.e., the problems can be actual challenges encountered on the job) and low fidelity with regard to the industry. The developer of the simulation needs to make decisions about what level of fidelity is appropriate for each different assessment situation. Whereas high fidelity on some dimensions may be appropriate for some applications, it may not be for other applications. In fact, it may be highly appropriate not to have high fidelity on some of these dimensions. For example, it might be appropriate to cast the simulation in an industry, organization, or setting that is not like the target job if one is interested in assessing long-range potential for individuals to develop into jobs with which they are not familiar. A high degree of fidelity on these dimensions may penalize the person who has high levels of potential yet does not have specific experience with the target job. Persons with recent experience may have some knowledge of the job depicted in the simulation but not have potential to develop in the long run. Thus, some organizations have chosen to cast the simulation in a setting that is neutral in terms of familiarity to all candidates, and therefore, all participants are on equal footing regarding the specifics of the simulated job.
Separate Versus Integrated Exercises

A separate exercise is one that is used by itself in a stand-alone fashion. Everything that the participant needs to know to complete the exercise is contained in that simulation's materials. If there are multiple exercises, separate background materials are provided for each simulation, and they can be completed in any order. Separate exercises may all use the same basic scenario or simulated organization; however, information from one exercise is not needed to complete other exercises.

In contrast, integrated exercises consist of two or more simulations that are connected to one another by common background information and build on each other in sequence. For example, there may be a set of detailed information regarding the simulated organization that will be used for several exercises. Taking integration a step further, the exercises may be linked so that information obtained during one exercise is necessary to complete a subsequent exercise. For example, an in-basket exercise may contain several bits of information about an on-going problem faced by the organization, and
then this problem may be discussed in a subsequent leaderless group discussion exercise. The ultimate integrated exercise is the "day in the life" simulation, in which the participant completes a series of tasks designed to simulate a day in the life of an incumbent in the target job. For example, a managerial candidate may start the day by going through an in-basket, only to be interrupted by phone calls and visitors throughout the day. This technique is discussed more thoroughly in Chapter 12.

Kudisch et al.'s (1999) survey shows that only 18% of the respondents reported using all integrated exercises, in comparison with 38% who used all separate exercises and 44% who used some related exercises. The decision to use separate or integrated exercises depends on several factors, including the resources available to develop the exercises, the dimensions to be assessed, the purpose of the assessment, the nature of the target job, and a consideration of the advantages and disadvantages of each approach (Thornton, 1992). Table 3.1 briefly outlines the advantages and disadvantages of separate versus integrated exercises.
To Use or Not to Use Technology

The use of technology in simulations ranges from basic to highly sophisticated. Basic applications of technology in simulations include:

• Background information may be read on a computer (e.g., via e-mail or word-processing program) rather than on paper. Of course, some combination of written and computer-formatted modes of presentation could be used. The basic outline of a case study could be presented on paper, and then the participant could be given a list of Web sites (URLs) that might be accessed to gather more information.
• Instructions and other materials may be presented via video or audio tape.
• The simulation may be videotaped for later scoring, and audiotaped as a backup in the event the video fails.
• The simulation may be conducted via telephone conference call, videoconference, or webcasting.
• The simulation may involve working with another individual on a project that requires the participant to use a computer (e.g., preparing a PowerPoint presentation with a coworker).
• The participant may work in a simulated office complete with a phone, fax, and PC.
• Participants may use a word-processing program to generate their responses (e.g., the participant in a case study exercise could be asked to type the report using a computer).
TABLE 3.1
Advantages and Disadvantages of Separate Versus Integrated Exercises

Separate Exercises

Advantages:
• Exercises may be developed independently from one another.
• Exercises may be completed in any order.
• Each exercise is an independent measure of behavior—evaluation is more accurate when based on several independent observations.
• The participant gets a "fresh start" in each exercise—poor performance in one exercise does not penalize the individual in other exercises.
• Disjointed exercises may accurately reflect many management jobs.

Disadvantages:
• Multiple exercises may seem disconnected and disjointed from one another.
• Completing separate tasks may not accurately reflect the target job.
• Separate background information needs to be developed for each exercise.

Integrated Exercises

Advantages:
• May be more realistic in that most jobs consist of a series of related tasks and interactions.
• Individuals use information from one interaction to inform their behavior in another interaction.
• Can measure candidates' ability to capitalize on information gained over time.
• May have higher "face validity" and thus be more acceptable to participants.
• Only one set of background materials is required.

Disadvantages:
• Exercises cannot be developed independently—this may require extra development time to ensure integration is smooth.
• Exercises must be completed in a prescribed order, which may make scheduling somewhat difficult.
• The participants' overall performance cannot be evaluated without considering performance in all of the exercises.
• Assessors may use a word-processing program (e.g., via laptop computer) to record observations and ratings and to generate reports.
• Statistical software packages may be used to aggregate and analyze performance ratings.

Many of these techniques are used frequently (e.g., using word-processing programs to record observations and ratings) while others are used quite infrequently (e.g., the simulated office). These basic uses of technology may enhance the simulation or make it easier or more realistic, because many participants probably use a computer to gather information and to write and submit reports. The potential weakness of using a computer is that it may introduce contaminants in the demonstration of the skills the exercise is designed to measure. For example, a person may not know how to access certain Web sites, yet be quite capable of analyzing the information as soon as he or she has it. If skill in accessing the Web is not what one wants to assess, or if Web accessing can be taught easily, then the developer would not want knowledge of Internet techniques to influence the assessment of core problem-solving skills. In addition, one must consider whether all participants are familiar with the specific software used for report writing in the assessment exercise. A participant may be quite capable of writing a strong report, and even be familiar with one or more word-processing programs, yet stumble with an unfamiliar program used in the exercise itself. Because most simulations allow limited time, any distraction can lead to poor performance for reasons unrelated to the performance dimensions.

An additional downside is that using technology almost always increases costs: the equipment must be maintained, and it may break down and need to be repaired. Breakdowns during an assessment process can be frustrating to participants and highly disruptive during assessment programs that are often tightly scheduled. Technicians with specialized knowledge (e.g., in videoconferencing) may need to be hired to run the equipment, and the equipment may not be portable. In addition to the added cost, using technology may result in less scheduling flexibility. The decision whether to use these techniques depends on the resources available to the organization and must be weighed against practical considerations. For example, in one case-study simulation we developed, participants commented that it would be easier (and more reflective of their actual job) to type their responses on a computer rather than hand-write them. However, the organization decided not to implement this suggestion because the effort involved in periodically moving computers to the testing facility (which was off-site) would be unnecessarily cumbersome.

In contrast to these rather basic approaches to using technology, more sophisticated uses involve computer software that interacts with the participants and software that aggregates ratings and generates narrative reports. Some recent high-tech approaches have included:

• A business game simulation that leads the participant through a variety of scenarios. The participant can choose where to go in the virtual office and with whom to talk. When engaging in "conversation," the participant selects from a number of response options. At several points in the
game, decisions must be made and the participant selects a course of action from several choices. Different choices will result in different responses by the computer. Thus, several people may play the same game yet have a different experience, depending on the options they select. The computer automatically evaluates the participant's performance and generates a descriptive report listing strengths and weaknesses.
• An interaction simulation that presents the participant with a virtual person. Like the business game, the virtual person will talk and the participant may respond by selecting from a number of options. Subsequent responses by the virtual person will differ, depending on the response option chosen. Responses are scored automatically after the interaction is over and a report is generated.
• In-baskets that are delivered and scored by computer. The in-basket materials are presented on the computer screen and the participant chooses from a variety of response options.

In our opinion, the format in which participants pick one of a set of predetermined response alternatives is not a true behavioral assessment procedure. Selecting among multiple-choice alternatives involves recognizing what might be a good response, and is not the same thing as generating and executing the behavioral response. Although these forms of technology are still in their infancy and used rather infrequently (Kudisch et al., 2001), there may be advantages to this approach: administration may be more standardized, the participant's responses can be recorded instantly, and different types of performance indicators may be recorded (e.g., response time to individual items in an in-basket). Moreover, some aspects of scoring can be automated. However, despite these advantages, several disadvantages remain. These types of applications are very expensive and time-consuming to develop and require special programming expertise. Moreover, the equipment required to run these programs may be costly and impractical for many organizations.

Our perspective is that for the time being, the use of technology in simulations should be limited to the more basic approaches previously discussed. First of all, there is no substitute for real human interaction and human judgment in the observation and evaluation process. Currently, high-tech applications are not yet sophisticated enough to completely replicate the experience of interacting with a real person. Second, regardless of how dependent we become on technology, people will still spend much of their work time interacting with other people. Therefore, interpersonal skills are still an important determinant of success in the workplace, and the best way to assess these skills is to assess how people deal with other people. Nevertheless, these new approaches offer some promise of valuable methods of assessment in the future.
Number of Dimensions Assessed

Historically, assessors have been asked to rate up to 15 dimensions after observing participants' behavior in a simulation exercise (Thornton & Byham, 1982). However, applying theories of social cognition, assessment center critics have noted that the cognitive demands of observing complex behavior may preclude meaningful distinctions among more than a few performance dimensions (Lievens & Klimoski, 2001; Zedeck, 1986). In addition, research has shown that assessors may not be able to distinguish among such a large number of distinct performance dimensions (Gaugler & Thornton, 1989). In practice, assessment materials used by some consultants call for ratings on only a small number (typically 4 or 5) of dimensions for most simulation methods. Finally, having a large number of dimensions makes the process of developing simulation exercises and assessor materials quite onerous. As a result of these theoretical considerations, research findings, and practical constraints, recent advice on this topic has suggested that simulation developers limit the number of dimensions assessors are asked to rate. We suggest that assessors be asked to rate no more than 3 or 4 dimensions in most simulation exercises.

In addition to limiting the number of dimensions to be assessed, we advise using dimensions that are conceptually distinct from each other. One test of conceptual distinction between two dimensions is to ask whether a participant who scores high on one dimension could score low on a second dimension. For example, oral communication could be considered conceptually distinct from leadership if, in the simulation exercise, a participant could demonstrate effective oral communication and yet not be an effective leader, or the participant could show poor oral communication and still be a good leader. To be sure, oral communication skill and leadership are probably correlated with each other, at least to some extent. At the same time, this proposed test of conceptual distinctness examines whether it is conceivable for the participant to demonstrate the patterns described.
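Once pilot ratings are available, the conceptual test just described can be complemented with a simple empirical check: if two dimensions are truly distinct, ratings on them should not be almost perfectly correlated. The following Python sketch is our own illustration with invented ratings; it is not a procedure prescribed in this book.

    from statistics import mean

    def pearson(x, y):
        """Pearson correlation between two lists of dimension ratings."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Invented pilot data: eight participants rated 1-5 on two dimensions.
    oral_communication = [4, 3, 5, 2, 4, 3, 5, 2]
    leadership = [3, 4, 4, 2, 5, 2, 4, 3]

    # Correlations near 1.0 would suggest assessors cannot distinguish the
    # two dimensions, and one of them might be dropped or redefined.
    print(f"r = {pearson(oral_communication, leadership):.2f}")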
Trait Activation

Social scientists have long recognized that a person's behavior is "caused" by both characteristics of the individual (e.g., personality and ability) and characteristics of the situation (Kenrick & Funder, 1988). Mischel (1973) distinguished between strong and weak situations. Strong situations are situations that are so powerful they suppress individual differences. For example, most people are quiet and subdued at a funeral, regardless of how lively they may be at other times. A funeral generates such strong social norms for
behavior that individual differences in behavior are difficult to observe. In contrast, weak situations are those with few normative expectations for behavior, and therefore, individual differences in personality are readily observable. For example, a casual cocktail party can be considered a rather weak situation. Some people will be outgoing and gregarious and others will tend to be quiet and reserved. However, even in typically weak situations there are some norms for behavior. For example, even the most punctual individual may be fashionably late to a cocktail party. Therefore, situations are not universally strong or weak; rather, they may be strong or weak for certain traits or dimensions of individuals.

When designing a simulation exercise, the purpose is to assess how individuals differ along several dimensions (e.g., leadership, communication skills, interpersonal sensitivity, etc.). Therefore, the developer must take care to ensure that the situation in the exercise is weak enough to allow individual differences to shine through. Closely related to this concept is the idea of trait activation (Tett, 1996; Tett & Guterman, 2000). According to the theory of trait activation, "the behavioral expression of a trait requires arousal of that trait by trait relevant situational cues" (Tett & Guterman, 2000, p. 398, emphasis in original). In other words, for an individual to demonstrate a particular personality trait through his or her behavior, the situation must allow for the trait to be expressed. For example, if an organization wishes to assess oral communication skills, assessees will be more or less able to demonstrate their skills depending on how the exercise is structured. If the exercise is a leaderless group discussion with non-assigned roles (chap. 7 has more details regarding this type of exercise), there may or may not be a chance to observe oral communication skills. If the group contains a few very aggressive and talkative individuals, these participants may dominate the conversation, allowing very few opportunities to observe the communication skills of the quieter group members. Instead of eliciting oral communication, the simulation in this example has elicited dominance. Therefore, after the desired dimensions have been identified, the simulation developer must design the exercise so that these dimensions can be elicited.

Although Tett and his colleagues suggested that expression of a trait requires the trait to be aroused by relevant cues, our view is that this is not always necessary. In situations where ability is being measured (i.e., what the person can do), it is appropriate to elicit the trait directly. In situations where willingness is being measured (i.e., what the person will do if given an opportunity), the simulation should be designed to allow for the trait to be displayed. In either case, we agree with Tett's position that the trait will not be apparent unless trait-relevant cues are present. The next section expands on this concept further.
Importance of Instructions

The instructions given at the beginning of an exercise may determine whether or not the participant displays behaviors relevant to the dimensions one wishes to assess. For example, the instructions for a case study can be written to ensure that the participant displays behaviors relevant to the targeted dimensions. To be sure that problem analysis behaviors can be observed in the written report, the instructions should direct the participant to provide his or her reasoning for the recommendations. Specifically, the instructions can ask the participant to list the problems portrayed in the case and substantiate each problem listed by citing relevant data in the case. Without this specific instruction, the participant may write down only the solutions and recommendations without specifying the problems being faced. As another example, if the developer wishes to assess the way the participant goes about evaluating alternative solutions to a problem, the instructions should ask for a list of potential alternative solutions, followed by an evaluation of the possible costs and benefits of those alternatives. Without those instructions, the participant may write down only the final recommendation, and thus the assessor will not be able to observe behaviors relevant to how the participant analyzes alternative solutions.

The goal of specifying the instructions very clearly is to make the exercise more transparent to the participant. Transparency means that the participant sees very clearly what the purpose of the exercise is and understands what is expected. Kleinmann, Kuptsch, and Koller (1996) demonstrated that increasing the transparency of behavioral assessment techniques increases their construct validity. That is, the exercise is more likely to demonstrate convergent and discriminant validity of the intended dimensions if the participants know what dimensions are being assessed and what behaviors are expected of them. In the example of a case study measuring decision analysis, if the participant knows that this dimension is being measured, he or she will display the most effective decision-analysis skills that can be mustered. The exercise is then not contaminated by differences in participants' motivation to display the sequence of their thinking in making a given recommendation.
Maximum Versus Typical Performance

Maximum performance can best be described as the level at which an individual is capable of performing. For example, a short paper-and-pencil cognitive ability test is considered a test of maximum performance. If the test taker tries to do his or her best, the score will reflect what he or she is capable of doing. Most individuals can only perform at their maximum level for a short time. On a day-to-day basis, the average level of performance one
exhibits is defined as typical performance. Typical performance can be assessed in different ways that usually involve observing several examples of behavior over time. Supervisory evaluations of job performance can provide assessments of typical behavior in that ratings are based on several observations of the employee's behavior over time. Responses to self-report personality questionnaires probably indicate a person's reflection on how he or she has behaved in a variety of situations over time in the past.

Simulations can be designed to assess either maximum or typical performance, and the choice of which type of performance to assess will depend on the purpose of the assessment. In some cases, the organization might be interested in evaluating maximum performance. This is useful for a position in which consistency of performance is less important than "stepping up to a tough task" when needed. For example, an organization may want to know if a candidate can occasionally handle a high level of stress when confronted with the serious challenges of an investigative reporter. Thus, for jobs that require short bursts of maximum effort followed by periods of "down time," we would be interested in assessing maximum performance. In other cases, the organization may be more interested in evaluating typical performance. For example, knowing that a person can consistently handle customer inquiries and complaints over a long period of time in a pleasant manner is more important than knowing if the person can occasionally demonstrate a high level of charm. If the job requires the individual to come in and perform consistently day in and day out, we would be interested in evaluating typical rather than maximum performance.

A single, short simulation exercise generally assesses maximum rather than typical performance—that is, it evaluates what an individual is capable of doing in a short amount of time rather than what the person will do on a daily basis. However, simulations can assess typical performance if: multiple exercises are used to get several observations of behavior in different situations over time; longer exercises are used to evaluate how the assessee performs over time; and instructions are open-ended to allow the assessees a choice of how much effort to put into the simulation.
Ability Versus Motivation

Closely related to the concept of maximum versus typical performance is the concept of assessing for ability or motivation. One's ability typically determines the maximum level of performance for that individual—what the person can do. In contrast, one's motivation determines the typical level of performance for that individual; that is, what the person is willing to do on a day-in and day-out basis. The instructions given in a simulation exercise partially determine if the exercise provides an opportunity to assess maximum performance (i.e., the ability to act) or typical performance (i.e.,
motivation to act). To assess ability, a simulation should include very specific guidelines for what is expected. For example, in an in-basket exercise, the participants may be instructed to respond to a particular memo, say from the boss. Because everyone is required to respond to the memo, the quality of the response will relate more to one's ability to respond than to one's willingness. To assess motivation, a simulation should offer choices about what the participants may do. For example, participants may be given no specific instructions as to how to deal with a particular memo. Participants can respond to it directly, delegate the response, or ignore it. The choice of how to respond is an indication of the participant's motivation.

SPECIFIC SUGGESTIONS FOR WRITING SITUATIONAL EXERCISES

In the previous section, we described several general considerations in constructing simulation exercises. In the next sections we provide several specific suggestions for the actual wording of simulations.
Screen Content for Inappropriate Language

Inappropriate language can mean language that is too formal or informal for the audience, language that is sexist or racially offensive, language that uses jargon or acronyms that are too specific, or any other language that is inappropriate for the target audience. A few of the more common problems with language are discussed below.

Sexist Language

Although it was once common to use "he," "his," and "men" to refer to people in general (e.g., "Every employee must wash his hands before starting work."), this practice is now considered outdated and sexist. It is preferable to use "his or her" or to rewrite the sentence to be gender-neutral (e.g., "All employees must wash their hands before starting work."). The Publication Manual of the American Psychological Association (APA, 2001) offers several excellent suggestions for using non-sexist language.

Offensive Language

In today's political climate, some things that used to be perfectly acceptable are now considered offensive (e.g., referring to administrative assistants as "the girls"). Although some people may complain that political correctness has become too extreme and that everything is bound to offend someone, it is in the organization's best interest to ensure that simulation materials (especially those used in a selection context) are free from both
obvious and subtle offensive language. Sexist or racial jokes, off-color humor, cutting remarks, and profanity are never appropriate and detract from the quality of the simulation.
Jargon

The use of organizational or industry-specific jargon should be avoided. Using jargon provides an advantage to participants who are familiar with the organization or industry and may prevent other candidates, who are not familiar with the jargon, from performing well in the simulation. If industry- or organization-specific terms must be used, they should be listed and defined in the background materials.
Acronyms

The simulation materials should be screened for acronyms that are not universally recognized or that are not appropriate for the intended audience. For example, most U.S. audiences would readily understand what IRS (Internal Revenue Service) means, but it would not be appropriate to refer to NAS (for the National Academy of Sciences). On the other hand, international participants may not know what IRS means, but a group of university physicists might readily understand NAS.

The best way to screen for inappropriate language is to have someone not connected with the exercise development efforts read all the simulation materials. Even the most thorough developer can miss problems—the closer one has been to the development process, the harder it is to see minor problems. Ideally, this outside reviewer should be someone of a different gender and ethnicity than the developer. For example, if the author is a White male, have a Hispanic woman review the content for inadvertently offensive language. If someone outside the organization developed the simulation, someone on the inside should review it to ensure that the language is compatible with the overall culture of the organization. An automated first pass, as sketched below, can also help surface acronyms for the reviewer to check.
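As a supplement to (not a substitute for) human review, a short script can flag capitalized abbreviations for the reviewer to check against the intended audience. This is a hypothetical helper of our own devising; the whitelist of known acronyms is an assumption to be adapted for each project.

    import re

    # Acronyms assumed familiar to the target audience (adjust per project).
    KNOWN_ACRONYMS = {"IRS", "USA", "CEO"}

    def flag_acronyms(text):
        """Return capitalized abbreviations (2+ letters) not on the whitelist."""
        candidates = set(re.findall(r"\b[A-Z]{2,}\b", text))
        return sorted(candidates - KNOWN_ACRONYMS)

    materials = "Send the NAS review to the CEO before the IRS audit and CC HR."
    print(flag_acronyms(materials))  # ['CC', 'HR', 'NAS']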
Monitor the Reading Level and the Conceptual Level of the Exercises

The developer should monitor the reading and conceptual level of the exercises so as to avoid unnecessary and irrelevant difficulty in the exercise materials. The basic principles are to match the reading level of the exercise to the reading level of material in the target situation, and to avoid introducing material that is inappropriately difficult. Assessing the reading level of a simulation exercise is easily accomplished with many word-processing programs. One index of reading ease is the Flesch Reading Ease Score. This score ranges from 0 to 100, with lower numbers indicating greater levels of difficulty. Other
measures of readability include grade-level indices such as the Flesch-Kincaid Grade Level. The reading level of the exercises should be matched to the reading level required on the job; however, a reading level over grade 13 is considered too dense for many organizational settings (V. S. Harvey, 1997), as the average American adult reads at about the 7th- or 8th-grade level. A small computational sketch of the Flesch Reading Ease formula appears at the end of this section.

In a related vein, the developer should avoid making the exercise so conceptually difficult that the intended audience cannot demonstrate the targeted skills. In one organization, the in-basket was so loaded with cognitively complex issues that it limited the demonstration of other skills, for example, planning and organizing, judgment, and decisiveness.
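For developers who want to compute readability directly rather than rely on a word processor, here is a minimal Python sketch of the Flesch Reading Ease formula. The formula itself is standard; the naive vowel-group syllable counter is our own simplification and will miscount some words, so treat the output as approximate.

    import re

    def count_syllables(word):
        """Rough syllable count: runs of vowels, with a silent-'e' adjustment.
        Real readability tools use pronunciation dictionaries instead."""
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def flesch_reading_ease(text):
        """Flesch Reading Ease = 206.835 - 1.015 * (words / sentences)
        - 84.6 * (syllables / words); scores run 0-100, lower = harder."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * (len(words) / sentences)
                - 84.6 * (syllables / len(words)))

    sample = ("Please review the attached quarterly report. "
              "Summarize the three most serious problems for the board.")
    print(round(flesch_reading_ease(sample), 1))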
Accommodate Disabilities

It is important to consider whether the performance requirements of the situational exercises would unfairly interfere with the performance of individuals with disabilities. Under the Americans with Disabilities Act (ADA) of 1990, employers are required to make reasonable accommodations for individuals with disabilities in employment testing. These accommodations are necessary regardless of whether the testing is for selection, development, or training. Most accommodations are relatively easy and inexpensive to implement. The following is a list of common reasonable accommodations for simulation exercises:

• Use a telecommunication device for the deaf (TDD) in a telephone simulation for individuals with hearing impairments. If a TDD is not available, many areas have relay operators that allow phone users without a TDD to speak with individuals who use a TDD.
• Use a sign-language interpreter in a group discussion or one-on-one interaction simulation. Interpreters, if not already employed by the organization, may be employed on an hourly basis for a relatively small fee.
• Make an audio recording of instructions and other exercise materials for individuals with vision impairments. Audio recordings can be made relatively inexpensively; however, it is important that the recording be of good quality.
• Provide special testing conditions for individuals with learning disabilities on individually timed exercises such as case studies or in-baskets. These conditions may include extra time or a separate, distraction-free testing area.
• Make testing facilities accessible to individuals who use wheelchairs. If the testing facility is not already accessible, the organization may wish to use an alternative facility for the disabled individual.
• Use voice recognition software to record written responses for individuals with visual or physical impairments. This type of software is
relatively inexpensive and has recently advanced to the point where it is a viable option for persons who are unable to write or type.

There are no absolute guidelines on what constitutes a reasonable accommodation. What is reasonable for one organization may not be reasonable for another. For example, providing extra time to complete an in-basket may be appropriate when the nature of the work is such that fast responses are not critical to success. However, if a key dimension to be assessed is speed of response due to the requirements of the job, then allowing one participant unlimited time while holding everyone else to a strict time limit might not be reasonable.
Use Neutral and Diverse Names

Use gender-neutral, ethnically diverse names for the role of the participant and for key individuals in the exercises. Gender-neutral names have the advantage of making the exercise appropriate for either male or female participants and role players. They also help to ensure the simulation does not inadvertently reinforce negative stereotypes. Table 3.2 lists several gender-neutral names. Table 3.3 lists several names from diverse cultures. For all characters mentioned in the simulation, use a mixture of male and female names, and a mixture of ethnic names, to show diversity in the workforce in the exercises. Be especially careful not to pair negative behaviors with names stereotypically associated with a particular gender or ethnicity.
TABLE 3.2
Gender-Neutral Names

Pat, Jamie, Dale, Corey, Chris, Jordan, Lee, Devon, Adrian, Gayle, Casey, Kelly, Lane, Cassidy, Alex, Jean, Terry, Sandy, Avery, Jessie, Dylan, Elaine, Bailey, Rory, Blake, Austin, Cameron, Shannon, Dana, Sal, Ricky, Taylor, Sidney, Skylar, Max, Sam, Robin, August, Wynn, Initials (e.g., J.T.)
TABLE 3.3
Ethnically Diverse Names

Maria, Dalia, Hector, Ealat, Juan, Mujani, Ihsam, Meseret, Jaewon, Masuma, Awad, Mohammed, Chen, Nishant, Tokunda, Wisam, Lin, Nambury, Ahmed, Varda, Hsu, Tasha, Mustafa, Shaul, Zulang, Naama, Nasser, Marcus, Noemi, Fumio, Shahada, Syed, Taslin, Vera, Aisha, Kamara, Ayman, Taha, Golnaz, Dakar
The seriousness of even inadvertent gender bias is illustrated in Fatal Error 3.

Why Simulation Exercises Fail
Fatal Error 3: Alleged Gender Bias

Allegations that the exercise endorsed gender bias were raised because "problem employees" in the fact-finding exercise were women who were reporting late for work. Even though the exercise showed men demonstrating other work-related problems in other situations, the exercise was roundly criticized.

Union Activity

Consider whether or not to use incidents that involve union activity. This may be highly appropriate for some organizations, but totally unacceptable in some non-union environments where management wants no mention of unions. Scenarios involving unions can be an excellent way to assess how human resource personnel and managers handle labor relations issues. However, before writing such a scenario, it is wise to gain a better understanding of the organizational culture and management attitudes toward unions.
Pick a Neutral Setting

To construct a realistic exercise that provides a concrete setting in which problems can be embedded, the developer must describe a specific job and organization in a specific industrial sector. The participants must be able to
identify with the setting and understand the basic materials being presented. Because participants often come from diverse backgrounds and may not have experience with the target job itself, it is often advantageous to place the simulation in a neutral setting. A neutral setting is one that is different from the target job and one with which all participants can identify. Thus, it is often advisable to pick a job, organization, and industry with which all participants are familiar. For example, one colleague who developed exercises for the training and development of college business majors used a sports store and a hotel for the settings of the simulations. These settings were familiar to virtually all the participants. However, the scenarios were general enough that individuals who did have work experience in these industries did not have any special advantage. The developer should avoid settings that have highly specific cultures (e.g., a military squadron) unless the simulation will be used for assessment, training, or development in that setting. Also consider the degree of desired fidelity (discussed earlier in this chapter). A setting that is more similar to the actual organization provides greater fidelity, which may or may not be desirable, depending on the purpose of the assessment.
4 Determining What to Assess: Identifying Dimensions, Exercise Content, and Performance Standards
An essential step in the process of developing a simulation exercise is a careful analysis of the job, organization, and setting that will be simulated. For convenience, we use the shorthand term situation analysis to refer to this step, but the reader will soon see that we mean much more than analyzing information about any one aspect of the job or organization. This step is designed to gather the following types of information:

• The industry in which the entire exercise will be set, for example, a public agency (e.g., city, state, federal, or international), a private company (e.g., manufacturing, financial, or service company), or nonprofit organization (e.g., foundation, United Way, youth center).
• The tasks or responsibilities that persons in the job carry out, for example, writing reports, discussing performance issues with subordinates, taking orders from clients.
• Changes in the job duties in the future to meet changing organizational objectives.
• The types of problems or opportunities that job incumbents face, for example, production delays, poor performance of subordinates, complaints from irate customers.
• Examples of effective and ineffective behaviors in carrying out the job tasks and dealing with problems encountered on the job.
• The attributes to be assessed (e.g., leadership, oral communication, decision making), their relative importance, and how they will be combined (equal or differential weights), if an overall score is to be derived.
• The current and expected levels of proficiency on these attributes.

This information is used for several steps in the process of developing, implementing, and evaluating a simulation exercise. Information about the industry, job, tasks and responsibilities, and problems encountered on the job is used initially in writing the stimulus materials in the exercise. This information enables the simulation developer to write realistic scenarios and present appropriate challenges to the assessee. Information about the attributes needed for performance effectiveness on the job allows the developer to specify the dimensions to be evaluated by the assessors in the exercise. Examples of attributes assessed by simulation exercises are included in Appendix A.

We use the term dimension to refer to a wide variety of characteristics evaluated in simulation exercises. As defined in Chapter 1, a dimension is a cluster of behaviors that can be reliably classified together. As the list in Appendix A reveals, dimensions can include clusters of behaviors that are cognitive abilities (e.g., problem-solving ability), skills (e.g., delegation), motivations (e.g., initiative), or interpersonal tendencies (e.g., teamwork).

A term closely related to dimension is competency. Unfortunately, competency is used in a wide variety of different ways, and one must carefully attend to what the speaker or writer means (Byham, 1996). Organizational competencies refer to strengths of an organization that distinguish it from other organizations, for example, Sony's techniques of miniaturization. Personal competencies refer to underlying personality traits, for example, ego strength. Job/role competencies refer to descriptive labels for behaviors related to job success, and thus this use of the term closely matches the way we use the term in this book.

Examples of effective and ineffective behavior are used for several purposes. They can be used to define, clarify, and operationalize performance dimensions that can sometimes be vague and interpreted in different ways. Behavioral examples help in training assessors what a given organization means by a dimension such as "customer service orientation." Behaviorally anchored rating scales can be constructed to improve inter-rater agreement among assessors. In addition, behavioral examples can be used in feedback to participants to clarify what the organization means by effective performance.
Why Simulation Exercises Fail
Fatal Error 4: Vaguely Defined Dimensions

A large state agency developed a set of simulation exercises to identify training needs among middle-level managers and administrators who were slated to move into top echelons in the organization in the next few years. Vague and undifferentiated definitions of leadership, communication skills, interpersonal effectiveness, planning and organizing, and decision making led to confusion among assessors and extreme frustration with the process. Several influential administrators serving as assessors rebelled and refused to participate in subsequent assessment efforts. The program was dropped because the staff and participants did not think it contributed to the differential diagnosis of training needs.

Information about changing duties and performance expectations coming from competency modeling discussions ensures that the assessment process is not out-of-date, but rather oriented to future organizational objectives. Finally, when the organization documents all of the information gathered in these processes, it has the basis for establishing the content relatedness of the simulation exercise. Such information is one element in the bundle of evidence sometimes called content validity, and can help establish the job-relatedness and validity of simulation exercises used for personnel selection.
Job Analysis and Competency Modeling

Two different yet complementary approaches to gathering information have been taken in the past, and each has strengths and weaknesses: job analysis and competency modeling. Job analysis is a set of processes for identifying tasks, responsibilities, and requirements (i.e., knowledge, skills, abilities, and other characteristics) for jobs as they are currently being carried out (Brannick & Levine, 2002; Harvey, 1991; McCormick, 1979). Job analysis tends to be an inductive approach, examining the specific activities of persons currently performing the job and making inferences about what is needed for effective performance in the future. The strengths of job analysis methods are that they tend to be quite objective, they have been used extensively in the past, they have been researched quite thoroughly, they have a strong track record of reliability and validity, and they are recognized as defensible in employment discrimination legal decisions. The weaknesses of traditional job analysis methods are that they tend to be rather conservative, they tend to focus on the way things have been done in the past and assume jobs are static, to some people they have connotations of being "dry"
and "out of date," they tend to focus on tasks specific to a job rather than more general qualities considered important throughout the organization, and the terminology and results may not be in the language of managers and executives. Competency modeling is a set of procedures for examining how a job fits into the overall organization objectives (Schippman, Prien, & Katz, 2000). Competency modeling often involves discussions with higher-level executives or other thought-leaders in the organization who are asked to specify what they.expect from people in various jobs in the organizations in the future. This process is similar to strategic job analysis, because it is focused on the organizations strategic goals for the future (Schneider & Konz, 1989). Competency modeling can be viewed as a top-down, deductive process starting with organizational objectives, then moving to departmental, unit, and job objectives. The strengths of competency modeling are that job activities are clearly linked to organizational objectives, job requirements are stated in terms that executives understand, expectations for performance can be raised by setting new goals, the results apply across a variety of jobs, and the findings tend to be better accepted by organizational members. The weaknesses of competency modeling are that the methods are relatively subjective, the methods have not been evaluated for reliability and validity, and the results may reflect nai've expectations from managers who are not close enough to the work settings to make realistic assessments of what can be done on the job.
Need for Multiple Methods

No one method of gathering the diverse sets of information needed for the development, implementation, and evaluation of simulations will suffice. An exercise developer will probably want to use multiple methods to identify the information needed to design an effective simulation exercise. The need for multiple methods, and a more detailed description of many methods, is covered in several sources (e.g., Brannick & Levine, 2002; Gael, 1983; Gael, Cornelius, Levine, & Salvendy, 1988). The following techniques often prove useful at this stage.
Read Existing Literature About the Organization, Job, and Situation

Such information might include company brochures, job descriptions, training manuals, organization mission statements, information about the company on the Web, house organs, orientation manuals, and so on. To design a simulation of the IT division in a bank, we found extensive information about a number of large national and international banks on the Web
and drew up a fictitious yet realistic bank as the shell of the exercise simulating a sales call to the IT director.
Observe Incumbents Performing the Job

These observations can range from cursory walk-throughs of the plant or office setting, to the typical "ride along" with police officers, up to the more systematic shadowing of a manager for several hours. Observing the production operations of a brewery led us to design a simulation of one aspect of the job of general brewery worker. The "Power Transfer Unit" is a metal board with wheels, pulleys, belts, and chains. Assessees have to figure out why one turn on the input crank does not result in the prescribed output of two revolutions on the output wheel, and then make certain changes and repairs. Observations of performance in this exercise allow the staff to assess mechanical aptitudes and skills relevant to the assembly line operation in the brewery.
Interview Job Incumbents, Supervisors, and Trainers

Semi-structured job analysis interviews are designed to inquire about the tasks, responsibilities, difficulties, interpersonal conflicts, and opportunities for improvement on the job. The analyst can also request supportive materials, such as an organization chart, policies and procedures manuals, training booklets, and so forth, all of which can provide rich sources of information for writing the case materials. An effective part of the interview process can be the critical incident technique (Flanagan, 1954), which involves asking SMEs to describe critical situations that incumbents have faced on the job. The critical incident should include a description of a problem or situation the incumbent faced, the action or behavior that was taken, the reaction of others, and the outcome of the behavior. Informants can be asked to describe both positive and negative incidents, that is, incidents that illustrate effective and ineffective performance. These descriptions provide a rich mine of diverse information, including situations, content for exercises, dimensions to be assessed, and behavior illustrating effective and ineffective performance. Critical incidents gathered by the authors in the study of a customer service job were used to enhance feedback after a training program involving simulations. Trainers used behaviors cited in the positive critical incidents to provide examples of alternative positive behaviors after practice rounds with the simulations.
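Because interviews can generate dozens of incidents, it helps to record them in a consistent structure for later sorting. The sketch below is merely one Python illustration of how Flanagan's elements might be captured electronically; the field names and helper function are our own invention, not part of any standard tool.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalIncident:
    """One SME-reported incident, following Flanagan's (1954) elements."""
    situation: str            # the problem or situation the incumbent faced
    action: str               # what the incumbent said or did
    reaction: str             # how others responded
    outcome: str              # the result of the behavior
    effective: bool           # True if it illustrates effective performance
    dimensions: list = field(default_factory=list)  # tentative dimension tags

def incidents_for_dimension(incidents, dimension):
    """Collect every incident tagged with a dimension, e.g., to mine
    exercise content or behavioral anchors for that dimension."""
    return [inc for inc in incidents if dimension in inc.dimensions]
```

For example, all incidents tagged "interpersonal sensitivity" could be pulled together when drafting anchors for that dimension.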
Administer Questionnaires to Incumbents and Supervisors

Questionnaires provide quantitative data about the importance of tasks, responsibilities, and attributes to be assessed. Conducting interviews and observing incumbents on the job is almost always limited to a small sample of incumbents and supervisors. Questionnaires can be distributed relatively cheaply and quickly to a larger sample of SMEs. The increase in sample size usually provides confirmation of the initial findings, helps to minimize individual biases, and involves a broader range of participants throughout the organization. Questionnaires can also be distributed to SMEs in remote locations not easily accessible for observations and personal interviews. Such information is critical for establishing the content representativeness of situational exercises. Content representativeness is one type of validity information (American Educational Research Association et al., 1999). Any individual simulation exercise need not cover all aspects of the job, or even a major portion of job activities. But the questionnaire data can establish that the exercise covers essential, critical, or highly important performance domains on the job (Guion, 1998; Lawshe, 1975).
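One way to summarize such questionnaire data is Lawshe's (1975) content validity ratio, which indexes the extent to which SMEs agree that a task or attribute is essential to the job. The sketch below is a minimal Python illustration; the function name and the example numbers are ours, not from any published program.

```python
def content_validity_ratio(n_essential: int, n_raters: int) -> float:
    """Lawshe's (1975) CVR: ranges from -1.0 to +1.0; values above 0
    mean more than half of the SMEs rated the item 'essential.'"""
    half = n_raters / 2
    return (n_essential - half) / half

# Example: 9 of 12 SMEs rate "analyzes budget variances" as essential.
print(content_validity_ratio(9, 12))  # prints 0.5
```

Items with high ratios are good candidates for the performance domains a simulation should sample.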
Conduct Focus Group Discussions

Focus group discussions (Krueger, 1988) can be conducted with executives, managers, or other key personnel who are familiar with both the job and broader organizational objectives. These individuals can be asked to state organizational objectives and clarify how the job in question can be conducted to foster these objectives. Some organizations have "organizational competencies" they are trying to promote. The focus group can discuss how these organizational competencies can be manifested in the job being studied. The authors helped a small publishing organization develop simulations to assess the willingness and ability of sales personnel to assertively sell products and services over the telephone while at the same time maintaining the traditional customer service orientation valued by the organization. Discussions were held among key executives about these new expectations and what "assertive selling" behaviors would be expected in the future. One of the outputs of this type of discussion is clarification of the expectations that managers have for the level of performance in the future. This information can be used in the development of performance standards used by assessors in evaluating assessees in the simulation exercises. This topic is discussed in a later section of the book where we describe methods of developing scoring standards.
Avoiding Pitfalls in Job Analysis

There are many potential sources of inaccuracy in the job analysis process. Morgeson and Campion (1997) provided an excellent description of 16 potential social and cognitive sources of inaccuracy and their effects on job analysis data. For example, social pressures within the group of job analysts may lead to artificially high agreement among analysts or inappropriately high or low ratings on job requirements. An example of a cognitive source of error would be a situation where the extensive amount of job analysis information overwhelms analysts so that they do not distinguish appropriately among the job characteristics being evaluated. The authors make four general recommendations that may help developers of simulation exercises:

• obtain job analysis information from multiple sources;
• use a variety of methods to gather job analysis information;
• make the purpose and importance of the job analysis project clear to the subject matter experts and make the information gathering techniques clear and understandable; and
• closely supervise the data collection, and monitor the accuracy of the information (a simple screening aid is sketched below).
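As one hedged illustration of the last recommendation, ratings that are suspiciously uniform across analysts, or that pile up at the top of the scale, can signal the social pressures Morgeson and Campion describe. The thresholds and names below are arbitrary choices of ours, meant only to show the idea.

```python
from statistics import mean, pstdev

def flag_suspect_items(ratings_by_item, min_sd=0.25, scale_max=5):
    """Flag job-analysis items whose analyst ratings show near-zero
    variance (possible conformity pressure) or cluster at the scale
    ceiling (possible inflation).
    ratings_by_item: dict mapping item name -> list of analyst ratings."""
    flags = {}
    for item, ratings in ratings_by_item.items():
        reasons = []
        if pstdev(ratings) < min_sd:
            reasons.append("near-identical ratings across analysts")
        if mean(ratings) > 0.9 * scale_max:
            reasons.append("ratings clustered at the scale ceiling")
        if reasons:
            flags[item] = reasons
    return flags

# Example: three analysts rated two items on a 1-5 importance scale.
print(flag_suspect_items({"writes reports": [5, 5, 5],
                          "schedules staff": [2, 4, 3]}))
```

Flagged items are not necessarily wrong; they simply merit a follow-up conversation with the analysts.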
Summary

A wide variety of information must be gathered in the early stages of building a simulation exercise. A job analysis may be sufficient to identify the attributes for a set of cognitive tests, a task analysis may be sufficient to build a training program, and competency modeling may be adequate to clarify organizational objectives prior to a team-building program, but no one of these methods is likely to give the information necessary to build a simulation exercise. In addition, no one source of information will be adequate. The simulation developer will probably obtain valuable information by reading written documents; interviewing job incumbents, supervisors, and executives; observing incumbents; and administering questionnaires to a diverse sample of persons in the organization. Armed with the wealth of information gathered about the job, organization, and setting, the simulation designer will be ready to undertake the creative and enjoyable task of writing the materials for one or more specific simulation exercises. If the various situation analysis methods have been carried out fully, the developer will not have to return to the worksite to gather more information. In the next section of this book, we discuss how the information is used to build individual and group exercises to assess a wide range of performance dimensions.
II
Specific Simulation Exercises

There are several different types of simulation exercises that can be used to elicit behavior relevant to dimensions of performance effectiveness. This section contains seven chapters that describe techniques for building these different types of simulation exercises. Each chapter provides a description of the exercise, a brief background of the exercise with key historical references, steps in developing the exercise, discussion of special issues related to that exercise, and examples of specific substantive material. The final chapter in this section describes one form of a day-in-the-life arrangement of simulations wherein several activities are combined into one integrated simulation. These chapters focus on the construction phase of simulation development and the steps involved in preparing the materials for participants, assessors, and role players. The steps preceding and following the construction phase are discussed to the extent that special considerations are applicable to that particular simulation or the purpose of the application featured in that chapter. As a framework for the ensuing chapters, the following list shows the exercise, the dimensions featured, and the purpose illustrated in each chapter. Each type of exercise can be used to assess many different dimensions and to accomplish many different purposes. The sampling depicted here and described in the following chapters is meant to ensure some diversity in our coverage.
Chapter  Exercise                                      Dimensions Featured                          Purpose Featured in the Example
5        Case Study                                    Problem Analysis; Decision Analysis          Selection
6        Oral Presentation                             Oral Presentation; Persuasiveness            Promotion
7        Leaderless Group Discussion                   Group Leadership; Interpersonal Sensitivity  Selection
8        One-on-one Interactions: Role Play Exercises  Individual Leadership; Conflict Management
9        In-Basket                                     Planning and Organizing; Delegation          Promotion
10       Oral Fact Finding                             Oral Problem Analysis; Listening Skill       Training
11       Business Game                                 Teamwork; Planning and Organizing            Organizational Development
12       Day-in-the-life                                                                            Diagnosis
5 Case Study: Analysis Problem
A case study is an individual, written exercise that calls for the participant to read a complex set of materials and prepare a written report. This simulation exercise is also known as an analysis problem exercise because it places a premium on the ability to understand a wide variety of written information (including quantitative and qualitative data), note patterns in the data, identify the important problems embedded in a complex situation, and recommend solutions. The materials in a case study exercise typically include text that describes a complex situation in an organization, background information about the organization and situation, preliminary data relevant to the problem (often collected and assembled by others), financial information such as budgets and expense reports, and other information that may or may not be useful to the participant. Instructions for the exercise often present a set of questions that someone in the organization wishes to have answered, issues to be addressed, or the type of recommendations the participant is expected to make. Often case studies provide much more information than is really necessary in order to see if the participant can sift through and identify what is relevant to the real problem. The participant is typically asked to study the information contained in the text and in the quantitative data, analyze the situation, list and evaluate potential action plans, and make recommendations. The participant is usually asked to prepare a written report for a higher-level manager or executive committee. The participant may be asked to use the results of the analysis in additional ways. For example, participants may need to:
• make an oral presentation of the analysis to other parties. For example, the participant may be asked to present the analysis to one assessor who acts as a senior manager or to two or more assessors who act as an executive committee. When a verbal presentation is included in the case study, the simulation resembles a presentation exercise described in Chapter 6.
• interact with an assessor in the role of manager who asks questions, challenges the participant, and requests a defense of recommendations. This interaction is discussed more thoroughly in Chapter 8, which covers simulations of one-on-one interactions.
• engage in a group discussion with other participants. Thus, the results from the case study by individual participants can be used as the starting point for a group discussion exercise described in Chapter 7.

Several features distinguish the case study exercise from similar exercises. In comparison with other types of simulation exercises, a case study contains more extensive information, usually including quantitative data. Whereas the information presented in a group discussion exercise may be one or two pages long, the case study materials often occupy six to eight pages. In contrast to the in-basket exercise, which may also encompass several pages of materials involving several different problems and call for brief responses to several different persons, the case study usually is focused on one problem and calls for a formal report to be submitted to a higher authority in the organization. Whereas both an in-basket and a case study require the participant to write responses (and thus allow for the assessment of written communication ability), a case study report is usually expected to be better organized and more formal than the writing in an in-basket.
Time Requirements

Introduction and instructions: 5 to 10 minutes
Preparation and report writing: 45 to 90 minutes
Follow-up: Additional time for presentation, interaction with manager, or discussion with group.

BACKGROUND
The case study method has been used in educational, training, and assessment situations for many years. The method has been an integral part of management and legal education for decades (Pigors, 1976). In fact, the method has been so common in business schools that it is sometimes referred to as the "Harvard Method." In educational settings, the method is
used to help students learn for themselves by reading about real-life situations. Students are expected to extract the basic principles of business operations from specific examples of decision makers in action. As an assessment technique, the case study provides a setting in which observers see whether participants can demonstrate the skills of analyzing complex situations to accurately identify problems and make logical decisions.

STEPS IN DEVELOPING A CASE STUDY EXERCISE
Step 1. State the Purpose of the Simulation and Determine the Resources Available

For purposes of presentation in this chapter, we discuss the development of a case study exercise for the selection of college graduate recruits going into management trainee positions. In this application, the organization saw an opportunity to improve its college recruiting by developing a simulation exercise that could be used for selection into a wide variety of entry-level management positions that were filled by college graduates with various backgrounds in business, engineering, and science. The developers made the preliminary determination that they wanted to assess a variety of attributes relevant to the general problem-solving abilities of the candidates. During recruiting visits to the organization, job candidates would be asked to spend a couple of hours participating in the simulation assessment technique. Candidates could be expected to be familiar with the case study method from university courses, so they would not be surprised by the format. The developers had the time and the support of other staff members to gather the information needed to develop the case study. Managers in the organization were committed to obtaining better information about the analytic skills of candidates, so they were willing to go through assessor training and to evaluate the reports produced by candidates.
Step 2. Conduct Situation Analyses

In this application, the developers interviewed key managers in several departments throughout the organization to understand the initial assignments of recent college graduates hired as management trainees. Task analyses showed that new hires in various junior management positions were often asked to conduct studies of a unit's operations and propose improvements. These assignments often took the form of project work in which the trainee investigated a difficult situation in the organization, proposed alternative courses of action, and prepared a report for management. This type of assignment took place in a variety of functional areas.
Step 3. Specify the Dimensions and Difficulty Level

Assuming that the participant prepares and submits a written report, the written product can be read and evaluated on several dimensions relevant to many jobs in production, information technology, administration, and marketing. The case study exercise provides opportunities to observe behaviors relevant to the following dimensions: problem analysis, decision analysis, written communication skill, and others. In the example in this chapter, we focus on two aspects of problem solving: problem analysis and decision analysis. Problem analysis behaviors include understanding the facts and other information in the case, seeing the relationships among various bits of information, seeking out new information to help understand the situation, recognizing that a problem exists that needs some solution or that an opportunity exists for improving the situation, identifying the causes of problems, and picking the most likely cause that needs attention in the problem-solving phase. Decision analysis behaviors include listing a set of alternative solutions, assessing costs and benefits of each alternative, evaluating the balance of benefits versus costs, making a recommendation of which choice to make, and then anticipating any new problems that might ensue if the recommendation is followed. In addition to the problem-solving skills of problem analysis and decision analysis, other dimensions can be assessed by well-developed case studies, for example, attention to detail, awareness of internal and external business considerations, and written communication skill. If the participant is asked to verbally present the recommendations to an assessor who follows up with questions, other dimensions can be assessed, such as oral presentation skill, stress tolerance, and interpersonal sensitivity. If the participant is asked to follow the analysis with a discussion of the findings with a group of other participants, still other dimensions can be assessed (e.g., flexibility, leadership, and teamwork). A simple sketch of how such behavior lists can feed into scoring appears below.
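Because each dimension is defined by a list of observable behaviors, one rough scoring aid is a behavioral checklist: the assessor checks off each behavior observed, and the proportion checked gives a preliminary index for the dimension. The Python sketch below is only an illustration under our own simplifying assumptions (short labels, one assessor); real checklists would use the case-specific behaviors identified in the job analysis.

```python
# Hypothetical checklist distilled from the problem analysis definition.
PROBLEM_ANALYSIS_CHECKLIST = [
    "understood facts in the case",
    "related separate bits of information",
    "sought out new information",
    "recognized a problem or opportunity",
    "identified causes of the problem",
    "picked the most likely cause",
]

def checklist_coverage(observed_behaviors, checklist):
    """Proportion of checklist behaviors the assessor checked off;
    a preliminary index, not a substitute for anchored ratings."""
    observed = set(observed_behaviors)
    return sum(1 for b in checklist if b in observed) / len(checklist)

coverage = checklist_coverage(
    ["understood facts in the case", "identified causes of the problem"],
    PROBLEM_ANALYSIS_CHECKLIST,
)
print(round(coverage, 2))  # 0.33
```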
Step 4. Specify Features of the Simulation

In our example, the new management trainees may go into a variety of functional areas, and thus the materials must be generic. That is, the case study should be complex enough to allow persons with a variety of university backgrounds, for example, finance, general management, production, and engineering, to complete the exercise successfully. Because the organization wished to hire management trainees who were very bright, highly capable, and highly motivated to achieve in management, it was decided to make the exercise quite challenging. Because the organization policy was to hire management trainees who had the potential to move up into middle management positions, and because the trainees were given complicated
tasks soon after placement, the developers decided to build a fairly complicated and challenging case study exercise. They decided to construct an exercise with a variety of types of information, including production, sales, and financial data, and to present more information than most candidates could handle in the specified time. It was determined that no more than 2 hours of the recruits' time during the company visit could be devoted to this activity.
Step 5. Prepare the Participants' Materials

The example of the case study for college recruits demonstrates the richness of information that can be included in a case study. The basic problem in this example is that a division of a small corporation has lost money in the past 2 years and must decide whether to close this division or expand the operation to produce additional products. The participant is asked to serve as a consultant and make recommendations. The case materials include:

• a history of the company and its founders;
• financial figures related to assets and liabilities;
• the products and their cost of production;
• wholesale and retail revenue;
• information about the manufacturing facilities and processes;
• production and sales volume for the past few years;
• preliminary information about the possible plant expansion gathered by the founders and their staff and presented in various reports;
• current and anticipated interest rates for loans, along with the existing debt structure of the company;
• conflicting points of view from the founders and their key staff on whether or not to expand and how to fund the possible expansion.

The participant is asked to analyze the materials and prepare a written report that addresses several issues:

• What should be done about the lost revenue?
• Should the plant expansion be built?
• What new product mix should be pursued?
• How should the expansion be financed?
Other Content

A variety of problems can be portrayed in a case study, and the content should be matched to the profession or occupation of the target job. For example, the content may present problems in finance, marketing, production, human resource management, or information technology. If the participants come
from heterogeneous backgrounds, the content of the case study might include some topic that all participants can identify with and make contributions to. For example, the case may involve safety issues or customer service complaints about a variety of product or service problems. Some of the specific topics that might be built into case studies for selected jobs are shown in Table 5.1. This table shows that case studies can be built for jobs at various levels in an organization, ranging from nonmanagerial and supervisory positions to mid-level management and executive positions. Appendix D contains an example of a case study developed for a job requiring financial analysis skills. It is a moderately complicated situation in which the participant is asked to give advice about purchasing a business.

TABLE 5.1
Some Possible Content for Case Studies in Various Jobs

Executive/General manager: Advice on whether a successful parts manufacturer should seek a wider international market for its products.
Director of transportation: Analysis of a regional transportation plan for a state.
Marketing manager: A marketing plan to sell a product or service to a wider customer base.
IT consultant: A recommendation on which of several potential software systems to develop to market to different lines of business, for example, health care or retail sales.
Maintenance supervisor: Scheduling craft workers with various skills to work various shifts in various departments.
Production analyst: An analysis of a flow of operations in filing insurance claims.
Financial analyst: A cost/benefit analysis to make or buy a product needed by the organization.
Sales: Recommendations for displaying various household goods in a department store based on past sales figures.

Current Organizational Problems

An alternative to writing a generic case containing largely fictitious information is to make the content of a case study an actual problem or situation that the organization is currently dealing with. The department or unit in which the target job exists may have a problem that managers are currently attempting to solve. A description of the problem can be presented to the participants, along with relevant information that has already been collected by managers or a committee working on the problem. The advantage of presenting such a problem is that it will have high face validity, and ideas from the analysis may even contribute to real organizational effectiveness. A potential disadvantage of presenting "real life" problems in a selection context is that some participants may have the advantage of prior exposure to the issues and information involved, whereas other participants would have to spend more time getting acquainted with the background data. Another disadvantage of using a "real life" problem is that the content of the case would have to be written anew for each administration of the exercise. That may make the exercise unstandardized and may even favor some class of participants. The developer will want to avoid any topics that are too controversial or inappropriate for a given setting. For example, we had to eliminate reference to sexually inappropriate behavior in an exercise designed for a fundamentalist religious organization because the administrators did not even want participants to be exposed to such issues. The following box describes another example where an exercise failed because it raised a taboo subject.

Why Simulation Exercises Fail
Fatal Error 5: Inappropriate Content

A case study included a description of an employment-testing program that was totally unacceptable to the CEO. The testing program was central to a number of issues raised throughout the case and the case could not easily be revised to exclude that element. The exercise had to be dropped. The development team started over with another setting and set of problems for the case study.
Step 6. Prepare Support Materials for Administrators, Assessors, Resource Persons, and Role Players

An integral part of the development of a case study is the preparation of a set of supporting materials for the assessors. When the case study is being developed, the developer should prepare initial guidelines for the assessor to use when reviewing the written report and scoring the assessment dimensions. These preliminary guidelines will almost certainly be revised during the processes of training assessors (chap. 13) and pilot testing the exercise. The support materials for the assessor should include, at the very minimum, all the materials given to the participant. For the case study for college recruits, the following additional support materials were provided:

• A review of the important points written into each section of the case materials.
• A statement of central issues the organization is facing.
• A number of general areas for the assessors to consider.
• Expectations for the key points that the participant should emphasize.
• A sample of the proper "correct answers" the participant should give in identifying problems and making recommendations. For example, in the financial analysis section of the report, the following major problems should be dealt with: pricing, cost of expansion, income from different product mixes, and cost of loan financing. For each area, the relevant case details were shown so the assessor could match the participants' responses to the expected analyses.

In addition, the support materials should include examples of behaviors that illustrate effective and ineffective performance on the dimensions being assessed in the case analysis. These support materials can be in the form of a checklist of behaviors illustrating each dimension or behaviorally anchored rating scales for each dimension. Examples of behavioral anchors for the dimensions of problem analysis and decision analysis are shown in Table 5.2. The reader must recognize that these behavioral examples are somewhat general and do not refer to the content of any specific case. In a real application, the behavioral anchors would make reference to specific information in the case study. Additionally, the exercise developer will want to involve subject matter experts in the determination of behaviors considered effective and ineffective in the organization.

TABLE 5.2
Behaviorally Anchored Rating Scales for a Case Study

Problem Analysis
5  Highly effective: Made connections among disparate data; documented the "real" problem with substantial evidence; carried out unique and informative analyses of trends in the data.
4
3  On target: Stated key problems in the exercise; used data from the survey; performed adequate calculations of ROI.
2
1  Below expectations: Made unfounded assumptions about the situation; failed to use data provided in the task force report; failed to articulate the "root problem" in the organization.

Decision Analysis
5  Highly effective: Benefit/cost analyses accompanied each recommendation; priorities for solutions were stated and documented.
4
3  On target: Made three specific recommendations and gave reasons; stated how each solution addresses an identified problem.
2
1  Below expectations: Recommendations were not supported by facts; negative ramifications of recommendations were not stated.
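Once assessors have rated a participant on such scales, the ratings from multiple assessors must be reconciled. The sketch below shows one plausible pooling scheme, with our own arbitrary rule that any dimension on which assessors differ by more than one scale point is held out for discussion; nothing here is prescribed by the BARS method itself.

```python
from statistics import mean

def pool_dimension_ratings(ratings, max_spread=1):
    """Average 1-5 BARS ratings per dimension across assessors and
    flag dimensions with large disagreement for an integration
    discussion. ratings: dict of dimension -> list of assessor ratings."""
    pooled = {}
    for dimension, values in ratings.items():
        pooled[dimension] = {
            "mean": round(mean(values), 2),
            "discuss": max(values) - min(values) > max_spread,
        }
    return pooled

# Hypothetical ratings from three assessors:
print(pool_dimension_ratings({
    "Problem Analysis": [4, 4, 3],
    "Decision Analysis": [2, 4, 3],  # spread of 2 -> flagged
}))
```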
Step 7. Train Assessors

Assessors must be trained in all the basic skills described in Chapters 2 and 13. Additionally, for the case study method, the key to this process is providing assessors with guidance on the important issues that are embedded in the case. The assessors will examine the written report to see if it contains the "correct" identification of the actual problems being experienced in the organization.
Step 8. Administer to a Pilot Group

The tryout with a pilot group is designed to provide valuable information in the development of this type of exercise. Participants can be asked about:

• the clarity of the instructions,
• the adequacy of the information to conduct thorough analyses and make logical recommendations regarding costs and benefits,
• whether the amount of time is enough to allow participants to conduct analyses and write a report,
• suggestions for other information to be included in the background information.

Armed with this information, the developer can make revisions in the instructions and other materials. The tryout also allows the assessors an opportunity to practice with the supporting materials. The assessors can then tell the developer whether the support materials adequately guided them through the observation and evaluation of performance in the exercise. They can indicate whether the scoring standards were adequate, as well as suggest additional examples of effective and ineffective behavioral anchors for the rating scales. The pilot participants may display unexpected, yet quite effective, steps to analyze the information. These actions allow the developers to revise the support materials.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

Once the case study is revised, it can be used for screening purposes. The case study developed for the selection of college recruits was one step in a sequence of screening activities. Screening started with campus recruiting and interviewing. Resumes were reviewed to ensure that the candidates had the appropriate education and experience. During the company visit, candidates went through a series of interviews with human resource staff, unit managers, and representatives of the technical area where the candidate might be assigned. All candidates also completed the case analysis, which was then reviewed by a manager in the hiring unit. After the manager evaluated the write-up of the case study, he or she met with HR to make a hiring decision.
Step 10. Solicit Participant Reactions

Pilot participants can be asked a series of questions about the case study exercise. This information can be gathered anonymously so that the participant does not simply give socially desirable responses in order to look good to the developers. Participants can be asked to provide ratings for a series of statements such as the following (Kudisch, Lundquist, & Smith, 2001):

• "I clearly see the relationship between this tool and what is required on the job." (measures face validity)
• "I enjoyed this exercise." (measures affect)
• "This instrument accurately identified my skills, abilities, and so on." (measures accuracy)
• "It would be easy for a person to falsify his or her responses on this exercise." (measures fakability)
• "I found this exercise offensive." (measures invasiveness)
• "I felt stressed while completing this exercise." (measures stress)

A more extensive list of post-exercise questions is provided in Appendix C.
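When such reaction items are administered to a pilot group, a simple facet-by-facet summary is usually all that is needed. The sketch below assumes a hypothetical item numbering and a 1-5 agreement scale; the mapping of items to facets is ours, for illustration only.

```python
from statistics import mean

# Hypothetical mapping of post-exercise items to reaction facets.
FACET_ITEMS = {
    "face_validity": ["item_1"],
    "affect": ["item_2"],
    "accuracy": ["item_3"],
    "fakability": ["item_4"],
    "invasiveness": ["item_5"],
    "stress": ["item_6"],
}

def facet_means(responses):
    """responses: one dict per anonymous participant mapping
    item id -> rating on a 1-5 agreement scale."""
    return {
        facet: round(mean(r[item] for r in responses for item in items), 2)
        for facet, items in FACET_ITEMS.items()
    }

pilot = [{"item_1": 4, "item_2": 5, "item_3": 4, "item_4": 2,
          "item_5": 1, "item_6": 3},
         {"item_1": 3, "item_2": 4, "item_3": 3, "item_4": 3,
          "item_5": 1, "item_6": 4}]
print(facet_means(pilot))  # e.g., {'face_validity': 3.5, ...}
```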
Step 11. Conduct Psychometric Evaluation

An evaluation of the case study should examine whether the content of the exercise matches the level and complexity of the problems faced by incumbents on the job, and whether the assessors can accurately rate the effectiveness of participants in identifying and solving the critical problems that are presented in the situation. Chapter 14 provides suggestions for how to establish the reliability and validity of simulation exercises. The reader desiring more information on the process of writing cases and using the case study method in teaching can consult other books (e.g., Leenders & Erskine, 1989; Robson, 1993; Yin, 1989) and web sites (e.g., http://gise.org/case_writing.htm, http://nacra.net/).
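As a small, hypothetical taste of the reliability analyses Chapter 14 describes, agreement between two assessors rating the same reports can be indexed with a correlation. The sketch below is a bare-bones Pearson correlation with made-up ratings; a full evaluation would use more participants and more refined indices, such as intraclass correlations.

```python
def pearson_r(x, y):
    """Pearson correlation between two assessors' ratings of the
    same participants; one simple index of interrater reliability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical problem analysis ratings for six pilot participants:
assessor_a = [4, 3, 5, 2, 4, 3]
assessor_b = [4, 2, 5, 3, 4, 3]
print(round(pearson_r(assessor_a, assessor_b), 2))  # about 0.82
```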
6 Oral Presentation Exercises
In an oral presentation exercise the participant is asked to prepare for and deliver a formal, "stand up" speech about some subject matter to someone in authority or important to the individual or organization. The participant is first given instructions and some materials to study. After a relatively short time to prepare, the participant is asked to make a presentation for 10 to 20 minutes, depending on the demands of the job. After the presentation, the assessor may ask questions. Assessors can evaluate the presentation on dimensions such as technical knowledge, oral presentation skill, persuasiveness, and empathy for the audience, and possibly stress tolerance if the assessor wishes to challenge the participant. There are two common types of presentation exercises: self-contained and advance-preparation. In the self-contained arrangement, the instructions, preparation, presentation, and observation/evaluation take place in one location in a relatively short period of time. For example, after reporting to an assessment or training facility, the participant is asked to prepare and present in a morning or an afternoon. In the advance-preparation arrangement, the participant is given instructions several days or weeks prior to the assessment event. He or she can do the preparation whenever it is convenient prior to the assessment event. The oral presentation can be a stand-alone exercise or a part of another, larger exercise. As a stand-alone exercise, the presentation is administered as a discrete event in which the participant's behavior is evaluated by the observers. As a part of a larger simulation exercise, several formats have been followed. The oral presentation can be added to a case study exercise. After
analyzing the case materials and writing a report, the participant may be asked to make an oral presentation of the findings to someone in the role of a higher-level manager. In competitive LGDs, participants may be asked to make a presentation of their respective positions, as well as suggestions for addressing the situation at hand. Assessors can make separate assessments of the written report and oral presentation, or evaluate the combination of communication modes. In another context, an oral presentation can be added at the beginning of a leaderless group discussion. Each participant can be asked to make a presentation of his or her initial thoughts on how the problems should be handled before the group engages in the discussion. The advantage of having each participant make a brief presentation before the group discussion is that this format ensures that the more reticent individuals get their ideas expressed to the group before more assertive participants dominate the discussion. The preliminary presentations before the group interactions also provide assessors an opportunity to assess oral presentation skills. Assessors can make either separate or combined assessments of the oral presentation and discussion segments of the exercise.
Time Requirements

Introduction and instructions: 5 to 10 minutes
Preparation: Self-contained format: 0 to 60 minutes; Advance-preparation format: Unlimited time
Presentation: 10 to 20 minutes
Follow-up: 10 to 20 minutes

BACKGROUND

Giving a presentation or "making a speech" is a common activity in educational and training courses. Nearly everyone has been asked to make a presentation as a part of some program to develop communication skills. The presentation exercise as an assessment and development tool is a logical extension of this common educational practice. Another example of this type of developmental experience is the formal program of Toastmasters. Toastmasters International is a worldwide organization that arranges meetings, often accompanied by a meal, at which members give speeches to each other in order to develop their own listening and speaking skills. As a reflection of the common use of this technique, Kudisch et al. (1999) found that oral presentations were the most popular type of interpersonal simulation exercise among their survey respondents.
STEPS IN CONSTRUCTING A PRESENTATION EXERCISE
Step 1. State the Purpose of the Simulation and Determine the Resources Available

For the purpose of illustration in this chapter, we discuss the use of a presentation exercise for making promotion decisions in a police department of a large city. The organization in question wished to supplement the information it used to make promotions to the rank of captain. Captains were frequently asked to make presentations to internal and external audiences. These presentations were made to internal groups of lower-ranking officers who were considering alternative plans for new programs in the department and to larger external groups in the community. Because the topic for this presentation came directly from the organization's recent effort to introduce community-centered policing, as is described in Step 5, the exercise was relatively easy to develop. Therefore, no special resources were needed to write the content.
Step 2. Conduct Situation Analyses

Prior to the development of this simulation exercise, very thorough task analyses had been conducted in connection with the development of promotion and training programs in the department over the previous years. Detailed job descriptions had already been written. These documents showed that captains were frequently asked to make presentations to internal and external audiences. Prior to building the presentation exercise described in this illustration, the consultants conducted interviews and observations of captains in the department to gain additional insights into the types of presentations captains were required to make. Within the department, captains frequently conducted meetings with small and large groups of lieutenants, sergeants, and officers to explain new programs being implemented in the department. Additionally, captains made presentations to the chief and his staff, and to city officials, to explain operations, to make budget requests, or to summarize data gathered in response to annual audits. Outside the department, captains often made presentations to civic organizations, professional associations, neighborhood groups, and the media. In many of these presentations, the captain was required to first make a prepared statement and then answer questions from the audience. These audiences frequently asked many pointed questions that required thoughtful answers beyond the planned topics. Frequently, audience members presented opposing views and were, on occasion, even quite irate. These situations required the captain to respond in a nondefensive manner while simultaneously defending the position being presented.
Step 3. Specify the Dimensions and Difficulty Level

The analyses revealed that the captain had to make persuasive communications to groups who were reluctant to accept at face value the information presented. At times, these groups were even hostile and antagonistic. Therefore, the simulation was expected to include at least some mild confrontation for the participants. Assuming that we are talking about the stand-alone presentation exercise (and not the presentation in conjunction with another exercise), this simulation is particularly effective in assessing oral presentation skills and persuasiveness. Although these two dimensions overlap to some extent, they are distinct enough to warrant separate ratings if the target job demands these skills. Oral presentation skill is the effective communication of information and ideas to others in a formal or semi-formal setting when given time to prepare. In this organization, oral presentation skill was defined primarily as a one-way process, whereas persuasiveness was defined as a process of interacting with others to actively listen to their concerns and address these concerns in a direct, calm, logical, and nondefensive manner. In summary, persuasiveness is the ability to convince others that one's own ideas and recommendations are sound and should be adopted. This exercise also provides an opportunity to assess stress tolerance and flexibility. To assess these dimensions, the participant can be asked to respond to a series of questions and probes either during or at the conclusion of the presentation. For example, in a simulation of a press interview, the assessors can observe how the participant responds to the same questions asked repeatedly or to repeated interruptions by others.
Step 4. Specify Features of the Simulation

Before building a presentation exercise, one needs to clarify several parameters:

• The job to be described in the exercise.
• To whom the presentation is made.
• Whether the participant is given materials to read or is asked to speak extemporaneously.
• Whether the assessor(s) will interact with the participant in any way.
• If there will be interaction with the participant after the initial presentation, what level of stress will be induced.

Once these parameters have been defined, based on information gathered in the analyses, the developer can proceed to construct the presentation exercise; one way these design choices might be recorded is sketched below.
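For developers who like to document such decisions explicitly, the parameters can be captured in a small specification object. This is purely our own illustration; the field names and the example values for the police captain exercise are assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class PresentationExerciseSpec:
    """Design parameters for a presentation exercise."""
    target_job: str
    audience: str                # to whom the presentation is made
    advance_preparation: bool    # advance-preparation vs. self-contained
    materials_provided: bool     # provided materials vs. extemporaneous
    assessor_interaction: bool   # will assessors follow up at all?
    stress_level: str            # e.g., "none", "mild", "high"

captain_spec = PresentationExerciseSpec(
    target_job="Police captain",
    audience="Community members (played by assessors)",
    advance_preparation=False,
    materials_provided=True,
    assessor_interaction=True,
    stress_level="mild",
)
```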
Step 5. Prepare the Participants' Materials

In the presentation exercise developed for the promotion of captains, the exercise simulated a meeting with a group of citizens to explain policies and practices of the department. Candidates were asked to explain the department's new emphasis on community-centered policing. This program was a major thrust of the new chief, and many resources had been directed toward training all members of the department in this complex and somewhat revolutionary program. Although community policing was used successfully in other jurisdictions, strong and diverse opinions existed in this police department and in the community. One of the responsibilities of captains was to attend community orientation meetings to explain to community members the concept and the specific practices to be followed by this department. Because training on community policing had been held in the department, candidates were at least somewhat familiar with the content of what they were asked to present. At the same time, exposure to the concepts was uneven. For example, not all candidates may have known about all specific programs in the community policing initiative. Thus, the exercise materials included summaries of the philosophy and key programs being launched. The participants were given 30 minutes to review these materials and prepare for their presentation. They were told that they would have 20 minutes to make their initial presentation to a group of "citizens" (who were actually the assessors), and this would be followed by questions from the audience. Analyses of the job requirements showed that real citizens would have a variety of reactions, and thus the assessor/citizens asked pointed questions and made somewhat stress-provoking comments. The assessors rated the participants on soundness of understanding of the concepts of community policing, ability to convey a real endorsement of the philosophy behind the concept, and stress tolerance in handling reactions and questions from the audience, as well as the dimensions of oral presentation skill and persuasiveness previously mentioned. This list of dimensions illustrates that the presentation exercise can be built to satisfy a number of different objectives in an assessment process. The exercise can be designed to emphasize the participants' understanding of substantive matters and their ability to explain technical matters to a lay audience, or to emphasize communication processes, in which case only general presentation skills are assessed.
Other Content for Presentation Exercises

Table 6.1 presents a list of some representative content that might be included in the presentation exercise for different jobs when administered in the self-contained format. Additionally, the table shows the simulated audience to whom the presentation might be made.
TABLE 6.1
Examples of Presentation Exercises

Target Job                      Content                                     Audience
Chief executive officer         Year end report                             Stockholders
Middle manager                  Expansion proposal                          Executive committee
Supervisor                      Justification for expenses                  Budget committee
Public relations specialist     Plans for new real estate development       Citizens in community
Human resource representatives  EEO report and affirmative action plan      Government compliance official
Financial planner               Investment and retirement programs          Service club
Sales person                    Reasonable ways to submit expense reports   Sales manager
Trainer or supervisor           Presentation of training materials          Students/trainees
A major decision in the choice of content is whether the presentation will cover only materials that are presented in the instructions for the exercise or materials that the person has access to from other sources (e.g., his or her own work and life experiences). If the substance can come only from materials presented in the introduction to the exercise, then this exercise resembles the case study. For the stand-alone presentation exercise, however, the materials are usually much briefer than the materials in a case study. In the presentation exercise, the emphasis is on the participants' ability to orally present the materials in a persuasive manner, rather than on the extensive analysis that is required in the case study exercise. Therefore, the materials to be studied in a presentation exercise are usually relatively short and well organized. Preparation time may be only 20 to 30 minutes, followed by a 10 to 15 minute presentation. If the content is such that the participant can draw on any prior knowledge and experience, then the preparation time may be quite short. For example, newly hired management trainees who are all college graduates may be asked to summarize the key issues among college students in recent years. The assignment might be "Describe what university students are thinking these days about the role of government in social service programs." Appendix E shows an example of an exercise that calls for an extemporaneous presentation. It was used with a group of candidates for a consultant's position at a large management consulting organization.
Other examples of generic topics that can be assigned for presentation after a relatively short preparation time are shown in Table 6.2. Each of these topics is such that almost any individual has some background or experience in the organization to draw on that would allow him or her to make a 5 to 10 minute talk.

TABLE 6.2
Generic Topics Requiring Little Preparation Time for Presentation

Characteristics of an effective supervisor of IT workers
Ideas to improve safety
Characteristics of ideal leaders in a changing global marketplace
What motivates young workers these days?
Suggestions for ensuring "homeland security" in an organization (This topic was of high interest at the time of this writing, 2001-2002.)

Self-Presentation Exercise

Another example of the presentation exercise is the self-presentation. The participant is asked to describe his or her strengths and weaknesses in relation to the target job and performance dimensions of interest (e.g., experience in project coordination or marketing). The preparation time can be relatively short; for example, the participant can be given only a few minutes to prepare before the actual presentation. In such a situation, the participant has to rely on memory to describe relevant life and work experiences. Alternatively, the participant can be notified several days before the occurrence of this activity, and allowed, or even encouraged, to marshal all the relevant documents and credentials. The participant can collect records of educational and job achievements, assemble written documents, collect endorsements and testimonials of competencies, and so on. One form of this documentation is often called a portfolio. A portfolio consists of a collection of work products or other demonstrations of effectiveness in the dimensions being assessed. An example of a self-presentation exercise is one contained in a certification program for education consultants for a large information technology (IT) organization. Education consultants assist organizations in maximizing the training investments made in high-tech employees. The candidates for certification were given several months to prepare a portfolio of their work products before coming to the assessment event, at which time they engaged in several simulation exercises, including the self-presentation. The portfolio could include examples of proposals for consulting engagements, examples of contracts with clients, systems developed for employee selection or training, end-of-project reports, evidence of successful performance such as documentation of awards for excellent IT service, or documents submitted to headquarters to share with other consultants. During the self-presentation exercise, the participants could refer to items in the portfolio to substantiate their descriptions of job-related proficiencies.
Resource Materials for the Participant

For this exercise, the developer may need to provide virtually no supplies or resource materials. Depending on the choices made, the self-presentation can be a relatively easy or difficult exercise to develop. It can be easy to develop if one chooses one of the generic topics for presentation that require little time to prepare. It can be difficult to develop if one chooses to construct materials for the participant to read, digest, prepare, and present. The content should be matched to the level of conceptual complexity required by the target job. For example, job incumbents may be required to present only simple descriptions to others in noncombative situations, or they may be required to explain quite complex concepts in situations where opposition is strong. The developer also needs to decide what ancillary materials will be allowed or provided. For self-contained presentation exercises where preparation occurs only at the assessment site, the developer will have to specify what materials are available (e.g., paper, flip charts, marker pens, transparencies). If advance preparation is involved, the developer will have to specify any limitations on the materials that can be prepared to support the presentation. For example, will the participant be allowed to bring formally printed handouts and display charts? Will computer-generated presentations be allowed? And if they are allowed, who will provide the computer and screen for projection? Failure to provide adequate equipment can lead to the failure of this type of exercise, as illustrated in the following box.

Why Simulation Exercises Fail
Fatal Error 6: Equipment Problems

An organization using a self-presentation exercise invited participants to prepare to display their credentials using a computer program. Participants were told that the facility would provide the necessary equipment for projection. During the event, the computer, connections, and projector did not work properly for some participants. Participants reacted negatively and charged that the process was unfair. These complaints were raised in a later lawsuit surrounding the selection decisions made in the program.
Step 6. Prepare Support Materials for Administrators, Assessors, Resource Persons, and Role Players

The assessors should be provided with examples of effective and ineffective behaviors in the form of behavioral checklists or behaviorally anchored rating scales, as illustrated in Table 6.3. The behavioral examples shown here are for demonstration purposes only and are rather general. In practice, the anchors would refer to the specific content of the exercises.
TABLE 6.3
Behaviorally Anchored Rating Scales for an Oral Presentation Exercise

Oral Presentation Skill
5  Highly effective: Used novel and effective visual displays; moved effectively from the podium to various positions near the audience; made firm eye contact.
4
3  On target: Spoke clearly and concisely; used appropriate hand gestures.
2
1  Below expectations: Sentences trailed off; did not make eye contact with audience; fumbled with the displays and handouts.

Persuasiveness
5  Highly effective: Cited unique reasons why his or her recommendation would benefit different audience members; checked with the audience to confirm they understood; asked for agreement and commitment.
4
3  On target: Used data and gave clear reasons to support his or her recommendations.
2
1  Below expectations: Did not provide evidence of the accuracy of his or her assertions or the value of recommendations; did not seek feedback from the audience on whether they understood or were committed.

Follow-Up Questions

For some applications, where adherence to standardized conditions must be quite strict, no follow-up questions or comments by assessors may be allowed. For example, in some police jurisdictions, the threat of charges of favoritism is so prevalent that no interaction between assessors and participants is allowed after the initial instructions are given.
In other applications, assessors are instructed to follow up the presentation with comments and questions. The assessors may probe for reasons for recommendations by saying something like:

• "You recommended that the organization institute a safety regulation. What was your reasoning?"
• "You recommended that subordinates be asked to participate in the project. How would you go about getting their cooperation?"

Or the assessors may place even more stress on the participant, assuming this is warranted by the situation analysis, by directly challenging the soundness of a recommendation with a comment such as:

• "I don't think your recommendation to expand the program is supported by facts to show it would be cost effective. How do you defend your recommendations?"
• "Your recommendation to involve subordinates would probably not work because they really don't have the type of experience that would provide good ideas. Do you still think they should be involved?"

In more extreme situations, the assessors may act like irate citizens and voice vehement emotional opposition. In the example of the police captains' promotional examination, the assessors, playing the role of citizens at the town-hall meeting, followed two phases in the follow-up question period. In phase 1, one assessor asked nonthreatening questions to clarify the procedures in the community-policing program. For example, "You said the department would involve community leaders. Please tell us more about how this would be done." In phase 2, another assessor asked more challenging questions and became rather irate. For example, "I don't think you'll get cooperation from our neighborhood because of the bad treatment we received from Officer Harris! We don't want him around here. Will you send a better officer?"
Step 7. Train Assessors

Training the assessors to observe the presentation exercise for the police promotional project was relatively easy. The jurisdiction used assessors from outside the home department, and the consultant who developed the program was able to recruit experienced, high-level officers from other jurisdictions who had served as assessors in a number of other assessment programs. These assessors were familiar with the roles and duties of police captain, they knew the concepts of community-based policing, and they had been trained previously in the basic principles of behavioral observation and evaluation.
Furthermore, it is relatively easy to observe and evaluate the initial behaviors in a presentation exercise. During the formal presentation segment, the participant is the only person speaking, and he or she controls the pace and style of speaking. As was true in the police example, the assessor usually knows the topic of the presentation and thus can anticipate most of the content that will be covered. The speaker often uses some visual display or handout to outline the talk, and thus the observer can typically follow the flow of the presentation. If the assessors follow up with questions and challenges to the participant, assessment can become a bit more difficult. In such applications, the tips for observing interaction simulations described in Chapter 8 are helpful. Assessors may need some special training in differentiating various aspects of the oral communication process in order to distinguish superficial things such as physical demeanor and verbal fluency from the more meaningful aspects of effective communication. A speaker may display good presence, and yet this characteristic may not be adequate for the demonstration of high levels of persuasiveness. In fact, some highly articulate speakers may be "highly polished," yet appear insincere. The opposite pattern is also possible: some speakers do not use perfect sentence structure and grammar and would not be described as articulate, yet can be highly persuasive because they speak at the level of the audience, they display sensitivity to the needs of the situation, and they adjust to the demands of the situation. Assessors may need to be taught to watch for these behaviors, and not be unduly influenced by other, superficial elements of the communication process. Techniques for training assessors how to observe the behaviors relevant to assessing a variety of dimensions are covered in Chapter 13.
Step 8. Administer to a Pilot Group

Ideally, the simulation developer will administer the initial exercise to a pilot group that is very similar to the target group to be assessed. This may not be possible in some situations, and it may not be so important in the development of a presentation exercise. Pilot work may not be essential here because the materials and instructions are relatively simple. Most people have seen presentations given by others, and they have probably given presentations themselves. Thus the instructions seldom ask for anything unusual. Frequently there are no additional materials beyond the instructions. Thus, pilot testing is not always necessary in this situation. For the police department, which used the presentation exercise described earlier in the chapter for promotion purposes, pilot testing in the organization was not possible. Both the department and the civil service agency administering the promotion process were concerned about the security of the examination materials. Experience in the past had shown that even when pilot
participants were "sworn to secrecy," information about the content of the exercise leaked out, and this dissemination was uneven across the candidates. As a substitute, the exercise developers tried out the exercise on a sample of advanced students in a criminal justice program in a community college in another city some distance away. These students then provided feedback about the clarity of the instructions, and the developers were able to see if the time limits were reasonable and what behaviors might be demonstrated to illustrate performance levels on the dimensions being assessed.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

The presentation exercise is relatively easy to use. Various forms of the presentation can be configured to fit into almost any time frame. Very simple presentations have been built in as fillers in the midst of more elaborate assessment and training programs. For example, participants may be asked to give an extemporaneous speech about a current organizational issue at a luncheon event. In the police department, the presentation exercise was one part of a more elaborate assessment center process, which came at the middle stage of a sequence of screening procedures. Prior to the assessment center, the participants had passed a review of their work history and a test of knowledge of police operations. During the assessment center, other simulations such as an in-basket and an analysis case study were administered. Following the assessment event, a panel of higher-level administrators, HR staff, and citizens interviewed the candidates.
Step 10. Obtain Participants' Reactions

If this type of exercise is to be used to help make important employment decisions, participants' reactions to the exercise are particularly important. The organization will want to know if the process is perceived to be fair by the participants. Feedback at two stages in the process can be helpful. Input from the participants at the end of the program, and before any results are announced, is helpful in understanding whether the participants found the instructions clear, the time limits adequate, and the administration fair. This feedback should be collected anonymously so that participants do not worry that their comments will be used in the assessment process. The administrator of the program should assure the participants that assessors will not see this information. Feedback can also be gathered some time after the assessment event. At this time the participants have a better perspective on the process in light of the assessment results and any evaluations they received about their performance.
In the police department, the information gathered at this stage was quite surprising to the developers and assessment staff. The participants indicated that the presentation exercise was quite realistic and fair, but that the follow-up questions and challenges were rather mild compared to the challenges they had encountered on the job from both internal dissenters and external citizens. They recommended that in the future, the assessors should present even harsher criticisms in the follow-up phase of the exercise. It is possible that these real job candidates were much more experienced and skilled than the pilot group of students. In subsequent programs that used the presentation exercise, the assessors introduced higher levels of stress.
Step 11. Conduct Psychometric Evaluation

Some aspects of reliability and validity for the presentation exercise are easy to establish. Reference to a thorough job description with detailed task lists can demonstrate whether or not incumbents in the target job actually make stand-up presentations on a regular basis, and thus whether this general form of simulation is appropriate. The exercise developer may need to conduct additional interviews to pin down the types of content that are presented, the types of audiences faced, and the organization's expectations about the relative importance of substance and process in the presentation. Clear documentation of this information forms the basis for defending the content representativeness of any particular presentation exercise. Establishing the reliability and validity of ratings of defined dimensions of performance in the presentation exercise is a bit more problematic. Ideally, one would like to have predictive validity evidence gathered over a period of time in the actual organization using the simulation. Seldom is it feasible to gather that sort of evidence, nor is it necessary with an exercise that is so clearly related to job requirements. What is more feasible, and also more relevant, is the demonstration that assessors can observe and rate behavior on the defined dimensions with a reasonable degree of inter-rater agreement. It is not difficult to gather such evidence during the assessor training process. After initial training, assessors can be asked to observe performance in the exercise. Their behavioral observation notes, their ability to classify behavior into dimensions, and their ratings on performance dimensions can be compared to determine inter-rater reliability and agreement. Other evaluation activities are described in Chapter 13.
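To make the inter-rater comparison concrete, the short sketch below computes two common agreement indices for a pair of assessors. This is a minimal illustration: the ratings, the 1-to-5 scale, and the within-one criterion are our own assumptions, not data from the police program.

    # Hypothetical example: two assessors rate eight candidates on one
    # presentation dimension using a 1-5 scale. All values are invented.
    assessor_a = [4, 3, 5, 2, 4, 3, 4, 5]
    assessor_b = [4, 3, 4, 2, 5, 3, 4, 4]

    pairs = list(zip(assessor_a, assessor_b))

    # Exact agreement: proportion of candidates given identical ratings.
    exact = sum(a == b for a, b in pairs) / len(pairs)

    # Within-one agreement: ratings differing by at most one scale point,
    # a lenient index often informative for 5-point behavioral scales.
    within_one = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

    print(f"Exact agreement: {exact:.2f}")            # 0.62 for these data
    print(f"Within-one agreement: {within_one:.2f}")  # 1.00 for these data

A pattern like this one (modest exact agreement but uniformly small discrepancies) would usually be interpreted as acceptable for training purposes, whereas large discrepancies on particular candidates would prompt a discussion of the behavioral evidence behind each rating.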
7 Leaderless Group Discussions: Non-Assigned Roles and Assigned Roles
In a leaderless group discussion (LGD), participants work in groups of 4-6 to solve a problem or make a decision within a specified period of time. There are two forms of LGDs: non-assigned role LGDs (the most common form of this exercise) and assigned role LGDs. In a non-assigned role LGD, all participants work toward a common goal (e.g., jointly solve an organizational problem). In contrast, in an assigned role LGD, each participant has a different goal (e.g., secure the biggest raise for one's subordinate). Although both forms require cooperation to arrive at an equitable solution, assigned role LGDs add an element of competitiveness to the discussion process. Typically, participants are first given background information about the organization and the nature of the problem, and then they are instructed to discuss the issue with the members of their group. These background materials may be specific to the LGD exercise, or they may have come from a previous exercise such as an in-basket, case study, or fact-finding exercise. If the participants are not assigned roles, then each person will receive exactly the same information. If the participants have assigned roles, each person will get the same general information, along with some unique information. For example, if the purpose of the discussion is to obtain a raise for one's subordinate, all participants may have some general information about the process used to allocate salary increases and a thumbnail sketch of each of the employees being considered for a raise. In addition, each participant also
receives a more detailed profile of information about the subordinate for whom they are advocating. After the participants have been given a short time to review the background materials, they are instructed to discuss the issues or problems and arrive at a solution within a specified time. These instructions may be very vague (e.g., prepare a solution to the problem) or very specific (e.g., indicate in writing the exact dollar amount that each employee will receive for a raise and then ensure everyone initials the agreement; arrive at consensus on the rank order of issues from most important to least important). Instructions as to how the participants should arrive at their answer may also be vague or specific, depending on which dimensions are being assessed. During the discussion, assessors observe the participants and record their behavior. Once the participants have completed their discussion, the assessor may ask them to explain or justify their conclusions.
Time Requirements

Introductions and instructions: 5 to 10 minutes
Preparation: 0 to 60 minutes
Discussion: 30 to 90 minutes
Follow-up: 5 to 15 minutes

BACKGROUND

The use of the group discussion technique dates back to German military officer selection programs in the 1920s. The British War Office Selection Boards (WOSBs) began using the LGD approach in the early 1940s (Thornton & Byham, 1982), and the practice continues to the present day in the British Civil Service Selection Board. Early use of this approach was based on the premise that group situations provide the best opportunity to assess leadership. Candidates were assigned to groups of eight and then given a problem-solving task, with no leader assigned. The candidates were evaluated on their ability to cooperate with the other participants in order to solve the problem, and also on the way they interacted with the other candidates. The LGD technique was later used by Bass (1950, 1954) to assess emergent leadership, and has since been used in a variety of military, government, and private organizations. Although it is still a popular technique for assessing managerial candidates, use of the LGD has been expanded to nonsupervisory positions as many organizations have become less hierarchical and employees at all levels have been expected to demonstrate leadership and teamwork skills.
STEPS IN DEVELOPING A LEADERLESS GROUP DISCUSSION
Step 1. State the Purpose of the Simulation and Determine the Resources Available

Leaderless group discussions provide a versatile and relatively simple means of observing how candidates interact with others and solve problems. Any number of topics can be included in the exercise content, and no special equipment or materials are required. Concerns have been raised about using LGDs for selection and promotion decisions due to their lack of standardization and potential biases against women and members of certain ethnic groups. These concerns about the LGD may explain why this type of exercise ranked third among interpersonal exercises, behind oral presentations and one-on-one interactions (Kudisch et al., 1999). Nevertheless, LGDs continue to be used because they provide one of the best ways to observe how individuals interact with others in a group setting to solve problems. When organizations choose to use an LGD as a selection device, care must be taken to ensure that the procedure is fair and job related. The LGD is appropriate in a selection context when:

• assessment of interpersonal skills is necessary,
• the target position requires working or making decisions in groups,
• the organization has the resources to construct a simulation that is fair and unbiased,
• the organization has identified individuals who are able to serve as assessors, and
• the organization is willing to invest additional time in the assessment and selection process beyond the typical selection interview.

In this chapter, we demonstrate that the LGD can be used successfully in a selection context through the example of a large brewery. The brewery, which had been operating for many years, had recently undergone some process and culture changes. Previously, the job of a general brewery worker was very narrow in scope. Workers were trained to handle only very specific tasks, and when a problem arose, they were required to notify the supervisor, who would take care of it. However, the roles and responsibilities of the general brewery worker changed as the company structure became less hierarchical. This flatter structure encouraged general brewery workers to function in self-managed work teams. Every employee was expected to work with others to identify and resolve problems that occurred on the job, relying on the supervisor's help only when necessary. Employees were expected to bring problems to
the attention of their coworkers in a sensitive manner and ensure that work tasks were completed on time. Therefore, when hiring new workers, the brewery needed to ensure that they had effective interpersonal skills, and it was determined that a leaderless group discussion was an appropriate way to assess these skills.
Step 2. Conduct Situation Analyses

Once the decision had been made at the brewery to incorporate an LGD simulation into the selection process, several analyses were conducted to gather information for developing the simulation. First, organizational goals, mission, and value statements were examined. These documents clearly demonstrated the organization's commitment to employee involvement and empowerment. A review of past and current organizational charts revealed that the span of first-level supervisors' control had increased from an average of 10 workers to over 30 workers. Therefore, supervisors had less time to spend on small problems and were expected to devote their efforts to process improvement and employee development. In turn, much of the quality control and day-to-day problem-solving responsibility fell to the individual workers. Moreover, workers' jobs had been enlarged to include more tasks that overlapped with tasks of other workers. For example, when brewery equipment broke down, the previous procedure called for the worker to report the problem to the supervisor, who would call the maintenance supervisor, who would direct the maintenance personnel to repair the equipment. The new procedure called for general brewery workers to gather information about the problem, independently contact maintenance personnel, and work cooperatively with other departments to ensure the problem was corrected in a timely manner and that interruption to the workflow was minimized. These changes in worker responsibilities led to changes in the requisite competencies for the workers. Previously, workers did little more than monitor the production process. Now, workers needed to be able to communicate with others effectively, identify problems independently, cooperate with others to resolve problems, evaluate product quality, and suggest process improvements. Moreover, workers were expected to demonstrate leadership skills by modeling effective communication to others and encouraging teamwork. Because of concerns about fair hiring and equal employment opportunities, the brewery needed to ensure that the exercise was clearly job related. Therefore, current general brewery workers and their supervisors were observed and interviewed to gather information about critical incidents. These incidents revealed major problems that had occurred previously and how they were handled, and the incidents were used to construct simulation content that was relevant and important.
Step 3. Specify the Dimensions and Difficulty Level

Based on the analyses conducted in Step 2, the organization determined that two key dimensions essential for general brewery workers were group leadership and interpersonal sensitivity. Group leadership was necessary to ensure that workers used appropriate interpersonal styles and methods in guiding a group toward accomplishing its goals while fostering cohesiveness and cooperation. Interpersonal sensitivity was necessary to ensure that workers communicated with others in a sensitive manner that indicated an awareness of how their actions impacted others. In addition to the dimensions featured here, the LGD exercise has also been shown to be an effective technique for measuring other dimensions such as problem solving, flexibility, initiative, oral communication, confrontational skill, and decision making. In addition to choosing the appropriate dimensions to be assessed, it is necessary to decide on the difficulty level of the simulation. The level of difficulty should be based on the level of complexity required by the job. The general brewery worker position is moderately complex, and therefore the simulation was designed to be moderately difficult. Techniques for increasing or decreasing the level of difficulty are discussed in Step 4.
Step 4. Specify Features of the Simulation

The information gathered during the analysis phase will suggest necessary features of the simulation. In a selection setting, applicants may or may not be expected to have previous experience. If applicants are not expected to have previous experience, the simulation should contain a generic problem or issue to be solved. The simulation then provides an opportunity to assess the applicants' interpersonal or leadership skills rather than their job knowledge. In contrast, if applicants are expected to have some previous experience, the simulation can be written around an actual process or procedure. In this arrangement, applicants will be able to demonstrate how they apply their knowledge of the job to problem solving and leadership opportunities. In the brewery example, the organization was looking to hire individuals with little or no experience. Therefore, it chose to use a generic problem set in a food manufacturing plant. The organization chose not to set the simulation in a brewery because it was felt that this might give candidates with exposure to the brewery environment an advantage over inexperienced, yet equally qualified applicants. Because experience was not a factor in hiring decisions, the organization wanted a pure measure of the participants' interpersonal skills that was not confounded with their previous experience. Another feature to consider when constructing the LGD simulation is whether to provide the participants with assigned or non-assigned roles.
Assigned role LGDs are most appropriate for positions that require persuasiveness and some degree of dominance, such as a department head or supervisor. In contrast, when cooperation is more important, a non-assigned role LGD is more appropriate. Because the brewery workers were expected to operate in a cooperative fashion and work together to attain common goals, the organization chose to use a non-assigned roles approach that would highlight the participants' leadership skills and interpersonal sensitivity as demonstrated among peers. Lastly, the length of time required for the simulation needs to be specified at this stage. The length of time needed depends on several factors, including the complexity of the problem to be addressed, the difficulty level of the exercise, and practical considerations. In general, assigned role LGDs will take longer than non-assigned role LGDs because participants must prepare their strategy, review both the general information given to everyone and the information they alone possess, and then make a presentation of their ideas before the general discussion starts. The time required to complete the LGD may also be a function of the number of participants one plans to assess at one time: A larger number of discussants requires more time to arrive at a solution. Consideration of the number of participants raises the question of how many can actually be accommodated in an LGD. Experience shows that 4 to 6 participants is the optimal range. Three or fewer do not provide a rich mixture of interactions that simulates most work groups. Seven or more participants are usually too many to allow time for all to exhibit enough behaviors to permit thorough evaluation by assessors. Developers are cautioned against trying to accommodate too many participants in their LGDs in response to pressures to assess a large number of candidates in a short time.
Step 5. Prepare the Participants' Materials

It is important that the developer write problems at a level of difficulty and complexity appropriate for the intended audience. Fatal Error 7 illustrates what can happen if the problems are too easy. In the brewery example, the participants were given a set of relatively straightforward problems that occurred occasionally in these work settings. Short descriptions of three or four of the following types of problems were presented to the job applicants: violations of minor safety rules, minor instances of insubordination to supervision, conflicts between two or more employees, taking excessive break time, not completing a task assignment, bringing food into a restricted area, and failing to notify maintenance of a malfunction in a machine. These problems were deemed appropriate because although the organization had policies and procedures to cover major
violations in each area, management wanted employees to take more responsibility to monitor themselves and each other in the teams and work groups. Also, the problems provided opportunities to observe and evaluate behaviors relevant to two important dimensions, namely group leadership and interpersonal sensitivity (see Step 6).

Why Simulation Exercises Fail
Fatal Error 7: Content Was Too Simple

The four problems presented in a leaderless group discussion were too simple for a group of supervisors in an organization manufacturing medical equipment. One problem had already been faced and solved in the workplace, a second was covered by a clear policy, and two others were solved with a short discussion. In several groups of participants, all matters were typically resolved in about 20 minutes of a projected 60-minute exercise. The assessors had few meaningful observations about the problem-solving abilities of the participants.

The depth and breadth of the participants' materials can vary widely and depend largely on the purpose of the exercise as specified in Step 1. The materials can range in length from a simple half-page brainstorming exercise to a multi-page booklet with detailed information. Several factors that will affect the complexity of the materials include:

• Dimensions to be assessed: The materials and instructions must allow the participant to exhibit behavior that is relevant to the dimensions to be assessed. For example, if one of the target dimensions is emergent leadership, then the LGD should present a somewhat open-ended task with minimal instructions for how the discussion should be conducted. With fewer constraints, individuals with effective leadership skills will be able to emerge and direct the efforts of the group.
• Level of the target job: It is probably unrealistic to require candidates for entry-level positions to analyze a large amount of complex and detailed information and discuss it in a group setting. For lower-level jobs, it is more likely that the purpose of the LGD exercise is to observe how the participants interact with others. Therefore, the materials should be brief, including enough detail to foster discussion but not so much detail that the participants will have difficulty understanding their goals.
• Participant roles: In an assigned roles simulation, the materials will necessarily be more complex because there will be some background information that everyone has access to, and there will likely be some specific information pertinent to each individual. For example, if the task is to
advocate for one's employee to be promoted, the participants will likely have some brief information about each of the candidates who are being considered. However, each participant will have more detailed, unique information about his or her employee. This arrangement simulates actual settings where individuals operate with varying levels of information, and it allows participants to demonstrate how they can strategically use this unique information to their advantage. Alternately, participants in assigned roles may each have exactly the same information. In this instance, enough detail must be provided so that the participants have enough information to effectively play their roles. Participant materials may be very brief if there are no assigned roles. At a minimum, materials should include:

• enough background information about the simulated organization for the participants to have the appropriate context in which to discuss the issue;
• enough information about the issue or problem to be addressed so that they may have a meaningful discussion. In an assigned-role LGD, information that is specific to each participant will need to differ enough from the general information so that the participants have the opportunity to use this information to their advantage. However, care should be taken to ensure that the roles are equal and that no participant has an advantage merely because of the role that was assigned;
• instructions for what the participants are expected to do (e.g., develop a written plan of action that everyone agrees to and signs); and
• time limits for preparation and discussion.

Table 7.1 lists several ideas for non-assigned-role LGD simulations and Table 7.2 lists several ideas for assigned-role LGD simulations. Appendix F contains an example of a non-assigned-role LGD designed for candidates for a manufacturing position. The organization wanted to identify workers who could interact effectively in teams because of a new emphasis on teamwork throughout the organization. Appendix G gives some of the materials prepared for an assigned-role LGD. It was designed for groups of up to 6 participants. The exercise involves making a brief presentation of one's ideas, and then engaging in a discussion of how to spend excess profits of $500,000 to improve the organization.
TABLE 7.1
Typical Leaderless Group Discussion Content: Non-Assigned Roles

Problem solving: Participants are asked to solve one or more thorny problems, including morale problems, supervision issues, customer problems, and so on.

Staffing: Participants are asked to decide whom to hire or promote into various positions. No one is required to advocate for a particular candidate; therefore, the goal is to maximize the benefits to the entire organization.

Brainstorming: Participants are asked to generate ideas about a particular topic. For example, they may be asked to generate a list of what makes an effective supervisor. Then they may be asked to prioritize the list.

Budget/Grant proposals: Participants, not representing anyone requesting funds, study proposals and decide how to allocate a financial budget or grant to one or more organizations.

TABLE 7.2
Typical Leaderless Group Discussion Content: Assigned Roles

Employee salaries/bonuses: A decision must be made about how to allocate raises or bonuses, and each participant is assigned to be an advocate for his or her employee. Typically, there is a fixed amount of money available to distribute for raises or bonuses. The participants have the goal of acquiring the most for their employee while serving the larger needs of the organization.

Giving an award: A decision must be made about who should receive a scholarship or award, and each participant is assigned to be an advocate for a particular individual (e.g., an employee or a student). Again the goal is to try to garner the award for one's own candidate while being fair to all the candidates.

Selection/Promotion: A decision must be made about whom to promote or hire, and each participant is assigned to advocate for a particular individual. Each participant's goal is to convince the other group members that his or her candidate is the best person for the job.

Budget proposals: A decision must be made about how to allocate money in the budget. Each participant may play the role of a department head who wishes to acquire the largest share of the money for his or her projects.

Grant proposals: A decision must be made about how to award grant money or resources. Each participant may play the role of a liaison who is advocating for a particular organization or individual to receive the grant.
Individual Rank Order Forms

Where appropriate, the developer can ask participants to fill out a form giving the individual's rank order of the options being discussed. The use of such forms can facilitate scoring by providing the assessors with more behavioral evidence upon which to base ratings. For instance, this information may greatly help the assessors arrive at ratings of a participant's persuasion and influence, judgment, or analysis skills. Such a mechanism can be especially useful for those participants who are more reticent during the actual discussion. For instance, it may show poor judgment if the person ranks a trivial option as most important. Also, by knowing a participant's position before the actual group meeting, the assessor can evaluate the extent to which the participant changes position based on new information provided by the group, or the extent to which the participant is able to influence others.
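The comparison just described lends itself to a simple quantitative check. As a rough index (our own illustration, not a procedure prescribed in the text), a rank correlation between a participant's pre-discussion ranking and the group's final ranking summarizes how closely the outcome tracked that participant's initial position; the rankings below are invented:

    # Hypothetical data: a participant's pre-discussion ranking of five
    # options (1 = most important) and the group's final ranking.
    participant_ranks = [1, 2, 3, 4, 5]
    group_ranks = [2, 1, 3, 5, 4]

    def spearman_rho(x, y):
        """Spearman rank correlation for two untied rankings."""
        n = len(x)
        d_squared = sum((a - b) ** 2 for a, b in zip(x, y))
        return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    print(f"Rank agreement: {spearman_rho(participant_ranks, group_ranks):.2f}")
    # Prints 0.80 for these data. A high value is ambiguous on its own: it
    # may reflect strong influence on the group or little change of position,
    # so assessors would interpret it alongside the observed behavior.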
Step 6. Prepare Support Materials for Administrators, Assessors, Resource Persons, and Role Players

In a leaderless group discussion, support materials are generally minimal; they include administration guidelines, behavior observation checklists or behaviorally anchored rating scales, and post-discussion questions for the participants, if applicable. Administration guidelines should include scripted instructions to read to the participants that specify what participants are expected to do in the simulation and how much time they have to prepare and discuss the issue. The guidelines should also provide instructions for timing each phase of the exercise and for ensuring the discussion stops when time is up. The administration guidelines will generally also include basic information on how the room should be set up. This detail is important because the assessors must be close enough to hear the discussion and to clearly see the participants, yet not so close as to distract or intimidate the participants. The participants must be able to clearly see each other, and no one should be in a dominant position, for example, at the head of the table. Figure 7.1 shows one arrangement that satisfies these conditions. Although all assessors must observe all interactions, Assessor A has primary responsibility for Participants 1 and 2, Assessor B for Participants 3 and 4, and Assessor C for Participants 5 and 6. This arrangement is especially helpful if the exercise is to be recorded on videotape. The camera can be placed directly behind the assessors. To ensure that speakers can be identified later, name tents should be placed on the table in front of the participants. Developers are cautioned that it is quite difficult to obtain a videotape recording of a group discussion with enough quality to enable assessors to identify who is speaking throughout the exercise. Developers are urged to seek professional help with the taping and to thoroughly pilot test the arrangement.
FIG. 7.1. An arrangement for observing an LGD.
Behavior observation checklists and behaviorally anchored rating scales will assist the assessors with scoring the participants' performance. Clear rating scales are essential for evaluations that are reliable and valid, which is of the utmost importance in a selection context. Table 7.3 presents a sample behaviorally anchored rating scale for the dimensions of group leadership and interpersonal sensitivity. Behavioral descriptors for these scales will come from the analyses conducted in Step 2. Assessors or administrators may wish to ask participants follow-up questions to challenge the group's decision. These follow-up questions can assess whether participants are able to defend their choices in the face of challenges and whether they will support their teammates in attempting to justify a position that they did not initially agree with. Moreover, this phase can further evaluate leadership skills by noting which participants offered responses during the questioning phase. Sample questions that may be asked during this phase include:

• Why did you choose Smith for the promotion instead of Jones?
• Explain why you think your recommendation for the new safety procedure will help to reduce accidents.
• On what criteria did you base your decision to hire Garcia instead of Jackson?

Post-discussion questions may be presented in the form of a one- to two-page written reaction form. Sample questions might include:

• Please rank the group members, including yourself, on their overall effectiveness as it related to their contribution to the resolution of issues.
• What reasons did you have for selecting the person you ranked as most effective? Least effective?
TABLE 7.3
Behaviorally Anchored Rating Scales for a Leaderless Group Discussion

Group leadership

5: Redirected the group to focus on solving the problem at hand when needed; directed the conversation toward goal attainment without appearing domineering or overbearing; solicited others' ideas and opinions.
4: (unanchored intermediate level)
3: Occasionally made an effort to lead the group but was not always able to redirect the group's effort when needed; some attempts to influence the group were either ignored or rejected; helped keep track of time.
2: (unanchored intermediate level)
1: Did not offer suggestions; attempted to inappropriately control or dominate the conversation; pushed ideas on others even when they were not well received; took a follower role, for example, responded to others' questions and comments but did not initiate any structure or ideas.

Interpersonal sensitivity

5: Restated and responded to others' views and concerns; solutions demonstrated an awareness of how these suggestions would impact other members of the group and the organization.
4: (unanchored intermediate level)
3: Listened to others and acknowledged their contributions; suggestions were somewhat self-serving; they benefited this individual at some cost to others.
2: (unanchored intermediate level)
1: Was rude or harsh with other participants; interrupted others' sentences; comments demonstrated that he/she did not listen to the previous discussion; the proposed solutions would adversely affect a segment of the organization, but no concern was expressed about this fact.
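When ratings are captured electronically, a scale like Table 7.3 can be stored as a small lookup structure so that every assessor sees identical anchor text. The sketch below is a minimal illustration under our own assumptions (abbreviated anchor wording; levels 4 and 2 treated as unanchored intermediate points that fall back to the anchor below); it is not part of the brewery program described in this chapter.

    # Hypothetical encoding of the group leadership scale from Table 7.3.
    # Only levels 5, 3, and 1 carry behavioral anchors; 4 and 2 are
    # intermediate points, so lookups fall back to the anchor below them.
    group_leadership_bars = {
        5: "Redirected the group when needed; guided toward the goal "
           "without dominating; solicited others' ideas and opinions.",
        3: "Occasionally led; some influence attempts were ignored or "
           "rejected; helped keep track of time.",
        1: "Offered no suggestions, or inappropriately dominated; took a "
           "purely follower role.",
    }

    def anchor_for(rating: int) -> str:
        """Return the anchored description at or below a 1-5 rating."""
        level = max(lv for lv in group_leadership_bars if lv <= rating)
        return group_leadership_bars[level]

    print(anchor_for(4))  # a rating of 4 displays the level-3 anchor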
Step 7. Train Assessors

In addition to being thoroughly familiar with the participants' materials, assessors must be aware of examples of effective and ineffective behaviors for each dimension. Behavior observation scales and behaviorally anchored rating scales (mentioned in the previous step) can assist in this effort. It is essential in a selection setting that assessors use a common frame of reference because participants must be evaluated in a standardized way (e.g., participants should be evaluated against established standards rather than having their performance judged relative to others in the LGD). There is a tendency for less well-trained assessors to contrast group members' performance with each other. Assessors also need to avoid giving high marks to someone who is very talkative but does not have good ideas.
It is critical for assessors to recognize and avoid these and other rating errors. To achieve standardization, the brewery trained its assessors with a multistep process. After reviewing job descriptions, definitions of the dimensions to be assessed, and the exercise content, assessors viewed a videotape of an LGD performed by several participants and rated the behaviors observed. Next, assessors discussed their ratings and explored any differences in ratings among the assessors. Then, four of the assessors participated in the LGD while the remaining assessors observed and rated their behavior. Again, differences in observations and ratings were discussed. Participants and assessors then traded places so that each person had the opportunity to practice observing and rating behavior. The assessors gained further experience by rating the participants in the pilot group (discussed next). A more detailed discussion of assessor training appears in Chapter 13.
Step 8. Administer to a Pilot Group

As noted in Chapter 2, the purpose of the pilot exercise is to make an initial evaluation of how well the exercise works and whether the content is acceptable to the participants. Job applicants who feel that an assessment procedure is unfair or not job related are more likely to form a negative impression of the organization, refuse a job offer if one is made, and even take legal action if they are not hired. Therefore, in a selection context it is crucial that the acceptability and job relatedness of any assessment be established before it is used in practice. Specifically, the tryout of the LGD can provide the following information:
• Whether the time limits for preparation and discussion are adequate.
• Whether the background materials are clear.
• Whether the simulation is perceived as related to the job.
• Whether the simulation allows participants to adequately demonstrate the dimensions to be assessed.
• Whether the evaluation and scoring procedures are fair.
• Whether the assessors are able to observe enough information with which to assign a rating.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

In any selection system, decisions should be based on multiple assessments. Given the time and expense required to assess applicants with an LGD simulation, it may be more appropriate to conduct this assessment
after less expensive methods have first been used to screen out obviously unqualified applicants. In our brewery example, the organization used the LGD as the last phase in a multi-step selection process. First, candidates were required to complete an application blank. Individuals who met the minimum qualifications (at least 18 years old, valid driver's license, no criminal history) were then given a battery of paper-and-pencil tests followed by a screening interview with an HR representative. Only candidates who passed this phase were invited to participate in the LGD. Individuals who scored at least "satisfactory" on the LGD were made employment offers.
Step 10. Solicit Participants' Reactions

Beyond the immediate reactions of participants in the pilot testing session, an ongoing effort should be made to gather the reactions of participants on a regular basis. Even if participants in the pilot study had favorable reactions, job applicants may have different perceptions of the exercise. Moreover, exercises become out-of-date, and what was once viewed as acceptable and job relevant may no longer be viewed as such over time. As a practical matter, it may be difficult to obtain reactions from all job applicants who participate in the LGD. Applicants who are eventually hired can naturally be surveyed later during orientation or on the job. However, it may be impractical and unwise to attempt to contact applicants who were not hired to get their reactions about the selection system. One solution to this dilemma is to survey participants immediately after they complete the exercise (before they are rejected or offered a position). Participants should give their reactions anonymously so as to prevent them from feeling compelled to give unrealistically positive feedback. In addition, the organization may wish to keep track of questions or concerns raised by applicants during or immediately following the exercise. Questions to ask during this summative evaluation might include the following:

• Did you think this simulation was an adequate test of your suitability for the job?
• Do you feel the simulation was administered in a fair manner?
• What suggestions do you have for improving this assessment in the future?
Step 11. Conduct Psychometric Evaluation

In a selection context, reliability and validity of assessments are essential. Participants' scores should be based on their actual performance in the exercise, and not on extraneous factors such as idiosyncrasies in scoring.
Moreover, these scores should be predictive of future success on the job. A concern about the reliability and validity of the LGD is that the exercise is by nature somewhat unstandardized (Thornton & Rupp, 2003). That is, each group of participants is different, and therefore no two participants have the same experience. For example, an individual may appear very dominant and talkative in a group where the other participants are very quiet, but may appear quite passive and shy when placed in a group of highly dominant individuals. This problem can be minimized by ensuring that scoring is based on an objective standard, rather than a comparison of individuals within a particular group. Reliability of LGD scoring is typically assessed through inter-rater agreement: the extent to which different assessors assign scores in a consistent manner. The first step in achieving inter-rater agreement is to train the assessors to use a common frame of reference when assigning scores. Behaviorally anchored scoring scales can assist with this effort. The second step is to periodically check that raters are assigning scores consistently. This step can be accomplished by having two assessors rate the same participant and then comparing their scores. Research on typical levels of inter-rater reliability in LGDs has shown agreement to be quite strong, ranging from .66 to .99 (Gatewood, Thornton, & Hennessey, 1990). Chapter 14 provides a more detailed discussion of establishing reliability.

Validity refers to whether the simulation measures what it is intended to measure and to the appropriateness of using the scores to make decisions about whom to select for a particular job. Therefore, validity must be considered both when the exercise is first constructed and after it has been used in practice. Validity can be enhanced by choosing a scenario for the LGD that is similar to actual situations encountered by individuals in the organization (Thornton & Byham, 1982). Moreover, designing the exercise so that it elicits behaviors related to the target dimensions will also improve validity. For example, if group leadership is a dimension that will be assessed, the exercise instructions should be vague enough to allow leadership behaviors to emerge. Once the exercise has been used with actual candidates, criterion-related validity evidence may be gathered by correlating candidates' scores with future criteria such as performance ratings, promotion rate, and so forth. Chapter 14 also provides additional details about estimating validity.
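The periodic consistency check described above can be run with a few lines of code. The following sketch computes a Pearson correlation between two assessors' ratings of the same participants; the ratings and the .66 warning threshold (taken from the low end of the range reported by Gatewood et al., 1990) are illustrative assumptions, not prescriptions.

    # Hypothetical check: two assessors independently rate the same ten
    # LGD participants on group leadership (1-5 scale). Values invented.
    rater_1 = [3, 4, 2, 5, 3, 4, 4, 2, 5, 3]
    rater_2 = [3, 5, 2, 4, 3, 4, 3, 2, 5, 4]

    def pearson_r(x, y):
        """Pearson product-moment correlation, computed from scratch."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    r = pearson_r(rater_1, rater_2)
    print(f"Inter-rater r = {r:.2f}")  # 0.81 for these data
    if r < 0.66:  # low end of the range reported for LGDs
        print("Agreement below typical levels; consider retraining assessors.")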
8 One-on-One Interaction Simulations: Role Play Exercises
In a one-on-one interaction simulation (sometimes called an interview simulation or a role-play exercise), the participant interacts with another person in a role-play scenario. The participant is typically "in charge" of the interaction and is responsible for initiating and managing the communication. For example, the participant could be assigned the role of a supervisor who meets with a subordinate having a performance problem. In some interaction simulations, the participant meets with two or more people. For example, a participant could be assigned the role of a manager who meets with two representatives of a client firm to resolve delivery and service problems. In contrast to the leaderless group discussion, where several participants are assessed at one time, in the interaction simulation only one participant is assessed. The individual with whom the participant interacts is a trained role player who acts the part of any one of a number of roles (e.g., supervisor, subordinate, coworker, customer). The role player could also be a trained assessor, although this dual function significantly increases the complexity of the assessor's role and may limit effectiveness in evaluating performance (Zedeck, 1986). Typically, the participant is given some background information and time to prepare for the interaction (usually 10 to 30 minutes). The participant may be given very specific instructions about what is to be accomplished during the meeting (e.g., create a budget, solve a particular problem) or very
vague instructions (e.g., counsel the employee), depending on the purpose of the assessment. After the interaction, the assessor may then interview the participant and role player to get their reactions. Alternatively, the participant can be asked to fill out a self-evaluation form after the exercise. Appendix C contains some questions that can be asked. These procedures can be an important opportunity to gather additional information about how the participants view their own performance. The benefits of the one-on-one interaction simulation are immediately obvious. First, this exercise can closely approximate the actual work environment and common problems that participants are likely to face on the job. Second, because interaction simulations are so realistic, they tend to have high face validity and are viewed positively by participants. An interaction simulation also tends to be more standardized than an LGD, because the role player is trained to carry out the activity in the same way with all participants. However, there are a few drawbacks to this technique: it takes time to develop interaction simulations, role players must be recruited, and role players may not behave consistently across participants (Thornton & Rupp, 2003). Despite these difficulties, interaction simulations can be highly reliable and valid if good procedures are used in their development. Moreover, after the initial development time, the actual time to administer an interaction simulation is relatively short.
Time Requirements

Preparation time: 10 to 30 minutes
Interaction with role player: 10 to 20 minutes
Follow-up interviews with the participant and role player: 10 to 20 minutes

BACKGROUND

Various forms of interaction simulations have been used in practice for the last 60 years. Perhaps one of the earliest examples was a component of the British War Office Selection Boards (WOSBs) in the 1940s. The interaction simulations of the WOSBs involved role playing in stressful situations and were designed to observe how the candidates dealt with difficult people under stress. Another precursor of this type of simulation was Fishbein and Ajzen's (1975) role-play technique, which was used to foster attitude change. The method was expanded to provide a situation in which to assess and develop other dimensions such as problem solving and communication ability. The technique has been found to be useful in training supervisors and is an integral part of many programs using behavior modeling principles (Goldstein & Sorcher, 1974). Today, the one-on-one interaction is one of the most frequently used simulations (Kudisch et al., 1999; Spychalski,
Quiñones, Gaugler, & Pohley, 1997). The technique may be used for individuals across a wide variety of organizational levels and functions.

STEPS IN CONSTRUCTING AN INTERACTION SIMULATION
Step 1. State the Purpose of the Simulation and Determine the Resources Available

In this chapter, we highlight the use of an interaction simulation to diagnose training needs. A city government wished to enhance its new-supervisor training program by devising a way to diagnose individual training needs. Although there was already a formal training program in place, much of the content of this program focused on learning policies and procedures. Human resources officials recognized that new supervisors received little formal training on actual supervision skills. Moreover, new supervisors often worked quite independently from other managers, so there was little opportunity for mid- and upper-level managers to observe and coach new supervisors. It was determined that an interaction simulation would be ideal for observing how new supervisors interfaced with their employees. Based on the observations gathered during the simulation, supervisors could then be given detailed feedback on their strengths and weaknesses, along with recommendations for obtaining additional training.

Along with the leaderless group discussion, interaction simulations are one of the best methods for evaluating how well an individual interacts with other people in a (sometimes) stressful situation. However, unlike the leaderless group discussion, participants cannot sit quietly while others dominate the conversation. Rather, the one-on-one nature of the exercise requires a considerable amount of interaction, thus allowing a full opportunity to observe each individual participant's behavior. The ability to interact effectively with others on an individual basis is important for nearly any position, from top executive to assembly-line worker, and the interaction simulation can be used for a variety of purposes ranging from training to selection.
Step 2. Conduct Situation Analyses

Prior to developing the interaction simulation for the city, a detailed analysis was conducted to gather information to develop the simulation. First, organizational documents were examined. This examination included a review of first-level supervisor job descriptions in a variety of departments, organizational policies and procedures for supervisors, and the existing training program. Based on information gathered in these documents and interviews with human resources staff, a detailed interview protocol was developed.
Several senior supervisors were then interviewed to gather information to develop the simulation. During the interviews, the supervisors described key challenges they faced when dealing with subordinates. The greatest challenge reported by the supervisors was working with employees to resolve performance issues. The supervisors gave several examples of critical incidents in which a situation was handled either very well or very poorly. They also described the common personality types of subordinates they often had problems dealing with on the job.
Step 3. Specify the Dimensions and Difficulty Level

Through interviews with supervisors and human resources staff, the two most important dimensions that emerged for city supervisors were individual leadership and conflict management. Individual leadership includes skills such as using effective and appropriate methods for helping others to achieve performance objectives. Conflict management was defined as handling interpersonal conflicts in a constructive way by maximizing benefits for both parties. Because the program was designed for new supervisors, the city decided to set only moderate expectations for the levels of performance expected on these dimensions in the exercise. Additional dimensions that may be observed in a one-on-one interaction simulation include listening and interpersonal sensitivity, which is an awareness of how what one says and does impacts other people, and a consideration of others' feelings and needs. Other dimensions include problem solving (which may entail problem or decision analysis, judgment, creativity, and decisiveness), management/administration (which may include planning and organizing, delegation, control, and development of subordinates), stress tolerance, initiative, and adaptability.
Step 4. Specify Features of the Simulation

As with all situational exercises, setting an appropriate scene for the simulation is crucial to its success. Some issues to consider when selecting a scenario for the one-on-one interaction simulation include:

• the similarity of the fictitious organization to the actual organization,
• the similarity of the role player's character to members of the actual organization,
• whether the scenario should be a problem that the organization is currently facing or a problem that is likely to occur in the future,
• the level at which the participants are expected to perform: whether this will be a relatively easy problem with one correct answer, or a relatively difficult problem.

On the one hand, matching the content of the simulation to the current organization helps make the exercise more realistic. Greater realism leads to greater ability to predict how participants will actually perform on the job and higher participant satisfaction with the assessment process. In a training situation, higher fidelity will result in greater transfer of training, and participants can directly apply the feedback they receive to situations they deal with on the job. On the other hand, if the simulations are highly similar to the actual work environment, participants who are more familiar with that environment will have a distinct advantage. This situation might be undesirable when the simulation is used for selection. Fatal Error 8 illustrates the potential problem of writing content that is too real and how this may put some participants at a disadvantage.

Fatal Error 8: Content Was Too Realistic for Some Participants

The one-on-one simulation exercise contained a situation almost exactly like a real situation that had recently occurred in the medium-sized organization that was using the exercise to guide promotion decisions. Whereas some participants had the advantage of knowing how the organization had handled the situation successfully in the past, other participants felt they were at a distinct disadvantage because they did not see the positive outcome. If pretesting with a pilot group had been carried out, the problem would have been apparent before implementation.

Often, the simulation will be similar to the actual organization in some ways but not others. The fictitious organization and its problems may be different, but the scenarios and role-player characters might be similar to those in the actual organization. For the city supervisor program, the simulation was set in a city government in another town. However, actual city policies and procedures were used as background materials, and the problems presented in the simulation were similar to actual problems encountered on the job. Interaction simulations can vary from as short as 30 minutes to as long as an hour and a half (or even longer). Specifying the time limits for the exercise will depend on the exercise's level of complexity (more complex exercises will take longer) as well as practical time constraints.
At a minimum, participants should be given adequate time to prepare for the interaction, including time to read the background materials and formulate a plan for how to handle the interaction. This may range from 10 to 30 minutes. The time allotted for the actual interaction should be long enough for the participant to discuss all the relevant issues thoroughly with the role player and arrive at some solution to the problem at hand. This may range from 10 to 20 minutes. Pilot testing as suggested in Step 8 will help determine the time limits. Monitoring the time the participant is taking to complete the interaction is crucial. Although, ideally, all participants should be given as much time as they need to complete the interaction, practical considerations may require some prompting to ensure timely completion. The participant may need to be told when his or her time is running short without interrupting the flow of conversation. To facilitate this, the assessor may hold up cards with times printed on them to let the participant know how much time is left. Alternatively, when time is running out, a more natural, less obtrusive strategy is for the role player to indicate that he or she needs to leave because of other obligations (e.g., being late for a meeting or needing to head to the airport). Lastly, 5 to 10 minutes should be allotted at the end of the simulation to interview both the participant and the role player (separately) about their impressions of the session, or to have the participant complete a post-exercise written reaction form.
Step 5. Prepare the Participants' Materials

Participant information should include the following:

• Background information that is sufficient to provide an overview of the simulated organization, which may include an organizational chart and job descriptions.
• Information about the present scenario and the person with whom the participant will be talking. To provide greater realism, this information might be presented in several memos, e-mails, faxes, and so on from different people, each with slightly different information and written from a slightly different perspective.
• Instructions for what the participant is expected to do, including information about time limits (for both preparation and the interaction) and the goals for the session. For example, the participant may be explicitly instructed to work with the role player until a solution to a problem is reached, or the instructions might be vague. Explicit instructions tend to provide a good assessment of maximal performance (i.e., what the participant is capable of). In contrast, vague instructions might give a better indication of what the participant would typically do in a similar
A variety of scenarios can be used in creating a one-on-one interaction. Table 8.1 lists some of the more typical types of scenarios.

TABLE 8.1
Typical Scenarios in a One-on-One Interaction Exercise

Supervisor/subordinate interactions: The participant plays a supervisor who is meeting with his or her subordinate to discuss a discipline problem (e.g., poor work performance, poor attendance, conflicts with others). A variant on this situation is the participant/manager who must meet with a lower level supervisor to provide leadership coaching.

Coworker interactions: The participant meets with a coworker to discuss an issue, solve a problem, or work on a project together. A variant on this situation is the cross-department interaction, where the participant must meet with a peer from another department (e.g., a manufacturing supervisor meets with a sales supervisor) to work out an issue or problem between the two departments.

Customer/client interactions: The participant plays a customer service representative who must deal with a difficult customer. The customer may be internal or external. The level of difficulty may range from entry-level customer service worker to professional consultant or advisor.

Subordinate/supervisor interactions: The participant plays an employee who meets with his or her supervisor to propose a new idea, present information, justify a decision, and so forth.

Community member interactions: The participant plays a member of the organization who must deal with concerned community members. For example, a stadium manager might have to deal with complaints from neighbors about noise, traffic, and so on.

Job applicant interactions: The participant plays a hiring manager or an individual in human resources who must conduct an interview with a job applicant.

Conflict resolution: The participant mediates a dispute between two parties.

Fundraising interactions: The participant attempts to solicit donations from the role player.

Vendor interactions: The participant must deal with a vendor who is not providing satisfactory service.

Special populations: The participant must interact in an appropriate and sensitive manner with persons with disabilities. These individuals may also be community members, customers, employees, and so on.

Government regulator interactions: The participant must interact with a government regulator who is conducting a records review, financial audit, safety inspection, and so on. A variant on this situation is a government auditor from an external funding source (e.g., in a grant or federal contract situation).

Sales interactions: The participant attempts to persuade the role player to purchase a product or service. The level of this interaction can range from retail sales clerk to professional sales executive.

Partner interactions: Organizations are increasingly partnering with other organizations that offer complementary products and services in order to provide a total solution to the customer. Managing these relationships has become more and more important. The participant in this scenario works with an organizational partner to forge a new relationship, discuss problems in a current relationship, and so on.

Media relations: The participant must deal with a representative from the media regarding a problem with the organization (e.g., safety violations, legal difficulties).
Appendix H gives some of the materials included in a set of simulations of one-on-one interactions between telephone customer service representatives and customers. The appendix shows partial materials for both the candidate and the role player acting as a customer calling to place an order.
Step 6. Prepare Support Materials for Administrators, Assessors, Resource Persons, and Role Players

Support materials for an interaction simulation need to be developed for both the role players and the assessors.

Role-Player Materials

The role-player information should include all the information given to the participant, plus detailed information on the background, thoughts, attitudes, motivations, and feelings of the character to be played. Background information should include personal and professional details such as length of time with the organization, performance history, family information, and personal problems. Information about attitudes, thoughts, and beliefs should be specific enough that the role player can get inside the head of the character. Table 8.2 provides a partial list of difficult personality types.
These types depict the personalities of people who, over the years, we and the managers we have interviewed have found especially hard to work with. These types can be mixed and matched to create complex and challenging individuals.

TABLE 8.2
Common Personality Types of Role Players

The Whiner: Complains about everything; nothing is ever good enough for this person. He or she tends to create morale problems by spending an inordinate amount of time bad-mouthing the organization to coworkers.

The Competitor: Tries to get ahead no matter who gets hurt in the process. Will steal others' ideas and stab them in the back. Continually strives to make him- or herself look good while making everyone else look bad.

The Denier: Refuses to admit problems or difficulties, even when confronted with them directly. Will lie to try to stay out of trouble. Tends to blame others for his or her difficulties.

The Antagonist: Deals with people in a hostile and aggressive manner. Is rude, insulting, and downright nasty. Tends to be suspicious of others' motives and resists any friendly gestures.

The Manipulator: Knows how to work the system and others to get what he or she wants. Will turn things around to make others look like the "bad guy." Pushes people to their limits but somehow manages to stay out of serious trouble.

The Emotional Wreck: Gets upset frequently over minor issues (e.g., being asked to take on new responsibilities, changes in schedule). Has very low tolerance for stress and change and may pout, cry, threaten to quit, and so forth when upset.

The Passive Aggressive: Is angry and hostile, but keeps these feelings somewhat hidden. Will be passive and pleasant when dealing with others, but will sabotage things by working slowly, feigning incompetence, and so on. May make sarcastic and cutting remarks and then pretend that he or she "didn't really mean it."

The Slacker: Never gets his or her work done on time (if at all). Goes out of the way to avoid taking on new projects. Never seems to put much effort into anything; does the bare minimum to get by.

In general, role-player instructions should:

• be specific enough that the role player behaves consistently across participants. The instructions should therefore contain detailed information about enduring attitudes and personality characteristics (e.g., "You resent the fact that you have been passed over for promotion several times and you blame your supervisor");
• be flexible enough that the role player is free to adapt to the situation as needed. For example, with less skilled participants the role player may need to apply relatively little stress so that the participant does not become too frustrated. In contrast, very skilled participants may need more of a challenge. The role player should have the latitude to change his or her approach to meet the needs of the participant while still being consistent with his or her basic character. However, in assessment situations where standardization is essential (e.g., a police promotion program), the role player must interact the same way with all participants;
• provide some means for sharing novel developments as a series of role plays unfolds with successive participants. When multiple role players are used, there is a need to ensure consistency over time. It is important for role players to share any information they made up while adapting their character to the situation at hand. For example, if a role player was asked what sports or recreational activities he or she is involved in and responded "basketball," such information should be shared with the other role players immediately after the conclusion of the exercise. Despite well-scripted roles, such unplanned instances occur. By sharing information, standardization between role players can be maintained;
• contain typical things the character says and does. For example, the role player could be instructed to have poor eye contact, slouch in the chair, and mumble when speaking;
• provide guidance on how to react to what the participant says and does. For example, the role player might be instructed to avoid making commitments to improve his or her performance for as long as possible, or to react positively to praise and negatively to criticism. The role player should also be instructed when to start allowing the participant to gain the upper hand; this may occur when the participant starts displaying effective behaviors;
• provide tips on how to bring the session to a close if the participant is taking too long.
The developer may wish to introduce a surprise element into the action of the role player. In the simulation with a problem employee, for example, a "curve ball" can be thrown at the participant early in the meeting: when the participant begins to address the role player's poor performance, the role player is instructed to respond in a very surprised manner (e.g., "Wait a minute, I thought this was a meeting to merely get acquainted. I had no idea you wanted to talk about my performance."). The role player's rebuttal usually catches participants by surprise. It also gives the assessor the opportunity to evaluate how well the participant adapts his or her position based on new information (flexibility), maintains poise (stress tolerance), empathizes with the role player (sensitivity), and responds to the rebuttal (confrontation, leadership).
The type of character the role player portrays should relate to the purpose of the assessment and the dimensions being evaluated. If individual leadership ability is being evaluated, the character should be someone who has significant performance problems. If conflict management is being evaluated, the role player should be instructed to challenge or argue with the participant. The character's personality should be challenging, yet realistic. No one person has all the characteristics listed in Table 8.2, but real people are not one-dimensional, and so characters in interaction simulations should not be either. The character should be allowed to have strengths as well as weaknesses; the more realistic the character, the better the experience for the participant.
Assessor Materials

The assessors should have all the materials provided to both the participants and the role players. The assessors will also need forms for taking notes and recording observations. While the participant is interacting with the role player, the assessor should take copious notes on what the participant says and does (additional guidelines for observing behaviors and recording notes are presented in chap. 13). After the exercise, the assessor can review his or her notes, categorize the various behaviors into the key dimensions, and then judge whether each behavior was an effective or ineffective example of that particular dimension. A behavior checklist or a behaviorally anchored rating scale can be very useful in this process. A sample behaviorally anchored rating scale for the individual leadership and conflict management dimensions is presented in Table 8.3.

TABLE 8.3
Behaviorally Anchored Rating Scales for a One-on-One Exercise

Individual Leadership

5  Clearly outlined performance expectations for the role player; described where the role player has failed to meet these expectations; provided concrete suggestions for how the role player could improve his or her performance; was able to get the role player to agree to a plan of action for improvement.

3  Outlined performance expectations for the role player and what the role player must do to improve; the suggestions offered on how the role player might improve were somewhat vague; the role player did not clearly agree to change behavior.

1  Did not provide clear expectations for performance or what must be done to improve performance; did not get the role player to agree to change his performance or improve in any way.

Conflict Management

5  Calmly redirected the role player when he started to argue; kept the focus of the conversation on work-related issues; worked constructively with the role player to find a "win-win" solution to the conflict.

3  Was somewhat defensive when the role player started to argue; occasionally allowed the role player to steer the conversation off topic; was eventually able to resolve the conflict through compromise.

1  Was rude or harsh or argued with the role player on three occasions; brought up personal or irrelevant issues; presented only "win-lose" solutions in which the conflict could only be resolved if one party wins and the other loses.

Determining which behaviors are effective or ineffective examples of a particular dimension is an iterative process, beginning when the simulation is being developed. Senior managers, human resources professionals, and long-tenured incumbents can serve as resources. If the simulation is patterned after an actual event that an SME described in the critical incident phase of the situation analysis, the list of behaviors can come from what actually did and did not work well. Additions and refinements to the anchors will come during assessor training and pilot work with the exercise.

In addition, the assessors should have a post-simulation interview questionnaire (if appropriate) and evaluation guidelines. Interviewing the participant and the role player briefly after the interaction can often provide insights beyond the observations gathered during the exercise. Typically, the participant is asked questions such as "How do you feel the interaction went? What do you think went well and what did not go well? Do you think you convinced (the role player) to see things your way? What do you think would happen next in this situation? If you could do this situation over again, what might you do differently?"
The role player might be asked questions such as "How did the participant make you feel? Do you think you might change your behavior as a result of this interaction?" These questions can help the assessor determine whether the participant had a realistic perspective on his or her performance and whether the participant's view of the interaction was congruent with the role player's view.
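For developers who track observations electronically, the following short Python sketch (our illustration, not part of the original program materials) shows one way a tally of behaviors classified by dimension might suggest a provisional rating. The dimension names follow Table 8.3, but the 80% and 50% cutoffs are purely illustrative assumptions; the final rating must always remain the assessor's judgment, informed by the BARS anchors.

from collections import defaultdict

# Each observation: (dimension, behavior note, classified as effective?).
# These example notes are fabricated for illustration.
observations = [
    ("individual leadership", "outlined performance expectations", True),
    ("individual leadership", "offered only vague improvement suggestions", False),
    ("conflict management", "calmly redirected the role player", True),
    ("conflict management", "brought up irrelevant personal issues", False),
    ("conflict management", "proposed a win-win solution", True),
]

def provisional_rating(effective: int, total: int) -> int:
    """Map the share of effective behaviors onto a 1-5 scale (illustrative cutoffs)."""
    share = effective / total if total else 0.0
    if share >= 0.8:
        return 5
    if share >= 0.5:
        return 3
    return 1

tallies = defaultdict(lambda: [0, 0])  # dimension -> [effective count, total count]
for dimension, _note, effective in observations:
    tallies[dimension][1] += 1
    if effective:
        tallies[dimension][0] += 1

for dimension, (eff, total) in tallies.items():
    print(f"{dimension}: {eff}/{total} effective -> "
          f"provisional rating {provisional_rating(eff, total)}")

Such a tally is only a starting point for the discussion of ratings; it does not replace the iterative anchor refinement described above.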
Step 7. Train Assessors and Role Players

Because an interaction simulation requires both an assessor and a role player, it may be tempting to have one individual serve as both.
However, we would urge extreme caution before adopting this strategy because serving dual roles increases the cognitive complexity of the task and may limit the assessor's effectiveness in evaluating behavior (Zedeck, 1986). When serving as a role player, it is difficult to evaluate the situation objectively. Moreover, the role player would not be able to take detailed notes during the simulation and would have to rely on memory during the evaluation. If this situation cannot be avoided, it is recommended that the interaction be video- or audio-recorded so that the assessor can review it when making an evaluation.

Assuming that the assessor is not serving as the role player, the assessor should be relatively unobtrusive during the interaction, taking detailed notes of what the participant says and does and keeping track of the time. The role player, on the other hand, actively engages the participant. Therefore, adequate training is essential to the success of the exercise.

Role players should be picked so that they could realistically be in the position depicted in the scenario. Age and relative age are important considerations here. For example, it may be unrealistic to have a young, inexperienced administrative aide play the role of an executive in a simulation where the participant is a middle manager interacting with the CEO of the company. By the same token, an older middle manager may not be appropriate to play the role of a mail delivery employee having problems with sloppy dress and a radical hairstyle. (We do not wish to reinforce age stereotypes in these recommendations. The developer can use role players with a wide range of ages. We are simply advising against a gross, unrealistic mismatch between the role player and the role to be played.)

Role players need not have any acting training, but they should feel comfortable acting out a role. Many people are not comfortable in this capacity. It is essential that all the role players are motivated and willing to take on this responsibility; otherwise they will have a hard time playing the part convincingly. If the simulation will be used with a large number of participants, it may be necessary to have several individuals who can play the same role. If the simulation is written in a gender-neutral fashion, the role players need not even be of the same gender. When more than one role player is used, they should be trained to play the role in a similar fashion. One technique to accomplish consistency is to have role players watch each other enact the roles in training.

The following list of tips will help role players consistently carry out their part of the exercise:

• Adhere to the role-play guide.
• Don't lead participants; don't refer to topics unless the participant first brings them up.
• Remain in role despite what happens (e.g., in the event the participant starts laughing, crying, etc.).
• Be consistent within and between role plays.
• Be concise rather than giving up a lot of information too soon.
• Maintain your composure.
• Prevent premature closure of the meeting (e.g., if the participant attempts to end the session in 5 minutes, bring up other issues as a means of continuing the simulation and generating observable behavior).

Once the role players have been identified, they will need practice and feedback before actually acting in an exercise. During this practice and feedback session, role players should:

• be given time to read all the relevant background materials;
• have the opportunity to discuss the character and ask questions;
• observe a practice interaction simulation with an experienced role player (this may even be the developer of the exercise or an experienced assessor);
• practice playing the role with each other while being observed by an assessor;
• be given feedback on their performance and allowed subsequent practice;
• practice responding to questions and behaviors not anticipated in the training materials;
• learn to maintain a balance between reacting to the individual participant and maintaining consistency across participants.

If during the training it becomes apparent that a would-be role player is just not up to the task, that person should not be used in the interaction simulation. It may be that the person has a difficult time playing the role convincingly or simply is not comfortable play-acting.
Step 8. Administer to a Pilot Group

Once the materials have been developed and the role players have been trained, the interaction simulation should be tried out with a pilot sample. This process will help to refine the simulation and work out any kinks in the administration. In our example, the pilot sample consisted of several moderately experienced supervisors. After conducting the assessment, both the assessors and the supervisors provided feedback on the exercise, and final changes were made before the simulation was used in the organization.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

When using a simulation for diagnosis of training needs, it is important to keep in mind that just participating in the simulation is probably not adequate training in itself. Participation and feedback may give a concrete diagnosis of strengths and weaknesses and suggestions for improvement, yet not lead to skill development. True training is an iterative process that requires that the individual have a chance to practice what he or she has learned conceptually and then receive additional feedback. When used as a diagnostic tool, participation in a simulation exercise can be an important first step in the training process, but participants must then have the opportunity to implement what they have learned and get feedback about their efforts.

In our example of the city program for new supervisors, the organizational culture was such that supervisors worked very independently with little direct oversight. Therefore, after participating in the simulation, supervisors were given very specific recommendations for how to get follow-up training, and they were reminded that they needed to take the initiative to obtain this training. Additionally, the city planned to develop another simulation so that supervisors could participate again and gauge how much they had improved since the first assessment.
Step 10. Solicit Participants' Reactions

In addition to reactions obtained during pilot testing, reactions to the simulation should be gathered on a regular basis while the simulation is in use. When the simulation is used in a diagnostic setting, it will be relatively easy to collect reactions from each participant. Reactions could be gathered either immediately after the participants receive feedback or at some point in the future, after the participants have had the opportunity to implement what they have learned back on the job. Questions to ask the participants might include:

• Was the exercise a realistic simulation of the types of individuals and situations that you find on the job?
• Was the feedback that you received helpful?
• Have you been able to implement the suggestions you received in your job?
• If not, what are some of the barriers to implementing the suggestions?
• What was most helpful about this experience?
• What was least helpful? What do you think should be improved?
Gathering these reactions can ensure that the simulation experience is helpful to the participants, and the reactions will provide valuable insights into how to continually improve the exercise.
Step 11. Conduct Psychometric Evaluation

Unlike in a selection setting, in a diagnosis or training setting there is less need to ensure that the simulation meets rigorous psychometric standards. Nevertheless, reliability and validity are important in a training setting to ensure that participants are getting accurate feedback.

In an interaction simulation, ratings and feedback are based entirely on the observations and judgments of one assessor. Therefore, interrater reliability (i.e., the extent to which ratings remain consistent across assessors) is key. Interrater reliability can be established during assessor training and pilot testing by having multiple assessors observe and rate the same participants. Each assessor should come up with essentially the same observations and feedback for a given participant. To the extent that the assessors agree, reliability is established. When assessors disagree, observations and ratings should be discussed until every assessor is using a common frame of reference and common standards to assign ratings.

In a diagnostic situation, the validity of an interaction simulation may be established by ensuring that the strengths and weaknesses noted in the simulation are accurate reflections of the participants' real strengths and weaknesses. This can be determined in a variety of ways, such as comparing assessment ratings to ratings from supervisors, peers, or subordinates, or comparing ratings to a self-assessment. In our example of the new supervisor program, supervisors commented that the feedback they received was consistent with feedback they had received elsewhere. These reactions helped to provide validity evidence for the simulation.

Additionally, validity evidence can be gathered by determining whether participating in the simulation helped the individual improve his or her subsequent performance. This can be evaluated by comparing future performance ratings of participants to ratings of other supervisors who have not yet had the opportunity to participate in the simulation (note: the groups should be similar in other respects, such as position level, experience, and education). If participating in the simulation had an impact on performance, the ratings of participants should be higher than the ratings of individuals who had not yet participated.

When an interaction simulation is used for selection purposes, more rigorous methods of establishing reliability and validity should be employed. Chapter 14 provides additional information for establishing reliability and validity.
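For readers who wish to compute these checks, the following Python sketch illustrates the two analyses described above: agreement between two assessors rating the same participants (summarized here with Cohen's kappa) and a comparison of later performance ratings for participants versus a not-yet-assessed comparison group. All numbers are fabricated, and the choice of kappa and an independent-samples t test is our assumption; the text itself does not prescribe particular statistics.

# Requires scikit-learn and scipy. All data below are fabricated examples.
from sklearn.metrics import cohen_kappa_score
from scipy import stats

# Interrater reliability: two assessors rate the same ten participants
# on the 1-5 BARS scale for one dimension.
assessor_a = [5, 3, 3, 1, 5, 3, 1, 3, 5, 3]
assessor_b = [5, 3, 1, 1, 5, 3, 3, 3, 5, 3]
kappa = cohen_kappa_score(assessor_a, assessor_b)
print(f"Cohen's kappa between assessors: {kappa:.2f}")

# Validity evidence: compare later performance ratings of simulation
# participants against a comparable group that has not yet participated.
participants = [4.2, 3.8, 4.5, 4.0, 3.9, 4.3]
comparison_group = [3.6, 3.9, 3.5, 3.8, 3.4, 3.7]
t_stat, p_value = stats.ttest_ind(participants, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

With samples this small the test is underpowered; in practice the comparison would be run on the full cohorts once follow-up ratings are available.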
9

In-Basket Exercises
An in-basket exercise is a simulation in which a participant must sort through, and act on, the contents of a typical "in-box." The contents of the in-basket may include notes, memos, letters, reports, or other materials and can be delivered strictly in paper form, electronically, or in some combination. The participant is asked to respond to the items in writing within a specified time period. This time limit can be a significant component of the exercise if the goal is to observe how the participant responds to deadlines and pressure. Assessors may wish to meet briefly with the participants after they complete the exercise to clarify responses and to gain insight into the thought processes the participant used when completing the exercise.

There are several advantages to in-basket simulations. First, in-baskets can be administered in either an individual or a group setting. Second, once the materials have been developed, administration is relatively straightforward. Assessors need not observe the participants as they work through the exercise, and someone other than a trained assessor may administer it. Assessment and scoring are done after the participant has completed the exercise by reviewing the participant's written responses. This arrangement allows the evaluation to be done at a later time, at the assessor's convenience, and even at a different location. Finally, a wide variety of dimensions can be observed in the in-basket, making it especially appropriate for management and administrative positions. Moreover, the in-basket is one of the few methods available for examining participants' administrative skills. The in-basket can also be linked with other exercises. For example, information in the in-basket can provide some of the background for a subsequent one-on-one interaction simulation.
Time Requirements

Instructions: 5 to 10 minutes
Completion of in-basket: 1 to 3 hours
Evaluation of responses: 1 to 2 hours
Follow-up interview: 30 to 45 minutes

BACKGROUND

A research team at Educational Testing Service first developed the in-basket technique in 1953 for the United States Air Force. The purpose of this technique was to evaluate the effectiveness of the Air College in teaching skills such as organization, problem identification, and decision-making. Through the work of Frederiksen and his colleagues (e.g., Frederiksen, Saunders, & Wand, 1957), the introduction of the in-basket opened the way for the assessment of new administrative and managerial dimensions and stimulated the development of new types of written exercises (e.g., problem-analysis exercises; Thornton & Byham, 1982).

The in-basket technique was then used as a component of the AT&T Management Progress Study conducted by Douglas Bray and his colleagues (Bray & Grant, 1966). This study was a comprehensive evaluation of the characteristics related to adult development and managerial success. The study used the assessment center method to evaluate these characteristics and included an in-basket exercise. Since the time of these early pioneers, the in-basket has been, and continues to be, one of the most frequently used simulation exercises (Kudisch et al., 1999; Thornton & Byham, 1982; Thornton & Rupp, 2003). Although Schippman et al. (1990) found limited evidence for reliability and validity in studies published in the prior 30 years, they concluded that scorer reliability and validity can be high when the in-basket is developed for a specific job and setting. This book is intended to give developers guidance in achieving these desirable results.

STEPS IN CONSTRUCTING AN IN-BASKET SIMULATION
Step 1. State the Purpose of the Simulation and Determine the Resources Available

The in-basket simulation may be used for a variety of purposes, such as selection, training, development, and research. In this chapter we highlight the use of the in-basket in promotion situations. In many ways, the in-basket simulation is ideal for this purpose. Line employees often have few administrative and management responsibilities, and when considering these individuals for promotion, it is inappropriate to base the decision solely on ratings of present performance, because the individual's present role is substantially different from the role he or she would have after promotion.
Performance on the in-basket can be used to predict how these individuals might perform at a higher level in the organization. In-baskets are frequently used in police department promotion processes. The job of a police officer is often very different from that of a sergeant or lieutenant, and very good officers may not necessarily make good supervisors. Although the officer may have to do a fair amount of paperwork and report writing, she or he generally does not have management responsibilities. Therefore, the in-basket can provide a measure of how the officer might handle staffing issues, formulation and enforcement of policies and procedures, replying to written requests, or preparing other written documents.

An in-basket requires a substantial amount of time to develop. There are often 30 or more separate items in the in-basket, each representing a problem that the participant must solve. Many of the items are interrelated, which requires a degree of creativity to ensure these relationships are plausible. Moreover, in the case of public safety promotions, test security is a major concern, and completely new exercises must be developed for each promotion cycle. Therefore, prior to undertaking development of an in-basket simulation, the organization must ensure it has the appropriate resources to develop a good exercise, the assessor time to score in-basket responses, and the capacity to update the content as needed.

The In-Basket as a Research Medium

In this chapter we are emphasizing the application of the in-basket simulation for promotion purposes. The in-basket has also been used as a research tool. It provides a setting in which some "independent variable" can be manipulated in an experimental design that compares the responses of groups of individuals who see slightly different versions of the exercise. For example, assume a researcher is interested in studying gender bias among managers in the selection of employees for attractive assignments in an organization. The background of an employee being considered for a special assignment can be provided in the in-basket. Conflicting information about the employee can be presented throughout the in-basket, some supporting and some not supporting selection, so that the decision is not clear-cut. For half of the managers (the subjects in the research study), the employee in the in-basket is named "Mary," and for the other half, the employee is named "Bob." Thus, the variable being studied is the gender of the employee. All other information about the employee is held constant. The participant is required to decide whether or not to select the employee for the assignment.
The in-basket is an ideal medium for conducting this type of research. The manager's decision is embedded in a larger complex of actions being taken, and it is highly unlikely that the participant will recognize that the purpose of the study is to examine gender bias. Thus, this setting tends to minimize the artificial demand characteristics that may contaminate some experimental research settings. The situation is lifelike (i.e., it involves making an important decision, albeit in a simulation, that will affect the employee's career). This method is more realistic than some laboratory experiments yet safer than conducting research in organizational settings.
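A minimal sketch of how the resulting decisions might be analyzed appears below. The counts are fabricated, and the use of a chi-square test of independence is our assumption rather than a procedure specified in the text.

# Requires scipy. Managers see identical in-baskets except for the
# employee's name ("Mary" vs. "Bob") and decide whether to select
# the employee for the attractive assignment.
from scipy.stats import chi2_contingency

#                 selected  not selected   (fabricated counts)
contingency = [[18, 12],   # "Mary" version
               [24,  6]]   # "Bob" version
chi2, p_value, dof, _expected = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

A significant result would indicate that selection rates differ by the manipulated gender of the employee, the hallmark of bias in this design, since everything else about the two versions is held constant.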
Step 2. Conduct Situation Analyses

When used in the promotion process, items in the in-basket must be relevant to the target job. Making the exercise as job-related as possible makes it more face valid, and thus more acceptable to participants. This is especially important in police promotion programs because the promotion process is so often challenged in court. These challenges often arise from individuals who are not promoted and who claim that the promotion process was unfair or resulted in adverse impact. Basing the in-basket on a thorough job analysis is the best defense against such a claim.

Conducting an analysis for an in-basket exercise should start with a thorough examination of the duties and responsibilities of people in the target job. For example, the job of a police sergeant may entail staffing issues, handling discipline, addressing citizen complaints, ensuring vehicles and other equipment are available, and making reports to the lieutenant or captain. The in-basket items should reflect a representative cross-section of these responsibilities, rather than focusing on any one duty. The analysis should also include a review of relevant written documents such as job descriptions, performance appraisal forms, and policies and procedures handbooks. Additionally, individuals in the target jobs should be interviewed, as well as individuals in higher-level positions who supervise people in the target position. Observations and job shadowing are good complements to the interviewing.

During the data-gathering phase, the developer should be especially on the lookout for critical incidents, that is, real problems that have arisen in the past and the techniques used to resolve them. Critical incidents should include both problems that were handled well and problems that were handled poorly so that scoring guides can be written that include examples of effective and ineffective behaviors. The information gathered during this phase will be useful not only in developing the simulation materials but also in determining the dimensions to be assessed.

The situation analysis should also reveal important elements of the organization's climate and culture that can guide the development of the exercise content.
Fatal Error 9 reveals how failure to understand a current challenge facing the organization, namely an upcoming union certification election, can lead to the inclusion of sensitive materials.

Why Simulation Exercises Fail

Fatal Error 9: Sensitive Materials

Embedded in an in-basket for managers in a manufacturing firm was mention of supervisors holding meetings with representatives of employees to discuss problems on the work floor and scheduling. Higher management rejected the in-basket because they were afraid such meetings implied that the organization endorsed employees forming a union. This issue arose at a sensitive time when a certification election for union representation was about to take place. Management opposed the union and did not wish to imply any approval of meetings with employee groups to discuss work rules. A thorough situation analysis would have revealed the important contextual factor of a pending union election.
Step 3. Specify the Dimensions and Difficulty Level

The in-basket may be used to evaluate several dimensions related to management and administration, and it is one of the few exercises (along with the case study) in which written communication skills can be evaluated. Other dimensions that may be evaluated include initiative, management control, interpersonal sensitivity, judgment, decision-making, and decisiveness. In general, the in-basket exercise is an effective tool for assessing the more cognitively oriented dimensions, such as problem analysis and judgment. Although the in-basket can be a medium for assessing interpersonal sensitivity, the reader will recognize that the participant is dealing with "paper people," and thus interpersonal exercises with "live people" offer a better medium for assessing interpersonal skills.

Two of the dimensions assessed in the police promotional program in-basket were planning/organizing and delegating. Planning and organizing is defined as establishing a course of action for the self and/or others to accomplish a specific goal, including planning proper assignments of personnel and appropriate allocation of resources. Delegating is defined as effective allocation of decision-making and other responsibilities to the appropriate subordinates, and utilizing subordinates effectively.

Choosing the appropriate level of difficulty depends on the level of the target job and the purpose of the assessment. In a promotion situation, the participants generally will not have performed in the target job. Therefore, it is appropriate to target the difficulty level to that of a new person in the job.
It would be unrealistic and unfair to evaluate these individuals against the criterion of an experienced person in the same role. Rather, expectations in the exercise should mirror what a new manager should know before receiving any specialized training.
Step 4. Specify Features of the Simulation

Once the target dimensions and the appropriate level of difficulty have been selected, decisions must be made about how to design the actual simulation. The content of in-baskets can range from the tactical (e.g., dealing with day-to-day operations) to the very broad-based and strategic (e.g., dealing with the integration of many functional areas of an organization's operation). In designing an in-basket for the police promotion program, key decisions centered on the problems or issues to be presented in the in-basket and whether these issues should be similar to real problems faced by the organization. Generally speaking, in a promotion setting it is best to have the simulation resemble the target job closely enough that it is face valid and acceptable to participants, while at the same time being different enough not to give certain people an unfair advantage. For example, if real issues from the organization's past are used, individuals who happen to be familiar with these issues will be at an advantage. A better course of action is to include problems that are similar to real problems in the organization but that differ with regard to the details. Additionally, it may be appropriate to set the simulation in a context other than the actual organization.

Another issue to consider when making specifications for the exercise is the length of time that will be permitted to complete the in-basket. Naturally, this may depend on the organization's resources, as longer exercises will require more staff time to administer and score and, in the case of promotions, more time from participants. However, longer in-baskets provide more opportunity for the participant to exhibit relevant behaviors. After a certain point, though, there is a decreasing rate of return, and participants are likely to become fatigued. In-baskets typically last 2 to 3 hours, and this seems to be enough time to observe relevant behaviors without causing undue fatigue.
Step 5. Prepare the Participants' Materials

Participant information should include the following:

• Extensive background information that provides an overview of the simulated organization. This information may include an organizational chart, personnel policies, and a summary of past performance reviews. The background information must be detailed enough to give the participant sufficient information to draw from when responding to the in-basket items. It is also necessary to set the scene for the exercise by providing a rationale for why this batch of work must be handled in a brief time period. A common scenario entails the participant playing the role of a new manager whose predecessor had to leave unexpectedly (due to illness, a death in the family, etc.), leaving the participant to deal with immediate issues. Another common scenario is the participant playing the role of a manager who will be out of the office for several days and must first take care of the in-basket items. Often the scenario is set on a weekend so that the participant will not be able to reach anyone by phone or have access to an administrative assistant.
• A calendar for scheduling meetings and appointments. The participant should also be told what day it is in the scenario so that he or she can schedule appointments appropriately.
• Instructions for what the participant is expected to do, including information about time limits and how the participant is expected to respond to the materials in the packet. It should be stressed that the participant is expected to make all responses in writing, including planned future actions. Additionally, participants may be instructed to write responses directly on the in-basket items and attach additional notes if necessary.
• The packet of in-basket materials (described in detail below). To enhance the realism of the exercise, these materials should be of varying sizes, shapes, and colors. Some items may include (legible) handwritten notes or signatures.
• Blank paper or stationery that the participant can use to respond to the materials.
• A clock or timer.
The in-basket materials themselves should consist of letters, memos, phone messages, e-mails, reports, and any other written documents that are appropriate. The content of the items should cover several areas, such as staffing difficulties, financial data, and technical information, and the items should have varying degrees of time-sensitivity, such that some require an immediate response while others can be postponed or delegated. Table 9.1 lists several common issues found in in-basket simulations. The items should also come from a variety of sources, such as supervisors, direct reports, peers, and customers.

If the goal is to assess willingness to act, or motivation, then at least some of the items should be less demanding so that the participant has a choice of whether to respond. If the goal is to assess ability to act, then at least some of the items should include direct orders from people in authority (such as the supervisor or a higher level executive). Both of these goals can be accomplished by varying the level of urgency of the items.
TABLE 9.1
Typical Problems Included in an In-Basket Exercise

Interdepartment conflicts: One or more memos indicate a disagreement between the participant's department and another department. For example, patrol officers may complain about dispatch workers.

Customer complaints: One or more angry letters from irate customers threaten some action unless their complaints are resolved.

Legal difficulties: A letter from an attorney threatens legal action. For example, the letter could pertain to a former employee who is suing for discrimination.

Supervisory requests: The participant's supervisor has requested that the participant take some action on a particular item. The request could be either urgent or routine.

Citizen complaints: Especially appropriate in police promotion programs, a letter from a citizens' committee complains about officer behavior in the community (e.g., harassment, racial profiling).

Time conflicts: Two requests are made for meetings or appointments at the same time. The participant must decide what to attend and how to handle the event that cannot be attended.

Staff complaints: Direct reports may write memos complaining about nearly anything, including scheduling, pay, other staff, and so on.

Unimportant distracters: Several items should be included that are time-consuming yet relatively unimportant, such as routine reports to review or interesting articles to read.

Financial data: A department budget is included. This may be strictly for information, or the participant may be asked to reduce the budget by a certain percentage.

Purchasing requests: Requests may be made from staff to authorize them to spend a certain amount on hiring new people, buying new equipment, and so forth. Usually an immediate decision is requested.

Integrated problems: Several items are included that relate to a single problem. The common problem may not be obvious, however. For example, several items may pertain to problems on a particular shift. The participant must recognize what the underlying problem is.
Some Suggestions, Tips, and "Tricks" for In-Basket Items

The following tips for writing in-basket items have proven helpful:

• Include enough information on some issues so that a reasonable person will believe he or she can take action.
• Bury a critical item later in the sequence of items. An example would be a memo toward the bottom of the packet stating that a key employee quit or is out sick. This strategy will adversely affect those participants who inappropriately handle the in-basket materials by addressing items sequentially rather than skimming and prioritizing all issues.
• Put an important meeting on the calendar and then have an important person set another meeting for that same time.
• Have a superior suggest that a particular employee be delegated an assignment, yet have evidence in other items or in the background information indicating that this employee is not performing well.
• Have a critical outsider (e.g., a customer or citizen) say he or she is coming to visit at the same time an important meeting is already scheduled.
• In the background information, provide a summary of prior performance appraisals showing that some employees are doing well and others are doing poorly.
• Include a memo from a superior who requests that the assessee take some action that is ethically or professionally questionable (e.g., falsify a record or alter a report).
• Include a memo from a subordinate supervisor (e.g., a lead person) saying that he or she is about to tell a machine operator that the operator is fired for "insubordination." Include in the background information a policy statement saying all terminations must be approved by the department head and an HR representative.
• Include instances of inappropriate upward delegation to see how participants respond (e.g., include a memo from a subordinate requesting that the participant complete a task or assignment that should be handled by the subordinate).
• Make sets of items related to each other. For example, provide two or more items giving evidence that an employee is a problem, as well as a memo from a higher level manager who wants this person to be given a prime assignment; or provide items giving contradictory information about the quality of an employee's performance, with the "negative" evidence coming from another problem employee.
Computerized In-Baskets

With the advent of technology, some recent in-baskets have been developed for administration via computer. We wish to distinguish between two arrangements for the participant's responses to the in-basket materials: a constructed-response format and a multiple-choice response format. In the constructed-response format, some memos and letters come in the form of e-mail, along with the standard paper memos and letters, and participants are expected to reply via e-mail or handwritten responses. This arrangement conforms to the traditional definition of a behavioral exercise because the participant must display overt behaviors, that is, actually write a response.
The second arrangement, the multiple-choice format, presents the participant with several predetermined response options, and the participant simply chooses one of these alternatives. In our opinion, this format does not qualify as a behavioral simulation, because it does not require the participant to overtly construct the response he or she wishes to give. Thus, although a computerized in-basket may demonstrate some desirable features (e.g., ease of administration and scoring, reliability of scoring, and predictive validity), this format does not conform to the definition of a behavioral simulation as stated by the International Task Force on Assessment Center Guidelines (2000).

Appendix I provides a sample of materials from an in-basket for administrators in a governmental agency. Included there are a set of instructions for the participant and a few of the memos and letters appearing in the packet. Space does not allow the presentation here of all introductory materials (e.g., policies, calendar) and all items in the actual exercise.
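For developers building a computerized version, the following Python sketch shows one possible data model that makes the constructed-response versus multiple-choice distinction explicit. The class and field names are illustrative assumptions on our part, not a published specification.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InBasketItem:
    item_id: int
    source: str                       # e.g., "supervisor", "customer"
    body: str                         # the memo, letter, or e-mail text
    response_format: str              # "constructed" or "multiple_choice"
    options: Optional[List[str]] = None  # used only for multiple-choice items

# A constructed-response item: the participant writes an actual reply,
# preserving the overt behavior that defines a behavioral simulation.
staffing_memo = InBasketItem(
    item_id=7,
    source="direct report",
    body="Two officers called in sick for tonight's shift...",
    response_format="constructed",
)

Keeping the format as an explicit field makes it easy to audit an exercise and confirm that the items intended as behavioral measures actually require a written response.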
Step 6. Prepare Support Materials for Assessors, Administrators, Resource Persons, and Role Players

Generally, assessors need not be present while the participants are completing the in-basket; however, someone should be on hand to administer the session and answer any questions. The administrator's guide should include information about how to introduce the exercise to the participants, time limits, and how to address participants' questions.

After the participants complete the exercise and the assessors have a chance to review the responses, there is sometimes a follow-up interview to gain further insight into the participants' rationale and thought processes. In this case, the assessors need an interview guide that is structured enough that the interview is consistent across participants yet flexible enough that the interviewer can ask the participant about his or her individual responses. When asking about items the participant did not respond to, the interviewer should be aware that the participant may be led to believe that something he or she did not respond to was important, and the person may attempt to rationalize why a response was not given. Such rationalization may be informative. Alternatively, a post-exercise questionnaire can be used to gather information.

Because of the large number of responses in an in-basket simulation and the interrelatedness of the items, scoring the in-basket can be a complicated and time-consuming process. To aid in this process, it is helpful to provide a checklist of effective and ineffective behaviors for the dimensions being assessed. For example, for the dimension of delegation, there are some items for which delegation is appropriate and other items for which it is not. A checklist will help assessors keep track of how often the participant delegated work appropriately.
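As a concrete illustration, the following Python sketch shows how such a delegation checklist might be tallied. The item numbers and judgments are fabricated; the developer's own judgments about which items are appropriate to delegate must come from the situation analysis.

# Keys are in-basket item numbers. For each item the developer specifies
# whether delegation is appropriate; the assessor records what the
# participant actually did. All values here are fabricated examples.
appropriate_to_delegate = {1: True, 2: False, 3: True, 4: True, 5: False}
participant_delegated = {1: True, 2: True, 3: False, 4: True, 5: False}

matches = sum(
    participant_delegated[item] == appropriate_to_delegate[item]
    for item in appropriate_to_delegate
)
print(f"Appropriate delegation decisions: {matches}/{len(appropriate_to_delegate)}")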
Behaviorally anchored rating scales such as those displayed in Table 9.2 can be useful in helping the assessors integrate information from the checklists and come up with overall ratings for each dimension.

TABLE 9.2
Behaviorally Anchored Rating Scales for an In-Basket

Delegation

5  Made maximal use of staff resources by delegating all tasks that are appropriate; delegated not just routine tasks but tasks that provide an appropriate challenge or developmental opportunity to support staff and other direct reports; identified tasks that are best handled personally and did not delegate these tasks.

3  Delegated most tasks appropriately; chose to do some tasks personally that are more appropriately done by others; delegated some tasks that would have been more appropriate to handle personally.

1  Did not delegate any tasks; attempted to do everything, even tasks that are best handled by others (e.g., clerical or administrative assistants); attempted to delegate nearly everything, even very important items that the participant should handle personally; delegated to inappropriate personnel.

Planning and Organizing

5  Used the calendar to schedule appointments and track deadlines; used a to-do list or some other system to keep track of everything that must be done; prioritized appropriately so that the most important items would be handled first; made a plan for how all items would be handled and for holding others accountable when tasks were delegated.

3  Used a calendar and/or to-do list to keep track of appointments, tasks, and deadlines; showed no clear plan for how tasks were prioritized; made a plan for handling most items that required a response, although failed to respond to some items.

1  Had no system for keeping track of appointments, tasks, or deadlines; did not appear to have prioritized work tasks; approached items in a haphazard manner; did not have a plan for how any tasks were to be accomplished.

Although the in-basket simulation typically does not require a role player, there are some situations in which a role player may be used. For instance, a role player may be used to interrupt the participant during the exercise to present an urgent new problem. Page (1995) cites an example from a police promotion process in which a role player interrupts the participant with the news that two officers were just injured in a car accident on their way to work. In this situation, the assessee could be given the option either to "delegate" some tasks to the role player or to handle everything on his or her own.
Another example of role player use in an in-basket comes from the Israeli Air Force, in which officer candidates are assigned an enlisted person to work with during the exercise (Thornton, 1992). The candidates are then assessed on their ability to delegate appropriately, their use of the staff person as a resource, and their interpersonal style with the subordinate.

The choice of whether to use role players depends not only on what is being assessed but also on the staff resources available. A major benefit of the in-basket is that it may be administered in a group setting, thus saving staff time during administration. Providing an "assistant" to each participant requires enough resources to make this feasible. Moreover, if "assistants" are used, it would not be appropriate for the in-basket to be completed in a group setting. When role players are used, they will need training. A thorough discussion of training for role players can be found in chapter 8 (One-on-One Interaction Simulations). In general, role players in an in-basket have a passive rather than an active role, as they are typically there to serve as a helper to the participant. However, the role players in this situation can provide valuable insights into how the participant interacts with support staff and delegates work.
Step 7. Train Assessors

In some ways the assessor's role in evaluating an in-basket is easier than in other exercises. The assessor does not need to simultaneously observe and record behavior as it occurs; rather, the behavior is evaluated from the participant's written responses (and, in some cases, verbal responses if an interview is used). This feature of the in-basket makes it less hectic for the assessor. On the other hand, evaluating the materials afterward can be time-consuming and challenging due to the complex nature of the items in the in-basket. Moreover, because many of the items in an in-basket are related, the assessor must be tuned in to whether the participant made these connections and responded appropriately.

Because assessors do not make observations of behavior as it occurs, training assessors for the in-basket is slightly different from training for other exercises. It is a good idea to have the assessors begin by completing the in-basket themselves, under the same time restrictions as the participants. This helps assure that the assessors have a good understanding of how the exercise works and what participants are expected to do. Next, the assessors should be given some feedback on their performance, along with specific examples of effective and ineffective behavior. If multiple assessors will score the same in-basket, training should then focus on ensuring that assessors score the in-basket consistently.
This last component of the training is best accomplished by having the assessors all score the same in-basket responses. These responses can either be from a previous assessment, or the developer may generate them to use as a training example. The assessors should each score the sample in-basket independently and then meet as a group to talk about the ratings each person gave. The purpose of this discussion is to generate a common frame of reference for effective and ineffective examples of each of the dimensions assessed. The checklists and rating scales described in the previous section help immensely with this process.
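Where assessor ratings from such a calibration session are recorded electronically, a quick computational check can focus the frame-of-reference discussion on the dimensions where assessors diverge most. The following sketch is illustrative only; the assessor names, dimension labels, 1-to-5 scale, and "spread greater than one point" cutoff are all hypothetical assumptions, not a prescribed scoring procedure.

```python
# Minimal sketch of a calibration check on assessor ratings of one sample
# in-basket (all names and numbers below are hypothetical).
import statistics

dimensions = ["planning", "delegation", "judgment", "written_comm"]
ratings = {                      # one list of dimension ratings per assessor
    "assessor_a": [4, 3, 5, 2],
    "assessor_b": [4, 4, 5, 2],
    "assessor_c": [2, 3, 4, 3],
}

# Flag any dimension where assessors differ by more than one scale point.
for i, dim in enumerate(dimensions):
    scores = [rating_list[i] for rating_list in ratings.values()]
    spread = max(scores) - min(scores)
    status = "discuss further" if spread > 1 else "acceptable agreement"
    print(f"{dim}: mean={statistics.mean(scores):.1f}, spread={spread} ({status})")
```

Dimensions flagged this way become the agenda for the group discussion described above.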
Step 8. Administer to a Pilot Group

The pilot-testing session is the final opportunity to ensure that all the materials in the simulation are appropriate and job relevant. The pilot group should be asked to complete the in-basket within the appropriate time limits. This helps the exercise developer ascertain whether enough time has been allowed for the average participant to complete most of the items. It is not necessary to allow enough time for everyone to complete the entire in-basket, as time pressure is often an integral part of the exercise. A critical part of the pilot test should be an evaluation of whether the materials are clear and understandable to the participants. Because all the items are presented in written form, it is important that they be legible, well written, and free of typographical, spelling, and grammatical errors. Of course, the developer may wish to include some errors in the material to reflect the abilities of the memo writer. Additionally, the messages in the items may be written to be precise or vague. For example, a memo that states simply that there is a "problem" with a particular employee may not provide enough information for the participant to make a meaningful decision about how to act. The participant then needs to decide whether to act or to seek additional information. Conversely, items that are too detailed may lead the participant to the correct response. In evaluating the responses to the in-basket items, the developer will be able to gauge whether the items are too difficult or too easy. Ideally, there should be varying levels of performance across individuals, with some doing well, some doing average, and some doing poorly. If all participants in the pilot group struggle with the same items, this is a sign that those items may not be clear and are in need of revision. Finally, the pilot simulation should provide the opportunity for the developer to gauge the job relatedness of the simulation. The participants can be interviewed afterward about their perceptions of the materials. During these interviews, the participants should be asked if the materials seem appropriate to the target job and if they are free from offensive or biased language.
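If pilot responses are scored numerically, the developer can supplement this judgment with a simple item analysis. The sketch below is a hedged illustration with invented scores and arbitrary cutoffs, not a standard from the in-basket literature; it simply flags items on which all pilots did very poorly or very well.

```python
# Hedged sketch of a pilot item analysis: each row is one pilot participant,
# each column is the proportion of checklist behaviors shown on one item.
import statistics

pilot_scores = [
    [0.9, 0.2, 0.5, 0.8],   # participant 1
    [0.8, 0.1, 0.6, 0.9],   # participant 2
    [0.7, 0.2, 0.4, 0.7],   # participant 3
    [0.9, 0.1, 0.7, 0.8],   # participant 4
]

for item in range(len(pilot_scores[0])):
    column = [row[item] for row in pilot_scores]
    mean, sd = statistics.mean(column), statistics.pstdev(column)
    if mean < 0.3:
        note = "most pilots struggled; item may be unclear and need revision"
    elif mean > 0.9:
        note = "item may be too easy or too leading"
    else:
        note = "performance varies; item appears usable"
    print(f"item {item + 1}: mean={mean:.2f}, sd={sd:.2f} -> {note}")
```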
In a police promotion setting, conducting a pilot study may be difficult due to concerns about test security. Generally, promotion exams and simulations should not be revealed to a general police audience before they are actually used. Even if the pilot group does not reveal the contents of the exercises, the mere appearance of a leak or an advantage to a certain group may be grounds for the whole process to be challenged in court. Therefore, in these situations it is often advantageous to conduct the pilot study in a secure setting such as a university. Although this strategy may not necessarily help in identifying the job relatedness of the exercise, it will certainly provide information about the clarity of the materials.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

In a promotion setting, the in-basket simulation is often used in conjunction with other assessments such as a structured interview, paper-and-pencil testing, and a portfolio review. The rich information gathered during the in-basket can be particularly informative in making promotion decisions. In addition to observing how the participant communicates in writing, the participant's skills in a variety of other areas, such as planning and organizing, delegating, and dealing with supervisors, peers, direct reports, and customers, can be assessed as well. These skills are generally hard to assess through interviews and traditional paper-and-pencil testing. Moreover, the in-basket results can be used in a developmental way to provide feedback to individuals who are not yet ready to be promoted. Because the responses to the in-basket are in writing, supervisors can provide direct, behavioral examples to the participant to illustrate his or her strengths and developmental needs. Additionally, specific suggestions can be given for addressing these needs.
Step 10. Solicit Participants' Reactions

When the simulation is used in a promotion setting, reactions may be obtained from the participants immediately after the exercise as part of the post-exercise interview. Alternatively, the participants may be surveyed anonymously, either immediately after the exercise or at some point in the future. In either case, because participants' reactions might be influenced by their performance, it may be best to survey them before they receive the results of the assessment. Questions to ask the participants might include:
• Were the directions clear? Did you understand what you were supposed to accomplish in this exercise?
• Did the items in the in-basket seem relevant for the job to which you were applying?
• Do you believe that using the in-basket as a part of the promotion decision process is fair?
• Which part of the in-basket was the most challenging? The least challenging?
• Do you have any suggestions that would improve this experience for future applicants?

Gathering these reactions can ensure that the simulation experience is acceptable to the participants, and this information will provide valuable insights into how to continually improve the exercise.
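When the reactions are collected on a form, even a small script can turn them into a running summary across administrations. This is a minimal sketch with made-up response data and question keys; the actual survey content is whatever the developer chose above.

```python
# Minimal sketch for tallying yes/no reaction items (hypothetical data).
from collections import Counter

responses = [
    {"directions_clear": "yes", "items_relevant": "yes", "process_fair": "yes"},
    {"directions_clear": "no",  "items_relevant": "yes", "process_fair": "yes"},
    {"directions_clear": "yes", "items_relevant": "no",  "process_fair": "yes"},
]

for question in responses[0]:
    counts = Counter(r[question] for r in responses)
    pct_yes = 100 * counts["yes"] / len(responses)
    print(f"{question}: {pct_yes:.0f}% yes ({dict(counts)})")
```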
Step 11. Conduct Psychometric Evaluation

Past research on the psychometric qualities of the in-basket is somewhat mixed. Some reviews have been critical of the construct validity of the in-basket (Schippman, Prien, & Katz, 1990). Other research has demonstrated that interrater reliability, or the degree to which assessors are consistent in evaluation, is generally good for in-basket simulations (Thornton & Byham, 1982; Thornton & Rupp, 2003). In addition to interscorer reliability, internal consistency may be evaluated if several scores are assigned for each dimension. For example, if there are 10 items in the in-basket that provide an opportunity for the participant to demonstrate planning and organization skills, a dimension rating can be given on each of these items. Then the consistency of these ratings can be examined. However, the internal consistency of in-basket ratings within a given dimension has historically been low (Thornton & Byham, 1982). Validity evidence for the in-basket is generally best when the simulation is created for a specific job (Thornton & Rupp, 2003). In this situation, content validity may be ascertained through expert judgments of whether the in-basket adequately reflects problems and tasks found in the target job. Criterion-related validity may be evaluated by comparing overall ratings or dimension ratings on the in-basket against some external criterion. For example, in a promotion setting, the most appropriate criterion is not current performance, but rather future performance in the new position. It is often possible to gather performance appraisal ratings "for research purposes only" on those participants who are promoted. Alternatively, salary progression may be used as a criterion. Previous research has demonstrated that in-baskets generally show moderate to strong relationships with these criteria (Thornton, 1992; Thornton & Byham, 1982). Additional information on evaluating the reliability and validity of simulation exercises is presented in Chapter 14.
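For developers who want to run these checks themselves, the sketch below illustrates the two statistics just discussed: Cronbach's alpha for the internal consistency of one dimension rated across 10 items, and a Pearson correlation between dimension scores and a later criterion. The data are randomly generated stand-ins for illustration only; with real ratings and real appraisal data, the same code applies unchanged.

```python
# Hedged sketch of the psychometric checks above, on simulated data.
import numpy as np

rng = np.random.default_rng(seed=0)
# 20 participants x 10 item-level ratings (1-5) of one dimension
dim_ratings = rng.integers(1, 6, size=(20, 10)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

dimension_score = dim_ratings.mean(axis=1)
# Stand-in criterion, e.g., later performance appraisal ratings
criterion = dimension_score + rng.normal(0, 1.0, size=20)
validity_r = np.corrcoef(dimension_score, criterion)[0, 1]

print(f"alpha = {cronbach_alpha(dim_ratings):.2f}, criterion r = {validity_r:.2f}")
```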
10 Oral Fact Finding Exercises
In an oral fact finding exercise, the participant is given a short description of a situation that has occurred or a decision that has been made but is now being challenged. The participant is then given a fixed, and often limited, time to gather additional information by asking questions of a resource person. The resource person is someone trained to have extensive information about the situation, and will answer questions posed by the participant, if asked properly. The participant then must make a specific recommendation about what should be done, and finally may be asked to defend that decision. This technique is sometimes called the incident method, because a specific incident has occurred and someone is asking for a clear recommendation on what to do next. The technique is also sometimes called a decision-making exercise, because the participant is required to make a specific decision, provide a rationale, and sometimes defend the decision in the face of opposing evidence. This exercise is different from the case study method, because it is conducted verbally and information is gathered orally from the resource person. Two types of incidents or situations are frequently presented. In the first type, a specific problem has arisen recently and the higher-level authority is seeking advice on how to handle the situation. Usually two clear alternative solutions are being considered. For example, delivery schedules are not being met and the manager is considering whether or not to change the process of shipping out orders. In the second type, a person at a lower level of the organization has made a decision, that decision has been challenged or appealed, and a higher-level authority is seeking advice on whether to support or reverse the original decision. For example, a supervisor has recommended that a staff member be fired, and the decision has been appealed higher up. The participant is given the opportunity to query a resource person to obtain information about the incident in order to render a judgment about what should be done. One example of a fact finding scenario is a situation in which a request to conduct an attitude survey has been turned down by the assistant manager of the plant. Other individuals in the organization are strongly urging the manager to conduct the survey. The plant manager asks an impartial outsider (the participant) to conduct a review and make a recommendation. The resource person is a key element in this simulation, carrying out different functions throughout the exercise. Initially, the resource person is a neutral source for a vast amount of information that can be ferreted out by the participant. For this function, the resource person does not play any role; he or she just answers questions posed by the participant. Following the questioning period, the participant makes his or her recommendations. At this point, in an optional stage of the exercise, the resource person can change his or her function and challenge the participant. The resource person can try to get the participant to reverse his or her decision. Referring to the previous example, if the participant recommends conducting the attitude survey, the resource person can give reasons for not doing so, perhaps by citing evidence that the participant did not uncover. Or, if the participant recommends not conducting the survey, the resource person can give reasons against this decision, again citing evidence not uncovered by the participant. The fact-finding simulation has some characteristics in common with the case study described in Chapter 5. Both call for the participant to analyze information, evaluate alternatives, and render judgments. In both exercises, the participant's ideas may be challenged—by a role player in the case study exercise (if an oral presentation is called for) and by the resource person in the fact-finding exercise (if this phase is included). Differences between the fact finding and case study exercises are summarized in Table 10.1.
Time Requirements
Participant reads short description of the incident and prepares questions: 5-10 minutes
Participant asks questions: 15 minutes
Participant prepares recommendation: 5 minutes
Participant presents recommendation and rationale: 5 minutes
Resource person challenges participant: 10 minutes (optional)
TABLE 10.1
A Comparison of Fact-Finding and Case Study Exercises

Fact-Finding Exercise                     Case Study/Analysis Exercise
Information obtained orally               Written information provided
Focused on one incident/problem           Usually has multiple problems
Short time for analysis and decision      Relatively longer time to read and think
Must ask to get information               All information is presented to everyone
Recommendations presented orally          Written recommendations given (oral report may be given)
BACKGROUND

The fact-finding simulation has roots in the "incident process" described many years ago by Pigors (1976). The incident process is a teaching tool used to train students in methods of inquiry. It provides training in a combination of intellectual skills (e.g., the ability to think clearly about specific facts), practical judgment (e.g., the capacity to make a preliminary decision and then modify a conclusion in light of new information), and social awareness (e.g., the ability and willingness to adjust a decision so it is acceptable to others). The basic process is that students are given a short written description of an incident, then allowed time to work in groups to formulate questions for getting relevant information. They then ask questions of the instructor, who has comprehensive information that will help the students learn the "what/when/where/how" about the incident. The group is then given time to discuss the "why" of the situation and formulate a conclusion. This conclusion can then be challenged by the instructor. This basic process was adapted to form an individual exercise for the assessment of oral fact-finding skills important for many staff members.

STEPS IN DEVELOPING A FACT FINDING SIMULATION
Step 1. State the Purpose of Simulation and Determine the Resources Available

For purposes of illustration in this chapter, we describe the use of the fact-finding exercise as a training technique for developing specific consulting skills. In this application, an information technology organization was training consultants who worked with companies to develop specialized software programs. These consultants typically met with IT directors who were in charge of setting up new management information systems in their organizations. After the general scope of the project had been agreed on, the consultants worked closely with other managers and staff in the relevant units to understand the business process to be managed by the tailor-made software program. The consultants spent considerable time with the client organization understanding the goal of the project and the specific information needs of the units. The purpose of the training program was to enhance the skills of the consultants working in the client organizations. Consulting skills were defined to include the ability to interact with the client to understand the organization's specific needs. The training program included conceptual materials such as a consulting model the organization wished to foster throughout the organization. Additionally, the organization wished to have a "hands-on, experiential" component in the training program. It was decided to use the fact-finding exercise as a method to practice some of the behavioral skills of consulting. Specifically, it was decided to emphasize the skills of actively asking questions and carefully listening to the client's responses. The use of a fact-finding exercise for this purpose is appropriate if certain conditions are met:
• Assessment or development of this type of direct oral questioning is critical.
• Persons are available to serve as resource persons.
• The inherent unstandardized conditions arising from the diverse paths this exercise typically takes with different trainees are not an issue.
Step 2. Conduct Situation Analyses

A solid source of information is needed to generate the background information for the resource person. Our discussion of how to develop a case study in Chapter 5 is relevant here. In fact, virtually all the same information is needed for both types of exercises. In the case study exercise, the information is presented in written form to the participant, who must read and digest it. In the oral fact-finding exercise, the information is held by the resource person and must be obtained by the participant by asking clear questions. The analysis of the IT organization revealed that consultants often had difficulty obtaining information from persons throughout the client organization. Some sources did not know what information would be useful in a software development project. Others were reluctant to divulge what they considered proprietary information. Still others were reluctant informants because they feared they would lose their jobs after installation of a new system. As a result, the consultant could not always trust that the source would completely and willingly provide all the available information. Thus, the consultants had to ask direct, specific questions, and had to listen carefully to be sure the informant was providing all available information. The fact-finding exercise was designed to simulate situations that put a high degree of pressure on this form of oral fact finding.
Step 3. Specify the Dimensions and Difficulty Level

The analysis in the IT organization revealed that good "consulting skills" involved two key dimensions: oral investigation and listening skills. The incident method is a particularly effective method for assessing and developing the skill of obtaining detailed information from people who may be somewhat reluctant to divulge it. The initial incident described to the participant is only a brief kernel of an idea. The participant must ask specific questions to gather relevant information. The situation is set up so that the resource person is neutral, and neither helps nor hinders the inquiry. Thus, the participant must ask clear, direct, and specific questions, or the resource person will not divulge useful information. In addition to the two dimensions featured here, the fact-finding simulation provides the opportunity to assess other dimensions such as stress tolerance, oral communication, sensitivity, flexibility, and decisiveness. Stress tolerance can be assessed as a function of the normal course of events in this simulation: the participant has relatively little information to go on, the resource person gives information only if the proper questions are asked, there is limited time for the inquiry, and the instructions demand that a decision be rendered even if the person has insufficient information. Additional stress can be added if the resource person challenges the initial recommendation of the participant and presents new information not discovered. One important specification for the dimensions in a fact-finding exercise is the level of difficulty to be built into the exercise. Once the developer determines what level of difficulty the job poses, the resource person can increase the difficulty level by:
• requiring the participant to ask very specific questions;
• offering more new information to refute the participant's recommendation;
• criticizing or even belittling the participant.
Step 4. Specify the Features of the Simulation

The information gathered at the analysis phase allows the developer to specify the key features of this simulation. The industry and job that are simulated must be ones with which the participants can identify. The appropriate level of technical information must be incorporated. If most participants do not have a background in manufacturing technology, it would not be appropriate to ask for a decision on whether or not to set up a complicated quality control system. That same content might be included in an exercise for industrial engineering graduates. In the example of the IT organization, the incident involved a decision of whether or not to sign a contract with a vendor to train staff in the use of a new software package. An assistant director of IT has rejected a vendor's proposal, but some department managers believe external staff should do the training. The participant, in the role of consultant, is charged with gathering information and making a recommendation.
Step 5. Prepare the Participants' Materials

The materials for the participant are relatively simple and brief. Typically they include:
• A short paragraph describing the incident—for example, "The assistant director has recommended that the training not be contracted to the vendor. The director has asked you to investigate and recommend whether or not to hire the vendor."
• A description of the position of the participant—for example, "You are a consultant asked to investigate and give your recommendation."
• A description of the function of the resource person—for example, "The resource person has extensive information and will answer specific questions. The resource person is not playing any role."
• The time allowed for each stage of the exercise—for example, "You have 10 minutes to prepare, 15 minutes to ask questions, then 5 minutes to give your recommendations and rationale."

A wide range of content can be built into a fact-finding exercise to match the type of problems encountered in the target job. Incidents can be depicted for any level of target job and for participants with different levels of experience and skill. At a lower level, incidents involving employee behavior, such as violation of a safety regulation or reactions to customer complaints, can be constructed. For professional staff positions where it can be assumed the participants have pertinent knowledge and skills, the incident can call for a somewhat more technical inquiry into incidents involving financial analyses, computer hardware and software, or production processes. At higher managerial levels, the issue may be whether or not to open a new bank branch in a neighboring town or whether to move a production facility to Mexico. Table 10.2 shows information about a variety of potential incidents. The instructions would say that the person named as the "Higher Authority" (listed in the left column) has asked you, the participant, to assume the "Role of the Participant" (listed on the right) and to investigate the "Incident" (described in the center). Appendix J presents some of the materials from an oral fact-finding exercise, including the instructions to the participant and the brief description of the situation. It also contains excerpts from the elaborate set of information available to the resource person. This exercise involves a decision about whether or not to continue work on the development of a computer program.
TABLE 10.2
Two Types of Fact-Finding Exercises: Appeal of Decisions and Recent Incident

Higher Authority | Incident Depicted | Role of Participant

Appeal of decisions:
Marketing director | Campaign proposal rejected | Consultant
HR director | Request to send staff to training | HR specialist
City manager | Special project budget denied | Community member
Sales manager | Customer rejects proposal | Sales rep
Vice president of IT | IT manager has decided to switch accounting systems | Consultant
HR manager | Probationary trainee is released | Intern

Recent problem or incident:
Plant manager | Production rate has declined and manager is considering whether or not to install a new production system | Consultant
Quality manager | Customer complaints have increased and the organization is considering a new quality check | Engineer tech
Executive committee | Two potential locations are being considered for a new branch of a bank | Financial analyst
Supervisor | Should an employee be disciplined for violation of a safety regulation | Manufacturing team member

Step 6. Prepare Support Materials for Assessors, Administrators, Resource Persons, and Role Players

Support for resource persons. The information provided to the resource person must be rather extensive, and must include much more than most participants can cover in the limited time allotted for questioning. The information must include a balance of data supporting both sides of the issue. If the issue is to accept or reject a project proposal, the resource materials must include substantial information indicating that the proposal should be rejected, and substantial information indicating that it should be accepted. If the participant obtains a representative sample of the available information, this balance of information will make the decision difficult for the participant. The balance also provides the resource person ammunition to draw on during the challenge phase when trying to get the participant to reverse the initial recommendation. The resource person is initially a neutral source of information. He or she must be intimately familiar with the information available for dissemination when queried by the participant. The participant will probably be under some time constraint and will be asking many rapid-fire questions. Thus, the resource person needs to be able to give timely responses so as not to impede the participant's progress. The resource person must commit most of the information to memory, but can also refer to written documents to access obscure or detailed data. The resource person must have a gentle but direct way of saying that the participant's question was too general. If the participant asks a vague question such as "What was wrong with the vendor's training proposal?" the resource person might say: "That question is too vague. Please rephrase the question." If the resource person is going to challenge the initial recommendation of the participant, the resource person needs ready access to counter-arguments. The simulation developer can prepare support materials; Table 10.3 shows an outline of counter-arguments in the example of the vendor's proposal.
Support for the Assessors. At the very least, the assessor should be provided with definitions and extended definitions of the dimensions. Behavior checklists and behaviorally anchored rating scales should be developed and will be helpful if the fact-finding simulation is used for any one of a variety of purposes. For training purposes, examples of highly effective behaviors can be used to coach participants on appropriate actions. The two scales for oral problem analysis and listening skills shown in Table 10.4 illustrate these support materials.
TABLE 10.3
Arguments Refuting Recommendations

Participant's recommendation: Do not accept the proposal (i.e., support the assistant director)
Counter-arguments:
• Much time and effort has been given by the organization to prepare the proposal.
• Staff in the organization want to do the training.
• The organization has already made a preliminary announcement to expect the vendor training.

Participant's recommendation: Accept the proposal for vendor training (i.e., do not support the assistant director)
Counter-arguments:
• This will undermine the assistant director, a person with high potential for executive rank.
• If vendors do the training, the organization's own staff can do the other projects.

TABLE 10.4
Behavioral Rating Scales for an Oral Fact Finding Exercise

Oral Problem Analysis
5  Highly effective: Asked questions, each of which called forth multiple items of information from the resource person; sought bits of information related to each other.
4
3  On-target: Asked clear questions in several content areas; used a follow-up technique.
2
1  Below expectations: Asked vague questions that yielded vague answers; did not pursue obvious leads thrown out by the resource person.

Listening Skills
5  Highly effective: Questioning techniques improved as the resource person responded to initial questions; checked own understanding when not sure about the response of the resource person.
4
3  On-target: Nodded head and said okay to indicate he or she had enough information on that topic; recommendations revealed that he or she understood the information obtained in the inquiry.
2
1  Below expectations: Repeated questions, which showed he or she did not listen to responses to earlier questions; did not make questions more specific even after the resource person requested this.

Step 7. Train Assessors and Resource Persons

The assessor plays a relatively inactive role in the fact-finding exercise. The resource person handles the timing of this exercise. The assessor must be quite familiar with the content of the exercise so he or she can understand whether the participant is asking a complete array of questions to uncover the potential information available. In addition to observing the participant, the assessor must observe the resource person to note the leads that are offered but not followed up by the participant. For example, a typical exchange of information may proceed like this:
• Participant: "What was the reason the Assistant Director gave for rejecting the proposal?"
• Resource person: "One of the reasons was that the manager of finance wanted the internal staff to do the training ...."

The participant should recognize that this comment indicates there may be other reasons for the denial, and follow up with a question such as "What were the other reasons?" If the assessor misses the resource person's initial evasive response, the participant's effective or ineffective response may go unnoticed. The above exchange shows that the resource person must be trained quite thoroughly. Some effective training techniques include:
• studying and even memorizing the information in the resource materials,
• answering questions rapidly from different mock participants, and
• observing other role players answer questions to develop consistency in replying to specific questions.
Step 8. Administer to a Pilot Group

A tryout of a fact-finding exercise with a pilot group can be helpful in shaping the exercise by providing the following information:
• How much information can be gathered in the time allotted for questioning.
• Whether the support materials for the resource person are adequate.
• Unanticipated questions asked by participants.
• Whether the counter-arguments are adequate to dissuade the participant.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

The fact-finding simulation has been used for all the purposes discussed throughout this book. To continue the chapter example, we further discuss its use for training consultants. Three-person groups were formed in the training program. The three persons rotated through the following activities: one person practiced as the consultant, a second served as resource person, and the third served as observer/feedback person. After practice, the three changed roles to give each a chance to experience the exercise from different points of view. This procedure is very much like the procedure described in the chapter on one-on-one interaction simulations.

Why Simulation Exercises Fail
Fatal Error 10: An Overly Aggressive Pilot Participant

The Greenwood Hospital developed a fact-finding exercise that was to be used in a program to diagnose and develop the oral investigation skills of nurses, aides, and technical support staff. In the pilot program a male intern was used as a practice participant. After the first practice session, the intern was highly critical of the entire process. He accused the resource person of being too slow in giving information and of unfairly withholding information even when he asked direct questions. He claimed the assessors did not give him credit for behaviors he believed he displayed. He verbally attacked the assessor who gave him feedback. His rationale for these attacks was that assessors might encounter such resistance from regular participants when the program was implemented, and so this was good preparation. Nevertheless, the staff was so discouraged and devastated by this experience that they never implemented the program.

The goal of this application of the fact-finding exercise was to enhance the consulting skills of all parties. The person acting as "consultant" learns from the direct practice experience and from feedback after executing the exercise. The "observer" learns by watching how the consultant conducted the inquiry. The "resource person" also learns by watching the consultant and by realizing how much more information could be turned up. Of course, as the persons rotate positions, the subsequent "consultant" has the advantage of earlier observations, but that is acceptable in this training application. In fact, better execution in the later rounds is evidence that learning has taken place.
Step 10. Solicit Participants' Reactions

When the fact-finding simulation is used as a training device, the developer will want to know if the participant believes that the exercise is a realistic depiction of a job situation that might be encountered and if the feedback was helpful in developing the skills to handle that situation. The developer may wish to ask:
• Did the exercise give you a chance to show your level of consulting skills? • Did it help to observe another person carry out the exercise? • Was the feedback from the observer/assessor helpful? • Did the exercise show you ways to improve your skills? • Do you think you will be able to apply the skills learned from the exercise to your current job?
Step 11. Conduct Psychometric Evaluation

For training purposes (as opposed to selection and promotion applications), the questions of reliability and validity take on a different meaning. No longer is the issue one of predictive accuracy; now it is whether the exercise provides a meaningful venue for practicing and learning a new skill. Successful application is a function of whether or not:
• the exercise is a challenging yet manageable experience,
• the learner receives behavioral feedback, and
• participants learn new ways of conducting the consulting functions covered in the exercise (one simple way to check this last point is sketched at the end of this step).

For skill development to take place, the practice sessions must be part of a systematic training program. One of the most effective training designs is contained in the social learning paradigm described in Goldstein and Sorcher (1974). It is beyond the scope of this chapter to fully describe this process, but highlights are outlined here:
• A conceptual framework for the new skill is provided. In the case of consulting skills, this might entail a model of consulting practices endorsed by the organization.
• A list of specific steps for executing the skills is provided.
• The trainees practice the skills in situations simulating the real-life setting. They are initially given simple situations to deal with, and then given progressively difficult situations.
• While some trainees are practicing, other trainees learn by observing how colleagues execute the skills. These observers also learn by giving feedback.
• Skilled trainers also observe and give feedback.
• The group of trainees and the trainer discuss applications in situations that may be encountered on the job.
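As one hedged illustration of checking whether learning has taken place, checklist scores can be compared across practice rounds. The scale, scores, and "gain" criterion below are invented for illustration; this is not a validation standard prescribed by the training literature.

```python
# Sketch: did trainees show more effective behaviors in a later practice
# round than in their first round? (Hypothetical checklist counts, 0-10.)
import statistics

round_1 = [4, 5, 3, 6, 4, 5]   # effective behaviors shown, first round
round_2 = [6, 7, 5, 7, 6, 6]   # same trainees, later round

gains = [late - early for early, late in zip(round_1, round_2)]
improved = sum(gain > 0 for gain in gains)

print(f"mean gain = {statistics.mean(gains):.1f} behaviors; "
      f"{improved} of {len(gains)} trainees improved")
```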
11 Business Games
Business games are some of the more difficult simulation exercises to develop because of the complexities of this form of simulation. A business game simulates a wide range of situations and problems in a complex organization. It usually involves several participants interacting over a relatively long period of time and making a series of decisions that affect many different aspects of the organization's operations. The situations dealt with are usually quite unstructured and unconstrained. Business games differ from other simulations in a variety of ways, in that they typically involve:
• several units of the organization,
• a longer length of time in the exercise,
• a larger number of participants interacting with each other,
• more unstructured directions on what to accomplish (for example, the participants may choose to reorganize, to change the product mix, to institute new policies, to spend time clarifying organization objectives, and so forth),
• a large, less structured physical space within which to participate, and
• the inclusion of a great deal of complex physical materials in some games.

Business games are usually designed to represent a large number of the aspects of a simulated organization. Whereas an analysis problem may contain information about a manufacturing unit or an office process, and an in-basket may contain a variety of information coming into the office of one general manager, a business game usually includes a number of people throughout the organization simultaneously facing problems in many functional areas such as manufacturing, marketing, human resources, and finance. Whereas the other exercises reviewed in this book can be completed in 1 to 2 hours, business games usually require a minimum of 3 hours and can last up to 1 day or more. Leaderless group discussions and games are both group exercises, but the former typically involves four to six discussants, whereas games may involve 12 or more participants. Another difference is that whereas a group discussion usually involves the participants sitting around a table discussing ideas presented on paper, business games often call for the participants to move around the room (or several rooms) to deal with physical objects. For example, some games call for the participants to design and "manufacture" a product to sell to buyers represented by the assessors or role players who are in some other location. A variant of the business game is the production exercise, in which participants physically work with fairly elaborate hardware, equipment, apparatus, materials, and supplies. These features of the business game pose special problems in the construction of the exercise, the training of assessors, the administration of the exercise, and the observation and evaluation of behavior.

BACKGROUND

The rudiments of business games as assessment techniques can be seen in the evaluation procedures used in the Office of Strategic Services assessment program in World War II (Office of Strategic Services, 1948). Candidates for selection as secret service agents were placed in a variety of complex situations simulating field assignments. In one series of games, candidates had to work with clumsy and lethargic confederates of the assessment team to overcome challenging physical obstacles, for example, moving people and supplies over a "canyon" using boards and ropes. A business game was one of the exercises in the original Management Progress Study at AT&T (Bray & Grant, 1966). More recently, McCall and Lombardo (1978, 1982) developed Looking Glass, a simulation of the many demands on a manager in a large organization, and Streufert has built large-scale simulations of organizations to study managerial decision making (Streufert, Pogash, & Piasecki, 1988; Streufert & Swezy, 1986).
Time Requirements
Participant preparation: 0 to 45 minutes
Participant engages in game: 2 to 8 hours
Post-game interviews: 20 to 30 minutes
Review of results (for training and organizational development applications): 1 to 2 hours
STEPS IN THE DEVELOPMENT OF A BUSINESS GAME
Step 1. State the Purpose of the Simulation and Determine the Resources Available

For the purpose of this chapter, we discuss the development and application of a game as an organizational development (OD) intervention in a large university system. OD is a diverse set of techniques used to foster learning among a large number of members of an organization about the relationships among its individuals, groups, teams, and departments. Whereas training is designed to have an impact on individuals' knowledge and skills, an OD intervention is designed to have an impact on the relationships among units within the organization and on the organization as a whole. An organizational game was chosen as the simulation exercise because it could depict the relationships among several units in the organization and it could be complicated enough to accommodate many more participants than the other simulations discussed so far. The game was developed for a multi-campus university to foster cooperation among researchers at diverse locations. University officials wished to have a means of generating interdisciplinary research and development programs that involved different departments and researchers in different locations across the state and around the world. The university was planning to hold a number of meetings over a period of several months, and wished to have an activity to promote cooperative efforts among principal investigators, research technicians, and administrative staff providing support services. Organizers decided to include a business game as one of the first activities at the meeting among university staff members. There were a number of resources that the university could draw on to build and administer the large-scale simulation game that was envisioned. Faculty members in industrial and organizational psychology, management, and education had experience and interest in developing the organizational development effort, including the game itself. Games have been used for a variety of purposes, including:
• Training individuals in principles and techniques of management (for example, marketing, finance, production).
• Building trust among members of work teams.
• Fostering organizational development across divisions and levels in an organization.
• Conducting research on decision making and interactions in groups.
Step 2. Conduct Situation Analyses

To build an effective business game, the developers must gather a wide variety of information about many aspects of the situation to be simulated. Interviews with university officials wishing to encourage more interdisciplinary research throughout the university system revealed a number of objectives, issues to confront, and opportunities that could be capitalized on. The structure of formal departments, centering on traditional academic disciplines, made communication difficult among persons with related interests who happened to be in different departments. Researchers specializing in one discipline sometimes found it hard to understand the technical jargon of researchers in other disciplines. Additionally, there were instances of duplication of research programs, facilities, and equipment that were uneconomical and unproductive. Such duplication was present not only within each campus of the university, but also across the various campuses in different cities. The various campuses had different histories of research productivity and success in procuring grants, with some of the older campuses having more perceived prestige among some faculty and staff, and some of the newer campuses having less perceived prestige because of their origin as "the aggie college" or "the teacher college." Top-level administration wished to break down some of these barriers and misconceptions through a team-building and organizational development effort.
Step 3. Specify the Dimensions and Difficulty Level

Business games provide the opportunity to assess a wide variety of performance dimensions, depending on the content of the exercise and the instructions given the participants. Some of the dimensions that have been assessed in prior games are: planning and organizing, decision analysis, problem analysis, teamwork, flexibility, organization sensitivity, and extra-organizational perspective. In this chapter we focus on the two dimensions most frequently assessed in games: planning and organizing, and teamwork. Planning and organizing is the ability to establish an appropriate course of action for oneself or others to accomplish a goal. It involves making the proper assignments of people and other resources, and setting time lines to accomplish intermediate steps leading to the end result. The level, complexity, and time frame of planning and organizing differ from lower-level to higher-level jobs. For a team of production employees, planning may involve the assignment of who will work on which phase of an assembly task for the next 3 hours. For an office supervisor, it may be a set of contingencies for getting orders filled if a machine goes down or a data entry person does not report to work some day in the next week. For a middle manager, planning may involve projecting the need for hiring new full-time and part-time staff members to meet seasonal demands for production and shipping of several products for the next 12 months. For an international governmental official, it may be launching a multinational campaign to gain support for a new initiative in delivering health care to developing countries over the next several years. The second dimension we focus on in this chapter is teamwork. Many jobs require employees to work together cooperatively. Tasks are often so complex that no one individual can accomplish the work alone. Teamwork involves exchanging information, jointly deciding how to accomplish a task, giving and receiving feedback with coworkers, and often subordinating one's own goals for the greater good of the group. Teamwork can be important at all levels in an organization. Production workers may need to share information on how to diagnose and repair a malfunction in machinery. Office workers may need to pool ideas on how to improve the accuracy and speed of processing orders. Professionals from accounting, finance, and marketing may need to jointly figure out a new pricing policy. Higher-level executives from various functional areas may need to come to agreement on a new strategy for penetrating new foreign markets. These dimensions were some of the aspects of faculty and staff performance that the university administrators believed needed enhancement. They hoped that better planning and organizing would lead to better utilization of university resources across campuses and that teamwork among faculty and staff in related areas of research would lead to stronger academic and research programs.
Step 4. Specify the Features of the Simulation

The business game for the university OD effort was to be used in the context of a 2-day seminar covering a variety of topics. Plans for the seminar called for the session to open with a statement by the chancellor of the university system to highlight the need for more collaborative research and the administration's goal of increased integration across departments and campuses. The game was to be held over the next few hours of the first day. On the second day, the seminar would involve various small and large group meetings to discuss how to break down the barriers and launch cross-disciplinary and cross-campus collaboration. The business game had to appeal to persons from different disciplinary backgrounds ranging from the physical sciences to the humanities. Certain elements had to be conceptually difficult enough to challenge individuals with advanced education, such as masters and doctoral degrees, and yet at the same time be understandable to other participants with less education, such as technicians and administrative personnel, who are also critical to collaboration in research projects. The simulation had to accommodate different numbers of participants for any one of a series of seminars the university planned to hold. The organization anticipated that as few as 8 or as many as 20 participants might attend a given seminar. Thus the game had to have the flexibility to be run with a variable number of participants.
Step 5. Prepare the Participants' Materials

The game for the university project was built to simulate a large international contract research organization. The simulation presented an organization that conducted research for private companies and public agencies throughout the United States and around the world, including offices in European, Asian, and South American countries. The simulated organization was made up of scientists and engineers in the physical, natural, and social sciences. It was facing a number of problems, including overlap of duties, redundancies in staff and facilities, reduced revenues from research contracts, pressures to minimize costs, and failures to take advantage of opportunities for funding due to lack of communication caused by professional barriers and distance between offices. Rivalries existed between offices around the United States and the world, and among divisions in the same location. Certain areas had been more successful in the past, and "fiefdoms" had been created, some of which were no longer viable because the research field was no longer garnering external support or the staff members were no longer successful in pursuing funding. There were even accusations of "dead wood" in some areas. The exercise materials for this simulation included:
• Background materials about the contract research and development organization itself.
• A mock brochure giving a history of the organization.
• A list of locations, descriptions of research areas, and a sample of current projects.
• Brief biographical sketches of key staff.
• A summary of financial data.
• Organizational charts.
• Personnel records.
• Descriptions of the staff and projects in each office.
• Personnel and organizational policy manuals.

The various problems and issues facing the organization were presented to the participants in the form of reports, memos and letters, financial records, and historical analyses of trends in staffing, funding patterns, and profits. Information about funding opportunities was presented in news releases, statements by funding agencies on new directions, requests for proposals, and newspaper and magazine articles on developing areas of research. Participants were sent a packet of information prior to arrival at the seminar and asked to prepare for the simulation activity. At the start of the game, each individual was assigned a particular position in the organization, and given a job description and additional information about his or her job. Flexibility in the number of participants was provided for in the assignments: some positions on the organization chart listed three or more incumbents in a given assignment. If the seminar included enough participants, all positions were filled; if fewer participants attended, only one slot would be filled. Participants were given time to study their roles and to review the background materials.
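The slot-filling idea just described can be made concrete with a small assignment routine. Everything below (the position names, slot counts, and the fill-one-slot-per-position-first rule) is a hypothetical sketch of one way to implement this flexibility, not the procedure the university actually used.

```python
# Minimal sketch of flexible role assignment: every position gets one
# incumbent before any multi-slot position gets a second (hypothetical data).
positions = [
    ("division director", 1),
    ("lab manager", 1),
    ("principal investigator", 3),
    ("research technician", 3),
]

def assign_roles(participants):
    """Map participants to position slots, single slots filled first."""
    slots = [name for name, _ in positions]                         # first pass
    slots += [name for name, n in positions for _ in range(n - 1)]  # extra slots
    if len(participants) > len(slots):
        raise ValueError("more participants than available slots")
    return dict(zip(participants, slots))

# With six attendees, all four positions are staffed and two extra slots fill.
print(assign_roles([f"participant_{i}" for i in range(1, 7)]))
```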
Other Business Games

Games can be built to simulate a wide variety of situations. Table 11.1 lists a small sample of the situations for which business games have been built, along with the target job for which they were intended. The content and level of conceptual difficulty will differ across these games, and should match the job requirements, as disclosed in the job analysis phase. The materials for the game should present complex problems within the organizational unit being simulated, as well as complex relationships between the target unit and its environment. For example, a simulated crisis in an embassy facing attack involves the coordination of various agencies housed in the embassy such as the diplomatic, cultural, economic, and military corps. In addition, internal units must respond to the threat by laying plans to work with various officials and agencies in the host country, including the president's office, police, military, and other security forces, as well as the terrorist organization about to launch the attack. Another form of the game exercise is called a production exercise, because it simulates the activities of a production operation in a manufacturing or processing plant. This type of game became highly popular when organizations began to expect more teamwork and continuous process improvement from employees at all levels of the organization, including assembly and craft workers. Table 11.2 lists a sample of production exercises.
TABLE 11.1
A Sample of Business Games

Target Job              Content of Game
Government officials    Crisis in embassy faced with potential attack
Executives              International marketing and expansion plans for a multidivisional, global organization
General managers        Planning a strategy for converting a division to a new product line
Supervisors             Revision of assignments and scheduling

TABLE 11.2
A Sample of Production Exercises

Target Job                        Content of Exercise
Manufacturing employees           Assembling toy blocks
Quality control inspectors        Detect defects in final products and recommend changes in QC processes
Assemblers                        Build pumps following prescribed steps
Maintenance workers in brewery    Troubleshoot errors in power transfer unit and make adjustments
Warehouse workers                 Pack boxes to match order forms

Step 6. Prepare Support Materials for Assessors, Administrators, Resource Persons, and Role Players

Many of the typical support materials for other exercises (e.g., clear definitions of dimensions, behavior checklists) are relevant for observing behavior in games used for the OD example. On the other hand, for this OD application, it was not deemed appropriate or necessary to provide behaviorally anchored rating scales, because in-depth, individual assessment, evaluation, and feedback were not a part of this program. Instead, examples of some effective and ineffective behaviors for the two dimensions, planning and organizing and teamwork, were provided (see Table 11.3). These aids helped the observers focus on relevant actions of the participants. Additionally, it is important for assessors/facilitators of games used for OD purposes to have a clear understanding of the issues the organization wishes to highlight through the use of the game. While the observers are watching the game unfold, they are watching for behaviors that illustrate the positive interactions the organization wishes to foster, and the negative interactions the organization wishes to eliminate. At strategic and nondisruptive times throughout the exercise, the assessors can call these behaviors to the attention of the individual participant. After noting an inappropriate behavior, the assessor can offer suggestions for alternative behaviors the organization wishes to foster. In addition to the immediate feedback to the individual throughout the game, feedback can be given to the entire team in debriefing and feedback sessions at the end of the game.

TABLE 11.3
Examples of Effective and Ineffective Behaviors in the Game

Planning and Organizing
Effective behaviors: His suggestions for realigning all foreign operations revealed a new way of gaining efficiencies; her proposal to set up "problem" divisions rather than "disciplinary" divisions was adopted.
Adequate behaviors: Suggested ways to divide up the time in the day of the exercise; suggested a follow-up meeting to reinforce the new directions.
Ineffective behaviors: Did not have any response when asked to help reorganize the research programs.

Teamwork
Effective behaviors: Many of her ideas led to effective, coordinated efforts among the other players; he helped other groups while also being an effective member of his division.
Adequate behaviors: Was cooperative with others on his team; carried out his duties willingly.
Ineffective behaviors: Indicated reluctance to follow the team's decisions even after general consensus among the others was reached; spoke disparagingly about his subgroup's efforts to other participants in the game.

Assessors need to have support materials to help them observe the dysfunctional behaviors and to coach the participants in the direction of more desirable behaviors. These support materials may include organizational mission statements or goals, training materials the organization is using to foster better communication and interaction, journal articles espousing the new norms the organization wishes to follow, and transcripts of speeches of key executives or leaders in the organization.
Step 7. Train Assessors As with any simulation exercise, the assessors need to be trained to observe, record, and evaluate behavior accurately. The principles and practices of assessor training described in Chapter 13 are certainly applicable to games, as they are to all simulation exercises. In addition, special care must be taken in the training of observers in games and other large-scale simulation exercises because of several unique circumstances. Games are usually more complex and unstructured than other simulations. The number of issues that participants face is usually greater in games. Games are usually less structured and the directions are more ambiguous. Because the number of participants is usually greater than other simulations, the dynamics of interactions is more
difficult to track. Thus, participants are likely to take actions that are not anticipated and have not been seen before. The physical arrangements also often make it difficult for assessors to make accurate observations. For example, two participants may huddle together and speak in low tones in one corner of the room, far from the assessors and other participants. The assessor may miss highly significant examples of critical behaviors demonstrating communication skills, planning and organizing, or team leadership. Because the activities may take place in less confined spaces, and may, in fact, take place in different rooms, assessors often have difficulty making thorough observations of each participant's behavior. To overcome these difficulties, assessors need to be more active in ensuring they see and hear what is going on. They may need to move closer to the participants, or "shadow" a participant as he or she moves around the assessment location. Fatal Error 11 illustrates how serious the obstacles to observing behavior can be when facilities are inadequate.

Why Simulation Exercises Fail
Fatal Error 11: Inadequate Facilities—Difficulty in Observing Behavior

A complex organizational game designed for an OD intervention was held in several rooms in a training facility throughout the day. The main activities took place in a large central room, but pairs and teams of participants could go to other locations such as small breakout rooms, lounges, and snack rooms to hold some discussions. The observers had a difficult time keeping track of participants and thus were unable to systematically observe critical interactions. Feedback later in the day was thus incomplete, and participants reacted negatively when observers could not thoroughly and accurately critique their behaviors.

Scoring standards are often less precise in business games, because developers cannot anticipate and classify the wide range of potential actions that may be demonstrated. Such limitations may not be a problem when the exercise is used for development purposes, but can be quite an issue when the exercise is used for administrative decision making. These and other difficulties of making thorough observations of behavior in games make accurate assessment for decision-making purposes problematic. Thornton and Cleveland (1990) provide a critique of the use of games for evaluation purposes. However, this does not suggest that business games are ineffective for organizational development purposes (e.g., as applied in the university example). For the university application, the purpose was to give participants insights into organizational problems and to instill heightened awareness of others' points of view. Participation in the game may be used to
foster a desire to work together more effectively in the future. Such participation may be helpful in sensitizing participants to the issues and making them more ready for subsequent training and development activities. In the university example, participation in the game was a forerunner, or "warm up," for the next day's discussion of ways that the university researchers could be more productive and collaborative. To accomplish organization development goals, the assessors take on roles different from the roles of "rater," "evaluator," and "judge." The roles shift to "facilitator," "process observer," and "coach." The observers need to identify examples of behaviors in individual participants that will make the team more effective. They also need to identify the behaviors of the pairs and groups of team members that make the total organization more effective. In addition to feedback to individuals, feedback must go to the entire group of participants about the general dynamics of the team as a whole. The assessors need to be much more skilled in process observations regarding the dynamics of the group, in contrast to the individual feedback required in individual assessment applications.
Step 8. Administer to a Pilot Group

Pilot testing of games is somewhat more difficult than pilot testing other, simpler simulations, because the developer needs much more time from a practice group. In the university example, it was not feasible to involve faculty and administrators. Thus, pilot tests were run with groups of graduate students from a variety of disciplines. These pilot participants were similar to the intended audience because they were working in the labs of university researchers and would themselves soon be in positions similar to those of their mentors. The pilot group was able to give feedback on the background materials, pointing out significant gaps in the information provided and identifying areas where the situations were not logical and believable. They provided suggestions for revising the issues presented in the game and for elaborating on the information presented. They also gave feedback to the assessors/facilitators on the adequacy of the feedback provided throughout the game and at the end. Assessors got to experience the difficulty of making detailed observations in the sometimes hectic interactions throughout the facility where the simulation was conducted.
Step 9. Use Simulation for Selection, Promotion, Training, Organization Development, or Research

The game developed for this OD effort was used at the semi-annual meetings among university administrators, researchers, and staff. The goals of these meetings were to:
• generate better understanding among participants of the barriers to interdisciplinary research and inter-campus cooperation, specifically in terms of their own beliefs and actions;
• learn new ways of interacting with potential peers to foster cooperation;
• introduce researchers from various departments and campuses to each other;
• form a basis for the next day's sessions where specific, new coalitions were to be formed for future collaboration.
Step 10. Solicit Participants' Reactions

The participants were queried about whether the game helped accomplish the objectives just stated:

• Understanding among participants of their own beliefs and actions which interfered with or could foster interdisciplinary research.
• Seeing new opportunities for collaboration.
• Serving as an effective set-up for the second day's program.

The participants were asked to give their reactions at the end of the second day, and then 3 to 4 months later. The first query served as an initial reaction to the role of the game, whereas the second served as a more effective summative evaluation of the overall, long-range effectiveness of the program.
Step 11. Conduct Psychometric Evaluation

Traditional psychometric analyses of the reliability and validity of this game were not appropriate for the evaluation of its utility as an OD technique. The organization was not interested in the accurate evaluation of the performance of each individual participant in the simulation. In fact, no individual evaluations of performance on the dimensions were recorded or reported. Evaluation was based on whether the game, as an OD intervention, helped the organization achieve its objectives. The second participant survey included questions similar to those in the first survey and additional questions about what specific actions they had actually taken to form new collaborations since the meeting. Respondents were asked what they had done to:

• collaborate with colleagues on their own campus,
• work with colleagues in other disciplines,
• contact potential colleagues on other campuses.
This follow-up survey was designed to evaluate the effectiveness of the overall program and the role of the game itself. Although the causes and effects were difficult to disentangle, the development team was interested in trying to learn about how the features of the game, separate from the second day's activities and other factors, may have contributed to enhanced cooperation. Therefore, the survey included questions about the insights they may have gained from participation in the game and the feedback they received. They were also asked if the game helped them understand and benefit from the program on the second day.
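For teams that want to tabulate such surveys themselves, the two waves can be reduced to per-question averages and compared. The short Python sketch below is our own minimal illustration of that tabulation, not part of the program described here; the question keys, the 1-to-5 agreement scale, and the response data are all hypothetical.

```python
# Minimal sketch (hypothetical data) for comparing two survey waves.
from statistics import mean

# Each dict maps a question key to one respondent's 1-5 agreement rating.
initial_wave = [
    {"own_beliefs_insight": 4, "new_collaboration": 3, "setup_for_day2": 5},
    {"own_beliefs_insight": 3, "new_collaboration": 4, "setup_for_day2": 4},
]
followup_wave = [
    {"own_beliefs_insight": 4, "new_collaboration": 5, "setup_for_day2": 4},
    {"own_beliefs_insight": 5, "new_collaboration": 4, "setup_for_day2": 5},
]

def wave_summary(wave):
    """Average rating per question across all respondents in one wave."""
    return {q: mean(r[q] for r in wave) for q in wave[0]}

for label, wave in [("initial", initial_wave), ("follow-up", followup_wave)]:
    print(label, wave_summary(wave))
```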
12 Integrated Simulation Exercises: A "Day-in-the-Life" Arrangement
In this chapter, we start with the assumption that the user wishes to obtain assessments from more than one simulation. Then we raise the question: When an arrangement of several exercises is set up, should these simulations be separate, in the sense that each one is set in a different organization and has different names and problems, or should the several activities be integrated into one larger simulation set in a single organization? Up to this point in the book we have, for the most part, discussed each exercise separately. In a few places we have referred to ways that the information from one exercise can be carried over to another form of assessment. For example, we have alluded to the arrangement in which the results of a case study can subsequently be presented to an audience or used as the starting point for a group discussion. In this chapter we pursue the idea of linking assessment activities into a much more elaborate format by combining several exercises into one series of integrated activities. This has been referred to as "a day in the life" assessment method because, some argue, the arrangement more closely resembles the situation a person faces in a real day, on a real job, in a real organization than separate exercises do. First, we offer one way of combining several assessment activities. Second, we discuss the strengths and weaknesses of this approach in comparison with the arrangement that presents each exercise separately.
ONE SCHEDULE OF AN INTEGRATED SET OF ACTIVITIES

A variety of schedules can be followed in arranging a set of integrated simulation activities. What follows is just one arrangement.
8:00 a.m.

The assessment process could begin with the participant receiving a packet of materials that is typically included in the background information for an in-basket, game, or case study. This information typically includes:

• A description of the organization and its business, including industry, products, locations, and so on.
• One or more organizational charts, showing an overview of the entire organization and more details on the personnel immediately surrounding the participant.
• The participant's position, along with a job description for the position, for example, a middle management position in a manufacturing organization.
• A calendar, which displays some prior commitments, including a required committee meeting at 4:00 p.m.
• A set of policies and practices.

8:30 a.m.

The participant may then be given the in-basket and allowed to begin work on a set of materials that includes enough items to keep the participant busy for 2 to 3 hours. The in-basket can contain the typical wide variety of letters, memos, reports, and so on depicting relevant financial, production, service, personnel, and other problems. Embedded in the in-basket the participant will find information that is helpful for other interactions that will arise throughout the day. For example, one memo may be a "tip" from the boss that a staff member has information that will be useful in preparation for the 4:00 p.m. committee meeting.

9:00 a.m.
After the participant has been working on the in-basket for 30 to 45 minutes, various interruptions may start to occur. A message may come in via hand delivery or e-mail requesting that a situation be analyzed, much like a case study. The incoming packet may contain additional background information such as one would find in a typical case study. The instructions may call for the participant to complete the analysis and attend a meeting at 2:00 p.m. with a higher-level executive. The instructions may
state that the analysis should take only about 1 hour of the participant's time. The participant now must decide whether to continue poring through the in-basket or shift attention to the request to write the case report and prepare for that presentation. The in-basket can have information relevant to the problem involved in the case study.
10:00 a.m.

About this time, a phone call comes in from a subordinate supervisor who is having difficulty with a technician who has violated a safety policy, yet claims he was instructed to operate in this manner by a trainer. The supervisor wishes to talk with the participant before a noon meeting and get guidance on how to handle the controversy. This interaction is much like an interview simulation with a subordinate.
12:00 Noon

A colleague calls to invite the participant to have lunch to discuss a coordination problem between their two units. The participant must choose between accepting the invitation and continuing to work on the remaining in-basket items and a report due at 2:00 p.m.
2:00 p.m.

The meeting with the VP occurs. The participant is asked to present the analysis of the case study assigned at 10:00 a.m. After the presentation, the VP may ask several questions and challenge the recommendations.
3:00 p.m.

The participant may wish to contact the "helpful colleague" who allegedly has information useful for the 4:00 p.m. meeting. Should the participant not have initiated this meeting by this time, the boss will call and encourage the participant to call the colleague. The interaction with the colleague may then unfold much like a fact-finding exercise. The participant then has more time to respond to in-basket materials and prepare for the 4:00 p.m. committee meeting. A memo in the in-basket can include a reminder of the meeting and specify the agenda and the role the participant is expected to play.
4:00 p.m.

The meeting can be a group discussion among several participants who are asked to work on an assignment like one of those described in Chapter 1 on leaderless group discussions. As the meeting starts, the instructions may call
for the participant to submit a written summary of the committee's deliberations, its conclusions, and any "minority report" if the participant wishes to comment differently from the consensus.

ARGUMENTS FOR A "DAY IN THE LIFE" ARRANGEMENT

This sort of arrangement has not been used extensively in organizations, but may be gaining popularity. Fully integrated exercises were virtually non-existent in the early years of assessment centers in U.S. organizations, and were not reported in Thornton and Byham's (1982) review of all programs up to 1980. In 1999, Kudisch et al. reported that fewer than 18% of their survey respondents used fully integrated exercises. With the advent of computers and high-tech applications to assessment processes, there may be an increase in integrated exercises. There are certainly strengths in this arrangement. The "day in the life" design may be advantageous when the organization wishes to assess the participants' ability to "multi-task," that is, manage one's own time when faced with several heavy and conflicting demands; when integrative abilities are especially important for job and organizational success; or when participants will be faced with very dynamic environments on the job. "Day in the life" assessment processes certainly have a popular appeal because they seem to simulate the work of many managers more closely than separate exercises do. Proponents of this sort of arrangement argue that it has several advantages:

• A set of integrated activities more closely resembles most managers' jobs.
• Introducing interruptions and requiring participants to shift back and forth among tasks induces a realistic level of stress.
• Managers have to use information from multiple sources to solve subsequent problems.
• Participants find the arrangement interesting, realistic, and acceptable.

ARGUMENTS AGAINST A "DAY IN THE LIFE" ARRANGEMENT

There are theoretical and practical reasons why this arrangement may be problematic. From a measurement perspective, multiple opportunities to observe behavior are typically more reliable and valid than single opportunities. With the "day in the life" arrangement, the participant has basically one, admittedly long, opportunity to do well or poorly. If the participant does not comprehend the initial background information, he or she will not be able to do well in all subsequent activities. If a critical item in the in-basket is not read thoroughly, the participant may not have the needed information
to conduct other interactions. If the analysis is done poorly, the participant can hardly do well in the group discussion. By contrast, if the exercises are separate, the participant has the opportunity to do well or poorly on each exercise. Basically, the participant "starts over" with each exercise. In this sense exercises are analogous to separate, independent "items" on a test. The practical disadvantage of the "day in the life" arrangement is that it imposes a single order of administration of the activities, and several assessors must be able to conduct the same activity with all participants at about the same time. In the schedule previously laid out, several assessors must make a phone call at 10:00 a.m., meet in the role of boss at 2:00 p.m., and meet in the role of colleague at 3:00 p.m. These obligations mean that all assessors must be cross-trained to conduct and evaluate all assessment activities. It is, of course, possible to vary the schedule slightly for the participants such that the phone call comes at some point in the time frame of 10:20 a.m. to 10:40 a.m. This introduces some lack of standardization. In some applications, this slight variation is of little consequence and would be acceptable; in other applications, the lack of standardization may be the basis for a legal claim that the assessment technique was not the same for all participants. If the exercises are kept separate, they can be administered to the participants in any order. Thus, the fact-finding exercise can be conducted with Participant A at 9:00 a.m., Participant B at 10:00 a.m., and so on. In parallel fashion, the interview simulation with the boss can be conducted with Participant B at 9:00 a.m., Participant A at 10:00 a.m., and so on. This scheduling allows for "exercise specialists" who are highly familiar with only one exercise. Proponents of this arrangement argue that better assessments come when the assessor and role player can become more intimately familiar with his or her specialty. Specialization is especially important with some exercises where technical knowledge is essential. For example, it is unlikely that very many people could become competent to play the role of a VP of information technology in a bank who must interact with IT consultants inquiring about potential services for the bank.
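The rotation of separate exercises just described is, in effect, a simple round-robin schedule. The Python sketch below is our own illustration of that logic rather than anything the book prescribes; the participant labels, exercise names, starting time, and one-hour slots are hypothetical.

```python
# Minimal sketch of the round-robin rotation described above: each
# "exercise specialist" runs the same exercise all day while participants
# cycle through. All names and times are hypothetical.
from datetime import datetime, timedelta

participants = ["A", "B", "C"]
exercises = ["fact-finding", "interview with boss", "case presentation"]
start = datetime(2024, 1, 1, 9, 0)  # placeholder date; only the times matter
slot = timedelta(hours=1)

# In hour h, participant i takes exercise (i + h) mod n, so no two
# participants ever need the same specialist assessor at the same time.
for hour in range(len(exercises)):
    when = (start + hour * slot).strftime("%I:%M %p")
    for i, name in enumerate(participants):
        ex = exercises[(i + hour) % len(exercises)]
        print(f"{when}: Participant {name} -> {ex}")
```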
SUMMARY

The "day in the life" arrangement has many attractive features. If a developer wishes to use a set of several simulation exercises, an integrated arrangement like the one outlined here may provide an interesting and valuable assessment tool. Before launching such a program, the developer will want to follow all the steps outlined in previous chapters for the individual simulations. A particular challenge is to ensure that the exercises are compatible with each other and that no inconsistencies exist across the exercises. Pilot testing is particularly important with this arrangement to make sure there is logical consistency in the whole set of simulations. The developer will also want to be aware of the potential "contamination" of performance from one exercise to the next, and take steps to avoid double-penalizing a participant who does poorly in an early exercise and then, by necessity, does poorly on subsequent exercises.

Why Simulations Fail
Fatal Error 12: Excessive Cost

A highly sophisticated simulation of a long day in the life of a manager was developed. The simulation facility was a near replica of a modern office in a high-tech organization, including multiple telephone lines, fax, computer, e-mail, and audio-visual displays. The assessment facility possessed a high degree of fidelity with modern offices, but was very costly and not very flexible. Only one person could be assessed at a time; thus, few organizations utilized the facility with the day in the life simulation.
III
Implementation

In this section we discuss two important activities relevant to the implementation of simulation exercises: training assessors and complying with professional standards and guidelines. Assessor training is critical because the assessor plays such an important role in the effectiveness of a simulation exercise. Compliance with standards and guidelines is critical if the simulation will be used for personnel decision making, such as selection or promotion decisions. We conclude with a chapter that summarizes the key points in the development of simulation exercises.
13 Assessor Training
Throughout this book, we have discussed the process of constructing simulation exercises. Although well-developed exercises are important to the overall success of the program, the role of the assessor is essential. Unqualified assessors and inadequate assessor training are two of the most common errors in assessment centers (Caldwell, Thornton, & Gruys, 2003). Using assessors who are not properly trained can lead to negative perceptions of the process by participants and other stakeholders, reduced utility of the program, and legal challenges. These difficulties can be overcome by ensuring that assessors are carefully selected and trained. This chapter provides a step-by-step guide for creating an effective assessor selection and training program.
Choosing the Assessors

Prior to implementing the assessor training program, potential assessors need to be identified. When choosing assessors, the organization must first decide whether to use people who are internal or external to the organization. Internal assessors may be human resource professionals, trainers, organizational development specialists, or line managers. External assessors are often psychologists or consultants who have special training in observing and evaluating behavior. One advantage of using internal assessors is that they usually have greater familiarity with the target job. Additionally, internal assessors reduce or eliminate the need to hire external consultants, which may provide a significant cost savings. Serving as an assessor also helps managers improve their professional skills such as interviewing and appraising performance on
the job (Lorenzo, 1984). A downside of using internal assessors is that they must take time away from their regular duties both to participate in the training and to serve as assessors. Furthermore, when the simulation is used for promotions, it may be difficult to find internal assessors who do not know the participants. Assessors who know the participants prior to the simulation may not be as objective as assessors who do not (Schmitt, Schneider, & Cohen, 1990). Moreover, future working relationships between participants and assessors may be jeopardized if a participant receives an unfavorable evaluation. External assessors are less likely to have preconceptions about the participants and may be able to evaluate participant performance in a more objective manner. Additionally, external assessors are likely to have special experience and training in observing and evaluating behavior. Several studies have demonstrated that when psychologists are used as assessors in addition to or instead of managers, predictive validity is increased (e.g., Gaugler, Rosenthal, Thornton, & Bentson, 1987; Sagie & Magnezy, 1997). A disadvantage of using external assessors is that they may not be as familiar with the requirements of the target job. As such, external assessors are more appropriate when general skills such as leadership or communication are being evaluated rather than specific technical skills. Another disadvantage of using external assessors is the additional cost that is incurred. Naturally, the organization need not use only external or only internal assessors; rather, both types may be used to balance out the advantages and disadvantages of each.

Why Simulations Fail
Fatal Error 13: Assessors Lacked Credibility

A police jurisdiction recruited assessors from other jurisdictions to evaluate a set of problems presented to candidates for promotion to captain as one phase of the biannual promotional exam process. The assessors coming from other cities, counties, and states were not highly qualified individuals. Some did not meet minimal standards for proficiency in training, several had difficulty completing the assessment tasks, they did not appear credible, and their assessments were not accurate. Candidates for promotion commented that the jurisdictions must have sent only their poorest administrators. Subsequently, the results of the promotional list were challenged in court, largely on the basis of the weak assessors.

In considering the number of assessors, it is generally a good idea to train more people than will be needed. Invariably, some assessors will be unavailable to serve in the assessor role at times, and others may not be
able to complete the training. It is far easier to train a few extra people initially than to train additional people later. Once the potential assessors have been identified, attention can be given to designing the training program itself.
Components of an Effective Assessor Training Program

Although assessor training programs may vary somewhat, the following six steps should always be included:

1. Determine how long the training will take.
2. Decide on the format of the training.
3. Design the actual content of the training.
4. Determine the training schedule and other logistics (e.g., location).
5. Evaluate the trainees' performance.
6. Evaluate the effectiveness of the training program.
Each of these steps is discussed in detail in the sections that follow.

Step 1: Determine Training Time

The time required to train assessors may vary from a few hours to several days. Generally, it is recommended that 2 days of training be provided for every day of assessment (International Task Force on Assessment Center Guidelines, 2000). This time may vary depending on the experience level of the assessors. Individuals who have served as assessors in the past and who have demonstrated competence in observing, recording, and evaluating behavior may need less time. On the other hand, inexperienced individuals might require more time. If some assessors are experienced while others are inexperienced, then the training should be targeted to the inexperienced group, with the more experienced individuals perhaps skipping portions of the training. Unfortunately, recent survey research suggests that many organizations are not adhering to these guidelines (Spychalski et al., 1997; Kudisch et al., 1999). The design of the assessment program will also affect the length of training. When the simulation exercises are longer, more elaborate training is needed. Additionally, if several distinct dimensions will be assessed, more time is needed to become thoroughly familiar with each dimension. Assessors often serve multiple roles: role player, administrator, and communicator of assessment results and feedback. If assessors are to be involved in observing complex organizational games for an organizational development program, and in giving feedback to participants during the exercise, additional training in process observation and feedback will be needed. The more roles
assigned to the assessors, the more training they will need, as each role typically entails a different set of knowledge and skills. There may be organizational pressures to keep training time to a minimum: the cost of external assessors, the demands on the time of internal managers, and managers' perceived expertise as evaluators of job requirements and performance effectiveness. We would urge caution, however, against skimping on the training for even the most seasoned of assessors. As Page (1995) noted, many organizations spend large amounts of money developing high-quality exercises only to have the program falter because they were not willing to invest enough time to train assessors properly. If management balks at the length of the assessor training, one solution is to split the training into several smaller components. In one of our client organizations, a full day of training was needed, but management was only willing to allow 4 hours of formal training. To ensure assessors were adequately trained, the session was split into three components: 2 hours of background reading, 4 hours of formal training, and 2 hours of training "on the job" during the pilot testing phase of the exercises. Management was satisfied because assessors were only officially "in training" for 4 hours, and we felt comfortable with the adequacy of the training.
Step 2: Design the Training Format

Adult learners differ from children and adolescents in several important ways (Knowles, 1973). First, adult learners tend to get less out of traditional classroom training and more out of experiential learning. Second, adults typically bring rich experiences into the learning environment, and they will generally learn better if they can relate the new materials to their past experiences. Third, adult learners generally prefer an interactional style of learning where they can share their own ideas and thoughts. When designing the training format, it is important to keep these characteristics in mind. The format should therefore be a mixture of didactic learning, discussion, and experiential activities. There is evidence that learning is enhanced when training participants have the opportunity to practice their skills and then receive feedback (e.g., Goldstein & Sorcher, 1974; Latham & Saari, 1979), and so practice and feedback should be integrated into the experiential activities.
Step 3: Design Content of Assessor Training Program

According to the International Task Force on Assessment Center Guidelines (2000), an assessor training program should, at minimum, ensure that assessors have a thorough knowledge and understanding of the following:
• The organization and job being assessed, as well as the characteristics of the participants who will be assessed.
• The dimensions to be assessed, including their definitions, their relationship to job performance, and examples of effective and ineffective performance for each dimension.
• Assessment techniques relevant to the dimensions being assessed.
• Evaluation and rating procedures.
• Assessment policies and practices of the organization, including how the results of the assessment data will be used.
• Feedback procedures (when appropriate) and the ability to deliver effective and accurate oral and written feedback (if the assessor performs this role).

Additionally, assessors should demonstrate the ability to observe, record, and classify behaviors into dimensions, and when assessors will also act as role players, they should demonstrate the ability to play the role objectively and consistently. Each of these content areas is discussed in detail below.
Knowledge of the Organization, Job, and Participants

When internal assessors are used, they may already have a thorough understanding of the organization and the target job. However, in some cases, internal assessors may be new to the organization and/or may not be familiar with the target job. In addition, strategic job analysis may have revealed that the organization is expanding the role of incumbents in the job. All assessors must become familiar with any new expectations. External assessors are likely to lack detailed knowledge of the organization and the target job. Therefore, it may be appropriate to include some preliminary background reading and discussion to ensure all the assessors have an adequate understanding of these factors. This background information should include a description of the target job, where the job fits in the larger organization, and the expectations of individuals in the job. Additionally, information about the organization and its core values and mission will help the assessors to evaluate the participants in the appropriate context. An understanding of the characteristics of the participants is also necessary. This understanding helps the assessors to know what level of performance to expect and to make ratings accordingly. For example, a greater level of ability on some dimensions would be expected of senior executive participants than of lower- and middle-level managers. It is also important for the assessors to know what skills the participants are expected to have already and what skills they will be expected to learn in the future. For example, in a promotion setting, participants may be expected to have good problem-solving and interpersonal skills, but the organization may not
expect them to have finely honed leadership skills. However, the organization may be interested in assessing the extent to which the participants have the potential to develop these skills in the future.
Knowledge of the Assessment Dimensions

Training the assessors on the dimensions to be assessed in the simulations should entail a thorough review of the definitions of the dimensions. This can be accomplished via background reading, lecture, and discussion. Information gathered during the job and organizational analysis phases will be particularly helpful. However, it is not enough for assessors to simply know what the dimensions mean; they must also develop a shared understanding of how various behaviors relate to the dimensions and how these dimensions affect overall job performance. Training for assessors will build familiarity with behaviors that illustrate the dimensions across a variety of settings, but more importantly, with behaviors that can and should be demonstrated in the specific simulations being used. One technique that has been effective in developing this shared understanding is known as "frame of reference training." Frame of reference training can help to eliminate idiosyncratic standards held by assessors and replace them with a common frame of reference for assigning ratings (Schleicher, Day, Mayes, & Riggio, 2002; Sulsky & Day, 1992, 1994). Pulakos (1984) has outlined the steps involved in frame of reference training:

• Teach the assessors about dimensions that are important to the job and the behaviors that are indicative of each dimension. This entails generating precise definitions of performance dimensions and thoroughly discussing the behavioral exemplars of each dimension. For example, for the dimension of interpersonal sensitivity, specific behaviors might include asking team members for their opinions on issues, showing empathy to others, and not interrupting others when they are speaking.
• Discuss specific behaviors that would indicate various levels of performance. This entails generating examples of behaviors that would earn a particular rating, which helps to provide assessors with appropriate standards for evaluating performance. For example, on a rating scale of 1 to 5, interrupting others while they are talking might earn a score of 1, while listening attentively and asking questions for clarification might earn a score of 5.
• Practice making ratings with the new frame of reference and receive feedback on the accuracy of the ratings.

These steps are discussed in more detail next.
The overall goal of this process is for assessors to end up with a prototype of behavior for each level of performance on each dimension. Although this process takes some time, frame of reference training has consistently been shown to improve rating accuracy and, therefore, the validity of the assessment process (e.g., Schleicher, Day, Mayes, & Riggio, 1999, 2002; Woehr, 1994). When applied to simulation exercises, frame of reference training consists of understanding the behaviors indicative of effective and ineffective performance for each dimension to be measured in each simulation by participants in the target job and organization. Thus, behaviors indicative of effective adaptability in an in-basket may differ from those in a one-on-one exercise. Adaptable behaviors expected of customer service representatives may differ from those expected of supervisors. Likewise, the level and type of adaptability expected among software engineers in a dynamic high-tech organization may differ from what is expected of claims administrators in a medical insurance company.
Knowledge of Assessment Techniques

Assessors need to be thoroughly familiar with the simulation they will be working with, including the instructions to the participants and role players. Perhaps the best method of teaching this information is to have the assessors first read all the relevant exercise materials and then complete the exercise themselves. Participating in the simulations helps the assessors get a sense of what the experience will be like for the actual participants. Participating also gives the assessors a chance to receive feedback on their own strengths and weaknesses. Once the assessors feel comfortable with the content of the simulations, they can move on to practicing observing and assessing others' performance in the exercises. Time should also be allowed for questions and discussion about the exercises.
Ability to Observe and Record Behavior

This phase of the observation training can begin with a presentation of the potential errors in the process of observing human social interactions (Campbell, 1958; Thornton & Zorich, 1980). These errors are described in Table 13.1. Explaining the process that may naturally take place, giving the assessors examples that may occur in the simulation exercise, and later pointing out instances of each error if it occurs in subsequent practice can minimize the effects of these errors. During a typical simulation exercise, the participant exhibits complex behaviors, and the assessor must observe and record this behavior as accurately as possible. By far the best way for assessors to learn this skill is to
TABLE 13.1
Errors in Observation

Systematic error: Example in observation in simulation

Loss of detail through simplification: Failing to note the words a person used to persuade others, and just noting "he is persuasive."
Overdependence on a single source: Making snap judgments before seeing all the relevant behaviors.
Middle message loss: Failing to observe behaviors halfway through an exercise.
Categorization error: Failing to note the errors in arithmetic made by a person who did many correct calculations; failing to see "shades of gray" in a person's performance.
Contamination from prior information: Knowing the person has a reputation as a good manager, focusing on her good behaviors and failing to note her ineffective behaviors.
Contextual error: Letting the situation influence observations.
Prejudice and stereotyping: Taking particular note of a woman's criticism of others because of a preconception that women are "catty."
Halo effect: The influence of a rater's general impression on ratings of specific characteristics.
practice observation and then to receive feedback on the accuracy of the observations. Practice can be achieved by having the assessors watch either a videotape or a live-acted version of a simulation. If a videotape of the simulation is available, the training technique of behavior modeling can be used. As the behavior unfolds on the monitor, the trainer can show on a screen the behavior observation notes the trainees should be writing down. Behavior modeling gives the trainees exact instruction on the types of notes they should be recording. During the simulation, assessors need to watch and listen carefully to what the participant says and does and take notes on these behaviors. Later, the assessors will be able to use the notes to complete rating scales. At this phase, the assessors should merely record the participant's behavior and not make any attempt to classify the behaviors into dimensions or evaluate the effectiveness of the behaviors. Research has demonstrated that assessor evaluations are more accurate when judgments about effectiveness are delayed until all the facts are considered (Hedge & Kavanaugh, 1988). Assessor notes should capture the following information:

• What the participant says—the assessors should write down word for word what the person says when possible, and when this is not possible, they should closely paraphrase what the participant said.
• The nonverbal behavior of the participant (e.g., "The participant pointed her finger at the role player"; "The participant did not say anything during the last 15 minutes of the group discussion exercise.").
• Reactions of other participants in group discussions and games (e.g., "The other participants endorsed three of her recommendations and included them in the final report"; "On three occasions her recommendations were either rejected or ignored by the other group members.").
• The role player's reactions to the participant's behaviors (e.g., "The role player leaned back in his chair when the participant yelled at him.").
• Changes in behavior during the role-play situation (e.g., "The participant started the conversation in a quiet voice and then raised it as the discussion continued. By the end of the discussion, the participant was shouting at the role player.").

Assessors should also be given the following guidelines to ensure the observations are effective:

• Position oneself so as to be able to see and hear the participant clearly. In games, this may require the observer to move around the room. The observer must be close enough to hear yet not be intrusive.
• Record the notes in behavioral terms (e.g., He said, "I'm going to document this incident.").
• Write down specific observations of behavior. The more specific the observations are, the better. Later, assessors will have time to review their notes to look for general patterns of behavior.
• Keep the notes in descriptive, rather than evaluative, terms; doing so will help forestall a rush to judgment.
• Assessors should ask themselves if another observer could confirm what they are recording. If the answer is yes, then they know that what they have written is behavioral, specific, and non-evaluative. On the other hand, if the answer is no, assessors are likely arriving at inferences (i.e., assumptions or conclusions about the behavior observed) that often lead to rating errors. Another problem with inferential statements is that when an individual receives this type of feedback, the individual will not know what to do to improve performance.

In addition, assessors should be trained to avoid the following in their observation notes:
• Statements that attempt to describe the participant's underlying personality (e.g., "She is an introvert."). • Trivial behaviors (e.g., the participant sneezed during the session). Immediately after each simulation, the assessors should take a few moments to finish their notes. Later, when they have chance to review their notes carefully, assessors should code their observations by classifying the behaviors into relevant dimensions and indicating their effectiveness. These steps involve the following: • Write the name of the dimension next to the behavior that best describes it (e.g., a coaching behavior might be: "The participant asked the role player how he could help the employee improve her performance"). • Place a positive (+) or negative (-) sign next to each behavior to indicate if it is an effective or ineffective example of that dimension. Several techniques have been proposed to help facilitate the often onerous task of observing and recording behavior. Some organizations choose to videotape the exercises so that assessors can watch the tapes and evaluate the behavior at their leisure. Organizations report that both assessors and participants generally react favorably to videotaping exercises (Kudisch et al., 1999). Although videotaping can lead to greater numbers of observations, it does not generally lead to more accurate observations (Ryan et al., 1995), and observing and recording behavior from a videotape typically takes longer than observing a live performance because of the propensity for assessors to review sections of the tape multiple times. Another commonly used technique is the behavioral checklist. A sample checklist is shown in Table 13.2. Though checklists can lead to reduced variability between the observations of different assessors, they do not necessarily lead to more accurate ratings (Hennessy, Mabey, & Warr, 1998). A checklist is often helpful in conjunction with the traditional note-taking technique. Assessors can take notes on plain paper during the simulation and then use the checklist as a follow up to assist with classifying the behaviors. Some assessors may find the task of note taking noxious and believe it is unnecessary. Inexperienced assessors may need to be persuaded to take thorough notes. The following reasons may be presented to assessors to convince them to take thorough notes: • Note taking improves accuracy of recall of information and increases judgment accuracy for those taking conventional notes (Middendorf &Macan, 2002). • Behavioral notes increase the accuracy of ratings.
TABLE 13.2
Behavioral Checklist for the Coaching Dimension in a One-on-One Interaction Simulation

(+) Gives role player a chance to express his or her views
(+) Listens while role player is speaking
(+) Clearly outlines what the performance issues are to the role player
(+) Explains how the role player's behavior is negatively affecting the team
(+) Uses examples to illustrate the performance issues
(+) Refers to performance expectations in the job description
(+) Clearly outlines what the role player must do to improve
(+) Gives suggestions to the role player as to how to improve
(+) Gives positive feedback to the role player as appropriate
(+) Asks probing questions to try to find out why the role player is acting out
(+) Obtains the role player's commitment to take action
(+) Assists the role player with developing a plan for improvement
(+) Does not take personal responsibility for the role player's improvement
(+) Clearly outlines how he or she will follow up with the role player in the future
(+) Keeps conversation focused on performance issues without being harsh or unsympathetic
(+) Has realistic perceptions of how the meeting went in the post-interview follow-up questions

(-) Allows role player to steer the conversation off topic
(-) Acts impatient or abrupt with role player
(-) Brings up personal issues (e.g., role player's family) not relevant to the discussion
(-) Does not address one or more of the role player's performance problems
(-) Takes excuses at face value—does not confront role player about the performance issues
(-) Fails to recognize that the performance issues are part of a pattern of problem behavior
(-) Does not refer to standards listed in the job description
(-) Does not clearly say what the role player is expected to do in the future to improve
(-) Takes responsibility for the role player's improvement rather than letting the role player take responsibility
(-) Is not able to get the role player to commit to improving his or her performance
Some assessors may find the task of note taking tedious and believe it is unnecessary. Inexperienced assessors may need to be persuaded to take thorough notes. The following reasons may be presented to convince them:

• Note taking improves accuracy of recall of information and increases judgment accuracy for those taking conventional notes (Middendorf & Macan, 2002).
• Behavioral notes increase the accuracy of ratings.
• Thorough notes enhance the credibility of an assessor when discussing evaluations with other assessors. This is especially important if information from multiple assessors and simulations is to be integrated.
• Clear notes allow a team of assessors to diagnose differential patterns of behavior in participants' responses across different situations.
• If a candidate for promotion receives a low rating, thorough notes of ineffective behaviors make the feedback more defensible and, hopefully, more acceptable.
• In the event of a legal challenge in an employment discrimination suit, the notes provide documentation of the reasons for low ratings.
• Participants are more likely to accept feedback from assessors when the feedback is accompanied by strong supporting evidence in the form of specific examples (Goode, 1995).
• When the assessment is used for developmental purposes, behavioral observation notes help the participant gain insight into which behaviors are effective and ineffective in solving problems and interacting with others.
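As promised above, here is a small sketch of how a checklist like Table 13.2 might be captured and summarized electronically. The data structure and the simple count of (+) and (-) items observed are our own illustration; the book prescribes no scoring formula for the checklist, and the items shown are abbreviated paraphrases.

```python
# Minimal sketch of an electronic behavioral checklist; items are a
# shortened, paraphrased subset of Table 13.2, and the tallying scheme
# is our own illustration, not the authors' procedure.
effective = [
    "Listens while role player is speaking",
    "Clearly outlines the performance issues",
    "Obtains the role player's commitment to take action",
]
ineffective = [
    "Acts impatient or abrupt with role player",
    "Takes excuses at face value",
]

# The assessor ticks the items actually observed during the simulation.
observed = {
    "Listens while role player is speaking",
    "Obtains the role player's commitment to take action",
    "Acts impatient or abrupt with role player",
}

plus = sum(item in observed for item in effective)
minus = sum(item in observed for item in ineffective)
print(f"Coaching: {plus} effective and {minus} ineffective behaviors checked")
```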
Ability to Evaluate and Rate Performance

Once the behaviors have been classified, the assessor is ready to evaluate the performance of the participant and provide overall dimension ratings. To facilitate the rating process, a behaviorally anchored rating scale (BARS) may be helpful to ensure ratings are based on a common framework of behaviors exhibited by the participant. Examples of BARS are included in each of the chapters devoted to the types of exercises. Additionally, the assessors should be trained to avoid the common rating errors described in Table 13.3. Increasing assessor awareness of common rating errors is a step toward reducing them. Prior research has demonstrated that these errors can be further reduced when assessors are given training that allows for practice and feedback (e.g., Latham, Wexley, & Pursell, 1975). An effective way of training assessors to provide accurate ratings is to have them independently generate ratings based on the observations made previously. Then, the ratings can be discussed in a group setting. During the discussion, each assessor should describe the process he or she used to generate the rating and the specific behaviors that were cited as effective and ineffective examples of the dimension. This discussion will highlight potential biases, and the assessors will receive direct and immediate feedback from the group on their ratings. This process of generating independent ratings and then discussing them should be repeated until the assessors are all providing ratings in a consistent manner. Ideally, assessors should have the opportunity to practice assigning ratings to participants at the high, low, and middle ends of the performance spectrum.
TABLE 13.3
Common Rating Errors

Halo
Definition: Ratings are based on a global impression rather than individual evaluations of the participant's performance in each different dimension.
Possible Reason: Relying on a general impression rather than specific behavioral events.
Example: An assessor believes that a participant has effective leadership skills and rates that person highly on communication and interpersonal sensitivity as well.
How to Avoid: Remind assessors to evaluate each dimension independently and to remember that a person can be high on all dimensions, low on all dimensions, or high on some dimensions and low on others. During training, have assessors discuss examples of individuals with different profiles of traits. Caution assessors against letting performance on one dimension influence the score they give on another. However, keep in mind that some people really are very good, or very poor, or average in every area, and so sometimes similar ratings are accurate.

Recency
Definition: Ratings are based on the most recent behaviors rather than giving a rating that reflects all earlier behaviors.
Possible Reason: Giving excessive attention to behaviors at the end of the exercise.
Example: A participant performs adequately throughout the exercise, but then makes a few obvious mistakes at the end of the simulation. The rater then gives the individual low scores overall.
How to Avoid: Remind assessors to take thorough notes during the exercise and review all the behaviors noted when assigning a rating. Check sheets may help to provide a quick picture of overall effective and ineffective behaviors.

Similar-to/Different-from-me bias
Definition: Giving higher/lower ratings based on certain qualities or characteristics of the participant that are similar to or different from the assessor.
Possible Reason: Wanting to support one's own point of view.
Example: A female assessor gives higher scores to women than men, based on the similar gender of the participant.
How to Avoid: The best defense is to be aware of this potential bias. Assessors should make a conscious effort to ignore similarities or differences to themselves and assign ratings based on performance. BARS rating scales can help ensure ratings are based on performance and not on biases.

Stereotyping
Definition: Ratings are based on membership in a particular group rather than on performance.
Possible Reason: Relying on one's general assumptions and personal biases about sex, race, religion, or culture rather than specific behaviors.
Example: An assessor believes that women are less assertive than men and therefore gives women lower scores on leadership.
How to Avoid: A frank discussion of stereotypes can often illuminate these biases. Additionally, assessors should be aware that even positive stereotypes could be harmful. Check sheets and BARS rating scales can help to ensure ratings are based on actual performance.

Overemphasis on negative or positive
Definition: Greater importance is placed on one aspect of a participant's performance, which overshadows other aspects.
Possible Reason: Some behaviors are so dramatic that they influence the assessor unduly.
Example: An assessor weights one instance of negative behavior more heavily than all of the other positive behaviors the participant has displayed.
How to Avoid: Remind assessors that all of the participant's behaviors are important in providing ratings. Use check sheets and BARS to give assessors a comprehensive view of the participant's entire performance. EXCEPTION: singular "knockout" observations may be set forth; for example, blatant racial, ethnic, or sexist statements.

Central tendency
Definition: The propensity to give average ratings to all participants.
Possible Reason: Assessor is unwilling to commit to an evaluation; desiring to "play it safe."
Example: Using the middle score on the rating scale too often.
How to Avoid: During training, practice assigning ratings to high, medium, and low performances. Use frame of reference training to ensure assessors have a common understanding of what defines high, medium, and low performance.

Severity
Definition: The tendency to give low ratings to all participants.
Possible Reason: Being uncomfortable giving positive feedback, even when deserved; having unusually high standards.
Example: Using the low points on the rating scale too often.
How to Avoid: See Central Tendency, above.
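During the practice-and-discussion cycles described above, it can also help to screen trainees' practice ratings for the distributional symptoms of some of these errors. The Python sketch below is our own illustration, with hypothetical ratings and arbitrary cutoffs; the group discussion itself, not a script, remains the real safeguard.

```python
# Minimal sketch for flagging possible central tendency and severity in
# practice ratings (see Table 13.3). Data and cutoffs are hypothetical.
from statistics import mean, stdev

# ratings[assessor] = the 1-5 dimension ratings given during practice
ratings = {
    "Assessor 1": [3, 3, 3, 3, 3, 3],  # suspiciously uniform
    "Assessor 2": [1, 2, 1, 2, 1, 2],  # consistently low
    "Assessor 3": [2, 4, 3, 5, 1, 4],
}

for assessor, rs in ratings.items():
    m, s = mean(rs), stdev(rs)
    flags = []
    if s < 0.5:
        flags.append("possible central tendency (little spread)")
    if m < 2.0:
        flags.append("possible severity (low mean)")
    print(f"{assessor}: mean={m:.1f}, sd={s:.1f};", "; ".join(flags) or "no flags")
```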
Knowledge of Assessment Policies and Practices

Assessors should have an understanding of the policies and practices of the organization with regard to the simulation exercises. Specifically, assessors should know how the simulations will be used (e.g., selection, promotion, training, development, or research) and how the simulation fits into the larger system for that process. For example, if the simulation is used for selection, assessors should know if the exercise will be used early in the process or if it will be the final hurdle before job offers are made. Additionally, assessors should be aware of who will have access to the results of the simulation exercises and how participants will receive feedback. Confidentiality should be emphasized so that the assessors clearly understand that they should not discuss participant performance with anyone other than individuals who are authorized to have access to that information.
Ability to Deliver Effective Feedback

When the simulation is used for promotion, training, or development, participants will expect feedback on their performance, and the assessor who evaluated the performance may be in the best position to deliver this feedback. The manner in which the feedback is delivered has a significant effect on how the feedback is perceived and implemented by the participant. Moreover, feedback can be difficult to give and receive, especially when performance problems were observed. Therefore, training assessors to deliver feedback effectively is crucial. Feedback training should start with a review and discussion of the key components of effective feedback. That is, trainees learn that feedback should be:

• Timely: The feedback meeting should occur as close as possible to the actual simulation so that the information is fresh to both the participant and the assessor. Also, waiting for feedback can be stressful, so it is important to reduce the participant's anxiety as much as possible.
• Objective and behavior oriented: Feedback is maximally effective when it is focused on behavior rather than personal characteristics. For example, rather than telling a participant that he or she has poor interpersonal skills, it is more effective to give the person examples of ineffective interpersonal behaviors. Even positive feedback that is focused on the person can be detrimental (Kluger & DeNisi, 1996; Larsh, 2000, 2001), and so positive feedback should be behavior oriented as well.
• Specific and constructive: The feedback should be based on specific examples that illustrate the participant's strengths and weaknesses on each dimension. Direct quotes or examples of nonverbal behavior will help in this effort. For example, an assessor might say:
For example, an assessor might say: "In the one-on-one interaction simulation you told the role player, 'I don't care about your personal problems.' This was ineffective because ______. A more effective way of handling this might have been to ______."

• Supportive: The participant should view the feedback process as an opportunity for growth and development. Therefore, the feedback should be presented in a supportive manner, by focusing on positive behaviors and avoiding harsh and negative criticism. There is evidence that participants who perceive assessors as caring, personable, and concerned about their professional development are more likely to accept and act on their feedback (Goode, 1995).

• Developmental (where appropriate): The feedback session should also help the participant plan for development by setting specific goals for change along with a timeline. Ideally, the participant will have a supervisor or mentor who can discuss progress toward these goals. Additionally, the assessor could provide resources (e.g., suggest additional classes, reading materials, etc.) that relate to the participant's developmental needs.

• Delivered verbally and in writing: The feedback should include a description of the participant's performance on each dimension and examples of effective and ineffective behaviors in each assessment activity. Initial, verbal feedback can provide the opportunity to discuss strengths and weaknesses, but the participant may not be able to fully comprehend the feedback at that time. A written report provides documentation and the opportunity for the participant to study the assessment more carefully at a later date.

• Open to response from the participant: The participant should be allowed to respond to the feedback, such as by asking questions for clarification. It is important for the participant to understand the feedback and perceive it as being an accurate description of his or her performance on the activities.

Once the assessors have discussed the principles just mentioned, they can solidify them through practice. Practice is probably best accomplished by pairing up the assessors and having them role play delivering feedback to each other. Ideally, the role plays will include delivering both positive and constructive feedback. This technique provides insights into how participants may react to the feedback and how they might feel on hearing an evaluation of their performance.
Ability to Be an Effective Role Player

In some assessment contexts, the assessors will be expected to serve as role players as well as assessors.
It is important to keep in mind that individuals who make good assessors may not make good role players, and vice versa. Therefore, the organization may have to face the possibility that not every assessor will have the skills or desire to be an effective role player, and perhaps only a subset of the assessors may be asked to also serve as role players. The jobs of role playing and assessing are each demanding, so it is not recommended that one individual try to fulfill both duties at once. Additionally, learning to be an effective role player is quite different from learning to be an effective assessor, which adds to the training time required. We have discussed various aspects of role-player training in previous chapters (especially chap. 8). Therefore, that information is not repeated here except to emphasize that the most effective training entails practice and feedback.
Step 4: Scheduling and Logistics

Although it seems like a minor detail, attending to the logistics of the training is crucial to its success. First, training dates and times must be set. Although the schedule often depends on the availability of the assessors and trainers and on organizational constraints, other factors should be taken into consideration. For example, if the training is lengthy, it may be better to break it up into several smaller sessions rather than planning a single, longer session. However, these sessions should occur relatively close together so that information is not forgotten in the interim. Additionally, the training should not be scheduled close to other key events when there is a high likelihood that the trainees will be distracted. In one situation encountered by a colleague, assessor training was scheduled for the same day that the shareholders of the company were due to vote on a large-scale merger. The vote was very close and contentious, and our colleague reported that the trainees were quite distracted.

In addition to ensuring the training schedule is appropriate, the location of the training is also important. The facilities should be easily accessible to the trainees, they should be comfortable, and they should be similar to—or ideally the same as—the facilities that are used during the actual assessments. Restrooms and water fountains should be easily accessible, and the organization may want to consider providing snacks and soft drinks during the training. The location should also be free from distractions. If managers are used as assessors, there is the temptation for them to return to their offices during breaks to check voice mail and e-mail. Therefore, it often helps to use a remote location, such as a training center or hotel, where distractions are minimized.
Step 5: Evaluating Trainee Performance
Throughout the training session, the performance of the potential assessors should be evaluated to ensure that they have mastered the skills necessary to evaluate participant performance. The evaluation may be formal or informal. Formal evaluations often involve a series of performance activities to ascertain whether or not the assessors can carry out their functions. These activities might include:

• A test that asks the person to identify whether each statement in a list of observations describes a behavior.

• A test of whether the person can accurately classify a list of observations into the appropriate dimensions.

• A task of observing a videotape, taking notes, classifying behaviors, and rating performance effectiveness.

Some assessor training programs go so far as to make a formal determination of whether each individual is qualified to serve as an assessor, and to present a written certification of qualification. This evaluation need not be in the form of a formal written exam or performance test. However, the trainers should observe the assessors during the practice sessions and review their practice evaluations to ensure that they are making accurate observations and ratings. At minimum, performance expectations should include (adapted from the International Task Force on Assessment Center Guidelines, 2000):

• The ability to rate behavior in a standardized fashion.

• The ability to recognize, observe, and classify behaviors into the appropriate dimensions.

• The ability to administer the simulation (if the assessor will serve in this role).

Assessors who are having difficulty mastering these skills should be given feedback and additional practice. If after this extra coaching it becomes apparent that some of the assessors are not performing adequately, they should not be allowed to assess actual participants. This potential attrition can be mitigated by training more assessors than will actually be needed. Having more assessors than needed will also prevent the organization from feeling pressured into using assessors who are unqualified.
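To make the classification test concrete, the following is a minimal sketch in Python of how a trainer might score one trainee's classifications against an expert "answer key." The observation statements, the dimension names, and the assumption of a single correct dimension per statement are all illustrative; an organization would define its own materials and its own passing standard.

    # Score a trainee assessor's behavior classifications against an
    # expert answer key. All statements and dimensions are hypothetical.
    expert_key = {
        "Asked the role player clarifying questions": "Listening",
        "Set deadlines for each step of the project": "Planning and Organizing",
        "Interrupted the role player repeatedly": "Interpersonal Sensitivity",
        "Listed pros and cons before choosing a vendor": "Decision Analysis",
    }

    trainee_classifications = {
        "Asked the role player clarifying questions": "Listening",
        "Set deadlines for each step of the project": "Decisiveness",  # a miss
        "Interrupted the role player repeatedly": "Interpersonal Sensitivity",
        "Listed pros and cons before choosing a vendor": "Decision Analysis",
    }

    hits = sum(
        trainee_classifications[statement] == dimension
        for statement, dimension in expert_key.items()
    )
    accuracy = hits / len(expert_key)
    print(f"Classification accuracy: {accuracy:.0%}")  # prints 75% here

Recording each trainee's accuracy in this way also produces the kind of paper trail on assessor certification that becomes important in the compliance discussion of the next chapter.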
Step 6: Obtaining Feedback on the Training

Once the training is complete, feedback on the training process itself should be obtained. The newly trained assessors may not be able to give good feedback until after they have actually served in an assessor role.
Once they have had the chance to apply the skills they have learned, they are in a better position to comment on how well the training prepared them for this role. Questions to ask might include:

• How well did the training prepare you to observe participant behavior in the simulation exercise?

• How well did the training prepare you to classify and evaluate participant behaviors?

• How well did the training prepare you to provide participants with feedback on their performance?

• (Provide a list of the components of the training and ask:) Which components of the training were most helpful? Which components were least helpful?

• What suggestions do you have to improve assessor training in the future?

In addition to these questions, participant reactions to the simulation (discussed in more detail in chaps. 5-12) can provide valuable insights into the effectiveness of assessor training. If the participants consistently cite a given area as problematic (e.g., feedback), then this is an area to attend to in future assessor training sessions.
14 Compliance With Professional Guidelines and Standards
Developers of simulations have several goals when building simulation exercises: reliability, validity, job relatedness, and fairness; face validity and acceptance by participants and other stakeholders; compliance with professional and legal standards; and defensibility if challenged in an employment discrimination claim or lawsuit. This chapter explores many of the challenges of complying with professional and legal guidelines governing personnel assessment techniques when constructing simulation exercises.

GOVERNING DOCUMENTS

All personnel assessment techniques used to make employment decisions must comply with guidelines and regulations contained in the following documents:

• American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

• Society for Industrial and Organizational Psychology. (1987). Principles for the validation and use of personnel selection procedures (3rd ed.). College Park, MD: Author.
• Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43(166), 38290-38309.

• International Task Force on Assessment Center Guidelines. (2000). Guidelines and ethical considerations for assessment center operations. Public Personnel Management, 29, 315-331.

These documents contain many principles, regulations, standards, and prescriptions for designing, developing, evaluating, and implementing personnel assessment techniques. At the risk of oversimplification, we can summarize these requirements into three categories: standardization, reliability, and validity.

• Standardization: Any assessment technique must be administered in a uniform manner such that all participants have the same instructions, environment, and stimulus conditions; scoring must also be standardized such that a given behavior consistently receives the same evaluation.

• Reliability: The score from the assessment must be consistent and repeatable.

• Validity: There must be evidence that the assessment technique is measuring the intended attributes and that inferences from the scores are supported by evidence.
Unique Challenges for Simulations

Although compliance with these regulations in the development and implementation of any personnel assessment technique is often difficult, developers of simulation exercises face special challenges. The very features of simulations that make them attractive for many applications present special difficulties in establishing that they comply with testing regulations. In this section, we describe those challenges and point out ways of overcoming them.

One argument for developing homegrown simulations for a new assessment situation is that the exercise is unique and tailor-made for the organization. If done well, the exercise will have high face validity and acceptability. The downside of this arrangement is the difficulty of carrying out the necessary extensive psychometric evaluation on each new exercise. Few developers and organizations have the resources to provide the same level of extensive evaluation research that is often carried out on the large-scale published tests of cognitive abilities, personality, and interests offered by the national test publishing organizations. So, what is the developer of homegrown simulation exercises to do? We believe that reasonable, responsible, and adequate steps can be taken to establish the adequacy of simulation exercises.
Much of what we recommend has already been covered in the preceding chapters of this book. In essence, our recommendation is to follow the systematic steps for simulation development offered throughout this book, and to document, document, and document each of the steps and findings as you go along.

Why Simulations Fail
Fatal Error 14: Failure to Document

An otherwise effective assessment process was challenged and found wanting by a government compliance officer because of deficiencies in the documentation of the development and evaluation of the process. The development team had done all the right steps, but had not made contemporaneous records of the various steps of situation analysis (e.g., the names and qualifications of the subject matter experts), exercise development (e.g., how the content of the simulation corresponded to job activities), assessor training (e.g., how the assessors were trained and certified to be competent), and how the observations were recorded and scored (e.g., what paper trail was created leading to the evaluations of individuals). Because the organization was unable to adequately reconstruct all these steps in the process of development, it had to abandon an otherwise effective operation.

We now summarize these actions in terms of the three key areas of standardization, reliability, and validity. Although there is no guarantee that the assessment resulting from use of the simulation will satisfy professional and legal standards, following these suggestions will increase that likelihood.
Standardization

Standardization of conditions in simulations is particularly challenging because of the integral role played by human beings at several stages of the assessment process. The administrator of the exercise can influence the behavior of participants through the way in which instructions are given. Because some simulations are potentially ambiguous about the way in which participants are to respond, participants may be looking for some hint as to what is really important. They may seek clues from the administrator, and if the administrator is not consistent from one participant to the next, participants may be led in different directions.

Developers can minimize the potential for unstandardization and maximize the consistency of exercise administration by writing clear, verbatim instructions for the administrator.
This is, of course, common practice in test manuals, and it is highly recommended for simulation exercises as well. Another technique employed by some administrators is to provide the instructions on a videotape that is shown to each new participant or group of participants.

The assessors are another potential source of variation in the administration of a simulation exercise, and thus subject to challenges of lack of standardization. For example, during presentation exercises, awkward or funny incidents sometimes occur. How should the assessor respond? One may try to be helpful or laugh, whereas another may remain stone-faced. Assessor training must address these sorts of issues and provide guidance on how all assessors should react when unusual or untoward events occur.

Role players present another source of potential variation in the execution of a simulation exercise. If role players are not consistent in interacting with participants, then standardization may suffer. When role players are used, two types of inconsistency can occur. First, a given role player can behave differently over time across participants. Second, one role player can behave differently from another role player. Consistency can be maximized by providing role players with guidance on many anticipated situations and by thoroughly training role players in a group. Training should include the opportunity for all role players to see each other enact a variety of situations. Consistency over time can be controlled by not overworking role players to the point that they become fatigued or stressed.

A final potential source of unstandardization is the behavior of other participants in group discussions and games. A mildly reticent participant may behave rather openly in one group of quiet and inarticulate colleagues, but not speak up at all in another group of vocal and assertive companions. Hopefully, random assignment of participants to groups ensures some variety of dispositions in groups. In addition, instructions in the group discussion exercise can call for each participant to voice his or her ideas before open discussion. These steps may minimize concerns for standardization, but some assessment applications are so susceptible to charges of unstandardization that some organizations have decided not to use group discussions or games because of the inherent variation in the behavior of different groups of participants.

Standardization is particularly important for applications where the organization has some strong indication that an administrative or legal challenge may ensue. For example, some police jurisdictions have a history of legal challenges to each selection or promotional program. One common charge is that candidates were not treated in a fair manner, which often translates into a question of standardized administration of the screening process.
If the administrator gives different instructions, or an assessor, role player, or resource person answers a question in a slightly different way, the allegation of differential treatment is hard to refute, even if the alteration is minor and inconsequential.
Reliability

Some of the traditional methods of assessing the reliability of paper-and-pencil tests are not appropriate for simulation exercises. Simulation exercises do not have several discrete "items" that can be statistically analyzed for difficulty level, discrimination value, and correlations. Thus many of the techniques to examine internal consistency cannot be applied. Specifically, split-half and coefficient alpha estimates cannot be computed. In addition, test-retest methods are often inappropriate because it may be difficult to test the participants under the same conditions at two different times. Fellow participants in a leaderless group discussion or business game may not behave similarly, and surely the group will remember ideas discussed in the earlier session and start with a new understanding of the issues involved. The content of exercises like the in-basket and interview simulation is often unique, and participants may remember actions they took in dealing with selected problems encountered. Studying the equivalence of alternate forms may be a viable strategy for evaluating reliability, but slight alterations in the setting, the specific problems embedded in the exercise, or the way that a role player responds may render the alternate exercise different enough to make the assumption of parallel forms untenable. Another practical problem is that most exercises are so time consuming that it is infeasible to gather enough data on appropriate samples of participants.

So, what avenues are available for establishing the reliability of the assessment process? It must be remembered that for most simulation exercises, the assessment process involves assessors taking notes of behavioral observations, classifying behaviors into dimensions, and rating performance. The final "scores" are the ratings of performance dimensions made by the assessors. Therefore, providing evidence that assessors can consistently carry out these functions is evidence of reliability. In assessor training programs, it can be determined whether or not assessors have reached a standard level of proficiency in observing, classifying, and evaluating behavior. These standards can be met by providing a series of "tests" in the training program before assessors are "certified" to be competent in their various roles. Assessors can be required to show that they can:

• recognize whether a statement about performance in an exercise is a true behavioral statement, rather than a judgmental statement;

• watch a standardized stimulus of behavior portrayed on videotape and write down behavioral statements;

• classify behaviors into appropriate dimensional categories;
• rate performance effectiveness accurately.

Lists of behavioral statements that have been determined to represent defined dimensions of performance can be presented to assessors, who are asked to classify the behaviors into appropriate categories. Accuracy of classification can then be determined. Finally, multiple assessors' ratings of the level of performance on dimensions can be compared with each other and with "true scores" rendered by experts. The latter test might also be considered evidence of validity, and is discussed as such in a later section.

It is relatively easy to informally study whether assessors agree in their ratings, and thus it is surprising that more organizations do not do so. Surveys (Kudisch et al., 1999; Spychalski et al., 1997) have found that only about one-half of the respondents evaluated reliability. When they did so, they examined agreement among assessors.
Inter-Rater Reliability and Inter-Rater Agreement

Formal, statistical evidence of inter-rater reliability and inter-rater agreement can also be gathered. Inter-rater reliability and inter-rater agreement are similar but distinct concepts. Inter-rater reliability refers to the correlation of ratings given by two or more raters to a set of ratees, whereas inter-rater agreement refers to agreement in the absolute level of ratings. Pattern A in Table 14.1 presents a set of ratings of five candidates by two raters that show a high correlation but low agreement. By contrast, Pattern B presents ratings that show both high correlation and high agreement.

TABLE 14.1
Two Patterns of Ratings of Five Candidates by Two Raters

                  Pattern A             Pattern B
Candidate     Rater A   Rater B     Rater A   Rater B
    1            7         5           1         1
    2            6         4           6         6
    3            5         3           5         5
    4            4         2           4         4
    5            3         1           3         3

Inter-rater reliability is an indication of consistency and consensus in ratings. It examines the similarity of the rank order of the ratees across the raters. It can be calculated by computing the Pearson product-moment correlation of the ratings given to a set of candidates by a pair of raters, or by computing the intraclass correlation of ratings from a set of three or more raters (Guilford, 1954; McGraw & Wong, 1996; Shrout & Fleiss, 1979).
This process is comparable to computing coefficient alpha to examine the consistency among a set of test items (Cronbach, 1990). More complicated methods to examine inter-rater reliability are possible with generalizability studies (Cronbach, Gleser, Nanda, & Rajaratnam, 1972). In fact, the Standards for Educational and Psychological Testing (AERA, 1999) emphasizes that when test scores are a function of raters' judgment, a generalizability study should be conducted. The Standards advocates the use of generalizability studies to provide estimates of the amount of variance due to different raters, as well as different tasks or different occasions of measurement. Methods for conducting generalizability studies are beyond the scope of this book, but the interested reader can consult other sources (Crocker & Algina, 1986; Shavelson & Webb, 1991).

Inter-rater agreement examines the similarity of the absolute level of ratings given by multiple raters. Inter-rater agreement can be calculated in a number of ways, several of which are described in Tinsley and Weiss (1975). When the data are ratings on an ordinal or interval scale (which is typical of exercise ratings), Tinsley and Weiss recommended the method described by Lawlis and Lu (1972). This approach allows the investigator to distinguish between important and unimportant disagreements and then provides a chi-square test of the significance of inter-rater agreement. Still other methods of estimating inter-rater agreement are presented by James, Demaree, and Wolf (1993).
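To make the reliability/agreement distinction concrete, the following is a minimal sketch in Python (3.10 or later) that reproduces the statistics for the two patterns in Table 14.1. The mean absolute difference between raters is used here only as a simple illustrative index of agreement; it is not a substitute for the formal procedures of Lawlis and Lu (1972) or James, Demaree, and Wolf (1993) cited above.

    # Inter-rater reliability (correlation) versus inter-rater agreement
    # for the two rating patterns in Table 14.1.
    from statistics import correlation, mean  # correlation requires Python 3.10+

    patterns = {
        "Pattern A": ([7, 6, 5, 4, 3], [5, 4, 3, 2, 1]),
        "Pattern B": ([1, 6, 5, 4, 3], [1, 6, 5, 4, 3]),
    }

    for name, (rater_a, rater_b) in patterns.items():
        r = correlation(rater_a, rater_b)                          # reliability
        mad = mean(abs(a - b) for a, b in zip(rater_a, rater_b))   # agreement
        print(f"{name}: r = {r:.2f}, mean absolute difference = {mad:.1f}")

    # Pattern A: r = 1.00, mean absolute difference = 2.0 (rank order is
    #   identical, but Rater B is uniformly two points lower, so absolute
    #   agreement is poor)
    # Pattern B: r = 1.00, mean absolute difference = 0.0 (consistent and
    #   in agreement)

With three or more raters, the intraclass correlation noted above is the analogous reliability index and is available in standard statistical packages.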
Validity

Organizations using any testing device to make employment decisions about applicants or employees must provide evidence that the techniques are valid, job related, and nondiscriminatory. Validity evidence can come in a variety of forms, depending on the inference being made from the test scores. For selection and promotion purposes, the organization typically is making the inference that test scores are predictive of future job performance or some other work outcome. If ratings from simulations are used to help make decisions about whom to hire or promote, evidence of their ability to predict subsequent performance is critical.

We subscribe to the modern, unitary view of validity described in the Standards for Educational and Psychological Testing. A variety of types of evidence can support the validity of inferences made from test scores. Evidence can include past research on the assessment technique, documentation that the technique was developed carefully, evidence that the scores can be derived consistently, evidence that the technique measures relevant attributes, and correlations with related measures and with measures of job performance. Methods for generating many of these lines of evidence are described throughout this book, and in the next section.
Evidence of Correlations With Criteria

For some purposes, the statistical correlation of test scores and on-the-job criterion measures (e.g., job performance) is the ultimate evidence of validity. Ideally, criterion-related validity should be demonstrated for simulation exercises. In fact, considerable evidence of criterion-related validity has been collected on simulation exercises (Thornton, 1990; Thornton & Byham, 1982; Thornton & Rupp, in press). But this requirement is not a reasonable expectation for the majority of homegrown simulation exercises. It is not feasible for every organization to conduct a predictive validity study for each new simulation exercise it develops. It is unlikely that a compliance agency (such as the Equal Employment Opportunity Commission or the Office of Federal Contract Compliance Programs) or a judge hearing an employment discrimination suit would require predictive validity data. Other forms of evidence may be quite satisfactory.
Evidence of Representative Content

Central questions to be answered before delving into the types of content validity evidence that might be gathered are:

• Is content validity evidence adequate to validate simulation exercises for selection and promotion purposes?

• Does this type of evidence meet the EEOC Uniform Guidelines and other professional standards?

We believe the answer to both these questions is yes. The EEOC interpretation of the Uniform Guidelines states that:

Whatever the label, if the operational definitions are in fact based on observable work behaviors, a selection procedure measuring these behaviors may be appropriately supported by a content validity strategy. (No. 75 of Official Questions and Answers)
Thus, the process of operationalizing the dimensions to be assessed in simulation exercises is a critical step in satisfying the validity requirement for simulations. In practice, evidence of the content representativeness of the simulation to the target job is the most frequently cited evidence to support the job relatedness of simulation activities. What we have said in the chapters on the development of each simulation exercise describes many of the processes of generating content validity. Evidence of content validity of simulation exercises can be shown by:
• analyzing the job and situation thoroughly,
• matching the content of the exercise to the problems encountered on the job,
• operationally defining the dimensions to be observed,
• building helpful aids for the assessors, and
• training and certifying the assessors.

All these procedures go together to provide support for the contention that the ratings from the exercises are related to job performance on the target job. In this section we explore in more detail some of the specific content-oriented evidence that can be gathered to satisfy professional and legal standards for exercise validation.

Figure 14.1 shows a model of the relationships that can be established in the process of content validation of simulation exercises. Five important linkages are displayed, and evidence should be marshaled to substantiate each one. Links 1, 2, and 3 are established during the formative evaluation phase, when exercises are being constructed and pilot tested; Links 4 and 5 are established during the summative evaluation phase, after the exercises have been finalized and implemented. Linkage 1 is supported if the content of an exercise is selected to represent the important situations and problems encountered in activities of the target job. Linkage 2 is also established with job analysis and competency modeling. Job experts should have provided information to substantiate that certain attributes/dimensions are critical for effective job performance currently and in the future. Evidence supporting Linkages 1 and 2 should be documented in the situation analysis phase of simulation development.
FIG. 14.1. A Model of Content Validity Evidence of Simulations.
Linkage 3 is established through the process of specifying what behaviors in the exercises demonstrate effective performance on the dimensions. Through the process of developing behavior checklists and/or behaviorally anchored rating scales, the simulation developer has taken an important step in establishing Linkage 3. Certifying that assessors can reliably observe and rate the behavior of participants in the exercise on the specified dimensions provides further evidence supporting Linkage 3.

Linkages 4 and 5 can be supported by asking subject matter experts (SMEs) to provide judgments of the relevance of the problems, situations, and challenges in the simulation exercise to the job (Linkage 4) and judgments of the importance of the dimensions being evaluated to effective job performance (Linkage 5). Persons qualified to serve as SMEs at this point may be experienced managers or persons trained as assessors. For the judgments of SMEs to be credible evidence, the SMEs must have both experience with the job and knowledge about what the simulation exercise is capable of measuring. Therefore, it may be necessary to collect these judgments after assessor training. At the very least, persons knowledgeable about the job must be given a thorough explanation of the exercise, and they probably will need to see participants complete the exercise.

First, SMEs may be asked to judge the relevance of the content of the exercise to the job activities in the target job. One method of systematically gathering this evidence is to ask the SMEs to indicate which task or responsibility in the job description is represented in the problems contained in the exercise. Figure 14.2 shows excerpts from a job description of a position in a brewery and parts of the questionnaire used to solicit judgments of the similarity of the exercise to job tasks. In the example shown, the average rating of the SMEs is indicated by an underline; the SMEs judged that the content of the exercise was highly similar to tasks 1, 2, 3, and 5.

Instructions
1. Examine the content of the simulation exercise: "Marshall Planning Group."
2. Think about the activities involved in the tasks listed in the job description for Brewhouse Specialist.
3. Rate the similarity of the content of the exercise to the activities in each of the tasks in the job.

FIG. 14.2. Illustration of SME Judgments of Content Similarity.

Second, SMEs can be asked to provide judgments of the relevance of the dimensions evaluated in the exercise to the attributes needed for effective job performance. Figure 14.3 shows excerpts from a form used to obtain such evidence. In the example, Dimensions 1, 3, and 4 were judged to be very important or essential.

Instructions
1. Examine the definitions of the dimensions being assessed by the exercise "Marshall Planning Group."
2. Think about the tasks and activities involved in the job of Brewhouse Specialist.
3. Rate the importance of the dimensions listed below to the job.

FIG. 14.3. Illustration of SME Judgments of Dimension Relevance.

The process of establishing content validity evidence is not easy. The value of this type of evidence for the evaluation of simulation exercises used for human resource decision making in selection and promotion is hotly debated. One argument against the process is that evidence of content relatedness is relevant only for highly objective forms of testing (e.g., multiple-choice formats) that measure concrete characteristics such as knowledge and specific skills. An argument for the process is that this evidence is relevant to any measure so long as the attribute being measured can be defined specifically, behaviors illustrating effective and ineffective performance can be observed, and the performance dimensions can be reliably evaluated. It is our position that evidence of content relatedness can be gathered by following the steps of exercise construction outlined here and by gathering systematic judgments from credible subject matter experts.
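Where SME ratings of the kind shown in Figures 14.2 and 14.3 are collected, simple aggregation can document the judgments. The following is a minimal sketch in Python; the task labels, the 5-point similarity scale, and the 4.0 cutoff for "highly similar" are illustrative assumptions, not requirements drawn from the Uniform Guidelines.

    # Aggregate hypothetical SME content-similarity judgments.
    # One rating per SME for each job task: 1 = not similar, 5 = very similar.
    sme_ratings = {
        "Task 1": [5, 4, 5],
        "Task 2": [4, 5, 4],
        "Task 3": [5, 5, 4],
        "Task 4": [2, 3, 2],
        "Task 5": [4, 4, 5],
    }

    CUTOFF = 4.0  # assumed standard for "highly similar"
    for task, ratings in sme_ratings.items():
        avg = sum(ratings) / len(ratings)
        verdict = "supports content linkage" if avg >= CUTOFF else "review exercise content"
        print(f"{task}: mean similarity = {avg:.1f} -> {verdict}")

In this illustration, Tasks 1, 2, 3, and 5 clear the cutoff, mirroring the pattern of judgments described for Figure 14.2. Retaining the ratings, the SMEs' qualifications, and the standard applied is part of the paper trail discussed under Fatal Error 14.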
Construct-Related Validity Evidence

Considerable research has been carried out on many of the simulation exercises described here (see Thornton & Byham, 1982, and Thornton & Rupp, 2003, for reviews of much of this literature). This research has shown what specific constructs are typically measured in many of the simulations described in this book. For example, research shows that the group discussion technique measures emergent group leadership, and that case studies and games measure problem-solving skills.

Organization simulations are often used in assessment centers. Along the way the reader may have been wondering whether and when we address the construct validity of assessment centers—a controversial topic that has occupied the assessment center community for over 20 years. We do not provide an in-depth discussion of this topic because our focus is on the construction of individual organizational simulation exercises. Such exercises can be used in combination with other simulation exercises or in conjunction with other assessment techniques (e.g., tests or interviews) in assessment centers, but the individual exercises we describe here can also be used singly for assessment purposes, for training purposes, or as vehicles for basic research investigations. These other applications do not require one to confront the controversy of "the construct validity of assessment centers."

Even though we do not provide an extensive review of the controversy over the construct validity of assessment centers, a brief summary of the major points of view is appropriate. Two positions have been staked out, both of which are supported by considerable empirical research. The first, "traditional," view is that dimensions (i.e., human attributes) form the building blocks of the assessment center process, exercises provide opportunities to observe behavior relevant to the dimensions, assessors can observe behavior and classify these behaviors into the defined dimensions, and assessors can render ratings of performance effectiveness on the dimensions with adequate reliability and validity. Research supporting this view can be found in numerous sources (Ladd, Atchley, Gniatczyk, & Baumann, 2002; Lievens, 1998, 2002; Schleicher et al., 2002; Thornton, 1992).

The second, opposing view is that exercises form the building blocks of assessment center ratings. This position asserts that the exercises are low-fidelity work samples and that judgments by assessors reflect performance in exercises and not dimensions.
Supporters of this position point to evidence that assessors' judgments of performance on several dimensions in several exercises tend to show larger correlations among dimensions within an exercise than between evaluations of the same dimension in different exercises. Thus, this position asserts that assessment centers do not show convergent and discriminant validity; that is, there is a lack of construct validity. Research supporting this view can be found in numerous sources (Kolk, 2001; Lance et al., 2000; Lance et al., 2002; Sackett & Dreher, 1982). Proponents of this view have advocated that assessors make ratings of alternative constructs such as roles, activities, or overall exercises.

We subscribe to the traditional explanation that dimensions are the basic building blocks of assessment centers. Our reasons are many. First, we believe the preponderance of research evidence supports the position that assessors' evaluations of dimensions demonstrate a wide variety of evidence of validity, as that term is used in the modern psychometric literature (AERA et al., 1999). This evidence includes evidence that has previously been labeled "construct validity." Second, there is a lack of a systematic body of evidence that supports other approaches to the structure of assessment centers. For example, there is little evidence that ratings of alternative constructs demonstrate reliability, validity, or usefulness. More specifically, there is little evidence of correlations of ratings of these constructs with independent performance criteria, or evidence to support the usefulness of assessments and feedback structured around something other than dimensions. Third, dimensions, as we have labeled and described them here, are part of the natural language human beings use to describe work performance. Systematic assessment of these dimensions advances managers' understanding of organizational performance. Fourth, behavioral feedback to participants after participation in an assessment center structured around dimensions provides guidance on areas of weakness that can be generalized to a wider range of organizational situations, compared to feedback structured around tasks in specific exercises.

Even if one does not agree with our position on this admittedly complex and controversial issue, this book will still be helpful. If the developer wishes to use simulation technology, he or she is still faced with the task of constructing a meaningful exercise. Following the alternative view may lead one to call for assessors to evaluate observed behaviors in terms of constructs other than dimensions. The investigator may ask assessors to evaluate the roles that participants carry out, the steps or tasks that are carried out in the exercise, or the overall performance effectiveness in the entire exercise. The latter has been called an "exercise rating."
Summary of Validity Discussion

Establishing the validity of a testing device involves gathering a wide variety of evidence of its effectiveness. When validating a simulation exercise, this evidence includes:

• Documenting all the steps of analyzing the target job.
• Carefully constructing the stimulus materials in the exercise.
• Preparing clear observation and evaluation aids.
• Thoroughly training assessors and certifying their competence.
• Administering the process in a standardized manner.
• Accumulating the historical evidence of past research on the type of exercise.
• Building the new exercise to closely resemble previous exercises in this genre, so that the accumulated evidence from earlier validity research can be used to support the validity of the use of the new exercise.
15 Summary
Developing a simulation exercise can be quite challenging and rewarding, and even enjoyable. It calls on skills to apply many principles of measurement and assessment, a fair amount of creativity to think up interesting situations that are challenging, realistic, and fair, and a good deal of conscientiousness to work through the many details of writing exercise materials for the participants and the supporting materials for administrators, assessors, and role players. We believe this book provides guidance to people undertaking this exciting aspect of human resource management.

The key principle that governs all the work in developing simulation exercises is to create a situation that elicits behavior from the participant on dimensions of performance relevant to organizational effectiveness. The developer must create a situation that elicits behavior relevant to at least some, but certainly not all, performance dimensions. At the same time, the developer must not create a situation that elicits irrelevant behaviors. If formal written communication skills are not relevant to effective job performance, the developer must avoid an exercise in which writing skills may influence performance. If the job situation does not impose strict time pressures on performance, the developer must avoid highly speeded exercises.

The situation to which the participant is responding in a simulation exercise goes far beyond the written case materials handed out. Many other aspects of the assessment situation can influence the results, including the instructions given by the exercise administrator, the comments and behavior of the assessors, and the actions of the role player or resource person, if these persons are involved.
The model and steps outlined in this book help the developer build an effective simulation exercise. However, even the most well-developed exercises are useless if they do not meet the needs of the individuals actually using the results or of the various stakeholders who may be affected by the outcomes. The developer will want to clarify the objectives of the program that uses the exercise and the users' expectations about the exercise. It is important to obtain clear understandings with key stakeholders such as the executive committee, managers, human resource specialists, and other people who will use or be affected by the assessment results.

The developer will want to conduct thorough analyses of the organization and job that will be simulated in the exercise. Do not shortchange these important steps—gather extensive information at the outset. These investigations will be time well spent in the long run and will result in a better simulation exercise.

Before the actual exercise materials are created, the developer should write down a number of specifications for the exercise and get buy-in from important stakeholders. The specifications should cover the time involved, the industry in which the situation will be set, the types of problems to be embedded in the exercise, whether written or oral behaviors (or both) will be elicited, and the nature of the dimensions to be assessed and their difficulty level. Only after securing agreement on these basic parameters should the simulation materials be written. The materials should be clear and concise. Avoid topics or materials that give an advantage to certain groups, that may offend some people, or that appear to be biased in any way.

The developer should conduct pilot work to confirm that the exercise is working before using it to make important decisions about personnel. Some sort of formative evaluation is critical. At the very least, ask colleagues to read over the materials. Better yet, try out the exercise on people who are similar to the intended audience. Admittedly, this is sometimes difficult in situations where the content of the exercise must be kept highly secure.

The developer should keep in mind that the results of the assessment may be challenged in the future. Therefore, some evaluation of the reliability and validity of the process must be conducted to protect oneself and the organization. Because the assessors are critical to the success of simulation exercises, they must be chosen carefully, trained thoroughly, and evaluated for their abilities to carry out assessor activities.

The developer should make thorough documentation of all the activities involved in developing the simulation exercise: who was consulted in the job and organization analysis phase, who created the case materials, what pilot testing was carried out, who served as SMEs providing judgments of the content validity of the exercises, and how the observations were made and recorded.
Our strong advice here is to document, document, and document some more. Following these steps will lead to a simulation exercise that is both useful to the organization and defensible if challenged.

Many judgments must be made in the process of developing a simulation exercise. The developer and the organization represented will be well served by recording the rationale for making the many decisions along the way.
Appendix A Examples of Attributes Assessed by Simulation Exercises
DECISION MAKING

Problem Analysis: Effectiveness in identifying problems, seeking and obtaining pertinent data, relating data from different sources, recognizing important information, and identifying possible causes of problems.

Decision Analysis: Developing alternative courses of action and making decisions that reflect factual information, are based on logical assumptions, and take organizational resources into consideration with an unbiased and rational approach.

Judgment: Making decisions based on logical assumptions that reflect factual information.

Creativity: Generating and/or recognizing imaginative solutions and innovations in work-related situations; identifying radical alternatives to traditional methods and approaches.

Decisiveness: Readiness to render judgments, make decisions, state opinions, take action, or commit oneself.

Risk Taking: Taking or initiating action that involves a deliberate gamble to achieve a recognized benefit or advantage.

Numerical Analysis: Ability to analyze, organize, and present numerical data, for example, financial and statistical.
COMMUNICATIONS

Oral Communication: Effective expression in individual or group situations (includes organization, gestures, and nonverbal communication).

Persuasive Oral Communication: Ability to express ideas or facts in a clear and persuasive manner; convincing others to accept one's expressed point of view.

Oral Presentation: Effective expression when presenting ideas or tasks to an individual or to a group, given time for preparation (includes organization, gestures, and nonverbal communication).

Written Communication: Ability to express ideas clearly in writing, in good grammatical form, in such a way as to be clearly understood (includes the plan and format of the written work).

Technical Translation: Converting information from scientific or technical documents or other sources into a communication understandable by lay people.

Sales Ability and Persuasiveness: Using appropriate interpersonal styles and methods of communication to obtain agreement with or acceptance of an idea, plan, activity, or product from clients.

MANAGEMENT/ADMINISTRATION

Planning and Organizing: Establishing a course of action for self and/or others to accomplish a specific goal; planning proper assignments of personnel and appropriate allocation of resources.

Delegation: Effective allocation of decision making and other responsibilities to the appropriate subordinate or resource; using subordinates effectively.

Control: Establishing procedures to monitor and/or regulate the processes, tasks, or activities of self and subordinates, and job activities and responsibilities; taking action to monitor the results of delegated assignments or projects.

Development of Subordinates: Developing the skills and competencies of subordinates through training and development activities related to current and future jobs.

Organizational Awareness: Use of knowledge of changing situations and pressures inside the organization to identify potential organizational problems and opportunities.

Organizational Sensitivity: Actions that demonstrate awareness of the impact and implications of decisions and activities on other parts of the organization.
Commercial Awareness: Actions that demonstrate one understands the key business issues that affect the profitability of an enterprise, and taking action to maximize success.

Extra-Organizational Sensitivity: Actions that indicate an awareness of the impact and implications of decisions relevant to societal and governmental factors.

Extra-Organizational Awareness: Use of knowledge of changing societal and governmental pressures outside the organization to identify potential problems and opportunities.

LEADERSHIP

Leadership: Utilization of appropriate interpersonal styles and methods to guide individuals or groups toward task accomplishment.

Individual Leadership: Use of appropriate interpersonal styles and methods in guiding individuals (subordinates, peers, superiors) toward task accomplishment.

Group Leadership: Use of appropriate interpersonal styles and methods in guiding a group with a common task or goal toward task accomplishment; maintenance of group cohesiveness and cooperation; facilitation of group process.

INTERPERSONAL RELATIONS

Listening: Ability to pick out important information in oral communication; questioning and general reactions indicate active listening.

Interpersonal Sensitivity: Awareness of other people, the environment, and one's own impact on these. Actions indicate a consideration for the feelings and needs of others (but are not to be confused with "sympathy").

Impact: Makes a good first impression on other people and maintains that impression over time; shows an air of confidence.

Sociability: Ability to mix easily with other people; talkative, outgoing, and participative.

Teamwork: Willingness to participate as a full member of a team of which he or she is not necessarily the leader; effective contributor even when the team is working on something of no direct personal interest.

Conflict Resolution: Willingness to deal with differences of opinion among others (colleagues, subordinates) in the work group.
Confrontation: Ability and willingness to disagree or tactfully express opposing viewpoints; willingness to assert and defend one's position even when challenged.

MOTIVATION

Work Standards: Setting high goals or standards of performance for self, subordinates, others, and the organization; dissatisfied with average performance.

Resilience: Demonstrating actions to maintain effectiveness in situations of disappointment or rejection.

Energy: Creates and maintains a high level of appropriately directed activity; shows drive, stamina, and the capacity to work hard.

Commitment: Expresses belief in one's own job or role and its value to the organization; makes the extra effort for the company although it may not always be in one's own self-interest.

Self Motivation: The importance of work in attaining personal satisfaction; actions show a high need to achieve successfully.

Customer Service: Exceeding customer expectations by displaying a total commitment to identifying and providing solutions of the highest possible standard aimed at addressing customer needs.

Initiative: Originating action and maintaining active attempts to achieve goals; self-starting rather than passively accepting; takes action to achieve goals beyond what is necessarily called for; sees opportunities and acts on them.

Tenacity: Ability to persevere with an issue or problem until the matter is settled or the objective is no longer reasonably attainable.

Range of Interests: Demonstrates breadth and diversity of general business-related knowledge; is well informed.

Tolerance for Stress: Performance is stable under pressure or opposition.

Adaptability: Maintains effectiveness in varying environments with various tasks, responsibilities, or people.

Independence: Takes action in which the dominant influence is one's own convictions rather than the influence of others' opinions.

Tenacity: Stays with a position or plan of action until the desired objective is achieved or is no longer reasonably attainable.
Job Motivation: The extent to which activities and responsibilities available in the job overlap with activities and responsibilities that result in personal satisfaction.

Career Ambition: The expressed desire to advance to higher levels, with active efforts toward self-development for advancement.

INTRAPERSONAL

Independence: Taking actions in which the dominant influence is one's own convictions rather than the influence of others' opinions or reactions.

Risk Taking: Initiating action that involves a deliberate gamble.

Integrity: Ability to maintain social, ethical, and organizational norms in job-related activities.

Stress Tolerance: Stability of performance under pressure or opposition; makes controlled responses in stressful situations.

Flexibility: Ability to modify one's own behavior (i.e., adopt a different style or approach) to reach a goal.

Adaptability: Ability to remain effective within a changing environment, such as when faced with new tasks, responsibilities, or people.

MISCELLANEOUS

Attention to Detail: Seeks total task accomplishment through concern for all areas involved, no matter how small.

Technical and Professional Knowledge: Has a high level of understanding of relevant technical and professional information.

Recognition of Employee Safety Needs: Expresses concern about conditions that affect employees' safety and takes action to resolve inadequacies and discrepancies.

REFERENCES

Assessment and Development Consultants. (undated). Exercise catalogue. Surrey, England: Author.

Byham, W. C., Smith, A. B., & Paese, M. J. (2000). Grow your own leaders. Pittsburgh, PA: DDI.

Thornton, G. C., III, & Byham, W. C. (1982). Assessment centers and managerial performance. New York: Academic Press.
Appendix B Expanded Definitions of Two Dimensions
Planning and Organizing: Ability to efficiently establish an appropriate course of action for self and/or others to accomplish a specific goal; make proper assignments of personnel and appropriate use of resources.

Most of the planning and organizing required of a first-level supervisor in the Packaging Division is on a rather short-term basis. This type of planning is quite critical to the successful operation of the plant. A first-line supervisor from the machine shop stated that his first priority every morning was to line his people up for the jobs that needed to be accomplished. It was often necessary to move people around from job to job to make sure the people were experienced in the different types of jobs and to relieve boredom. One supervisor talked about the changing priorities which occur during the day, requiring him to reassign jobs. He said it was quite possible for an operator to work on three jobs during the day and never complete any of them. A first-line supervisor from the chemical area stated that his first priority when he comes to work is to see if his complete crew is in. If not, he has to switch people from their normal duty stations. He said this was often complicated by the fact that not everyone under his supervision can perform every job.

The critical incident discussions with the managers also yielded a number of examples where planning and organizing were critical to a supervisor's success. A story about the first-level supervisor who called the scheduling supervisor on Friday night and arranged for a trucker to have all the material moved out very early Monday morning, in time for the Monday morning shift, would be a good example of effective planning. Another example told about supervisors who have 22 to 25 people who must be scheduled equally over a number of different machines.
This requires a very complex record-keeping system which must be kept up to date on a daily basis. One particular supervisor was very good at scheduling, which was at least partially responsible for this person's high level of job performance.

Judgment: Ability to develop alternative solutions to problems, to evaluate courses of action, and to reach logical decisions.

First-line supervisors have to make decisions on a large variety of different topics. Some of these are actual technical decisions having to do with the product and the manufacturing process. It is imperative for the first-level supervisor to know when he should make these types of decisions and when these decisions should be referred to his manager. For instance, a first-level supervisor from the drying area discussed the decision as to whether or not to continue making casing if it is not perfect. He stated that if the casing is not absolutely within specification, it is important that he consult his supervisor to see what should be done. He said it often depends upon how badly the customer wants the casing. If the customer is not in a great hurry, then the casing can be thrown out. However, if the customer is in great need, then the manufacturing should continue. The first-level supervisor stated that since he did not have that type of information, it was important that he give these decisions to his manager.

Another major area concerning judgment has to do with setting priorities. A first-level supervisor in the machine shop said that very early in the morning he looks at the jobs to be accomplished for the day and identifies the ones which have to be done immediately and the ones which can be postponed. He said that sometimes he does not know all the variables which impact the decision, and he contacts the people who are placing the orders if there is a question.

From the first-line supervisor's perspective, a difficult area requiring judgment has to do with personnel problems. Reprimanding an employee seems to be high on the list of the difficult areas under this heading. Although a first-level supervisor cannot fire an employee, it is the responsibility of the first-level supervisor to recommend this if he or she feels it is necessary.

A number of instances of this dimension were generated from the critical incident component of the job analysis procedure. One situation described the relieving of a supervisor. The supervisor who was being relieved, in addition to writing in the log any problems which occurred on his shift, should have also described potential problem situations to the relieving supervisor. In one situation which was described, the supervisor being relieved did not tell the relieving supervisor of a problem they were having with a motor on a piece of equipment. As a result, when the equipment broke down, the supervisor was not prepared to take action. Another situation was described in which a supervisor was transferred.
overtime records, he noticed that one employee had no record of overtime but had not refused overtime either. When this situation was investigated, the hourly employee said that he had not been asked. Further investigation revealed that the hourly employee had another job at night and had asked the previous supervisor not to request him to take overtime. In this situation, the supervisor had made a bad decision in going against company policy.

Source: Thornton, G. C., III, & Byham, W. C. (1982). Assessment centers and managerial performance. New York: Academic Press.
Appendix C Post-Exercise Forms
A sample of the following statements can be used to gather information from participants after a simulation exercise is completed. The developer may wish to obtain the responses to the items evaluating the exercises anonymously so that the participants feel free to offer criticisms and suggestions without fear that the responses will influence performance ratings. With the exception noted, participants can be asked to respond to the following statements using a 5-point Likert scale with anchors ranging from 1 (Strongly Disagree) to 5 (Strongly Agree).
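Where the developer wants a quick summary of these anonymous ratings, a few lines of code suffice. The sketch below is a minimal example in Python (the item wordings and response values are hypothetical, not data from any administration); it tabulates the mean and the 1-to-5 response distribution for each statement:

    # Minimal sketch for summarizing post-exercise ratings on the 5-point
    # scale described above; the items and responses are hypothetical.
    from collections import Counter
    from statistics import mean

    responses = {
        "I believe the instructions were clear.": [5, 4, 4, 5, 3],
        "The exercise simulated important situations encountered on the job.": [4, 5, 3, 4, 4],
    }

    for item, ratings in responses.items():
        dist = Counter(ratings)
        # counts of 1 (Strongly Disagree) through 5 (Strongly Agree)
        print(item)
        print("  mean =", round(mean(ratings), 2),
              " distribution =", [dist.get(k, 0) for k in range(1, 6)])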
Items to Gather Self-Evaluation

The simulation exercise gave me a fair chance to demonstrate my job-related skills.
This simulation accurately identified my skills, abilities, and so on.
The other participants in my (group/game) hindered my ability to demonstrate my true skills.
The artificial constraints in this simulation exercise prevented me from showing my real abilities.
I believe my performance in the simulation exercise was not greatly different from what it would be in a real-life situation like this.
How would you rate your performance in this exercise? Poor, Fair, Adequate, Good, Excellent.
Rank the performance of the members of your group (including yourself) in terms of their contribution to accomplishing the goals of the simulation exercise: 1 = best contributor to 6 = least contributor.
Nominate the two persons who contributed the best ideas in your group discussion. You can nominate yourself.
Items to Gather Participants' Reactions to the Exercise

The _______ simulation measured important qualities required of _______.
I felt the simulation was difficult/challenging/stressful.
I found this exercise offensive.
The pressures and demands of the simulation exercise were realistic and reasonable.
The exercise simulated important situations and problems encountered on the job.
I clearly see the relationship between this tool and what is required on the job.
The simulation exercise was run efficiently.
The exercise was executed in a professional manner.
I believe the instructions were clear.
I was provided adequate materials to complete the simulation exercise.
The role player depicted a realistic character and situation in the exercise.
I learned some things about my skills as a result of just participating in the simulation.
As a result of (participating in this exercise/receiving feedback after this exercise), I gained a better understanding of the behaviors and dimensions needed for success as a _______.
I understand the feedback provided.
The observations and feedback regarding my performance were accurate.
I received balanced feedback, including an assessment of both my strengths and weaknesses.
The development recommendations I received were useful.
I plan to follow up on the feedback and engage in activities to develop my areas of weakness.
I plan on taking training in order to improve my skills as a result of participating in this exercise.
I would recommend that a colleague participate in this simulation exercise.
Appendix D Case Study
In this exercise, you are to role play a respected businessman or businesswoman whose advice is sought by Don and Helen Small, a young couple just out of college, who have been offered the opportunity to purchase a business for which Don has previously worked. You will have 2 hours to analyze the material provided and to reach a decision about whether or not Don and Helen should buy the business. You must make a "yes" or "no" decision. You ( ) will ( ) will not make a 7-minute presentation of your decision and the reasons for it. Notes may be used in your presentation, but you should not read from a script. You ( ) should ( ) should not submit your recommendations and supporting data in writing. All the information and assumptions used in reaching your decision should be explained. If an oral presentation is to be given, the written recommendations must be submitted prior to the presentation.

BACKGROUND

The month has been a hectic one for Don and Helen Small. First, Don and Helen graduated from college: Don from the Business School with a bachelor's degree in Economics and Helen from the School of Education with a bachelor's degree in Elementary Education. Then, as they had planned, they were married the day after graduation. Don has not yet accepted a full-time job. He has had several offers, however, and the last offer was rather good at $14,000 per year. The $14,000-per-year job is as a management trainee for a large retail discount chain. Don is still considering this job and also an offer from his present boss, Mr. Madison.
Mr. Madison is offering to sell the couple the Chester Nursery and Hardware Shop. Don has worked part-time at the Chester Nursery and Hardware Shop for Mr. Madison for 4 years. Mr. Madison has considered retiring for several years and now, with Don graduating from college, he feels it is an appropriate time to consider selling the business to Don before he has to hire and train someone to replace Don. While working for Mr. Madison, Don has averaged about $200 per month. Mr. Madison has offered to sell the business to Don for $40,000. Mr. Madison feels the land and the small building are worth about $25,000 and the inventory of plants and hardware is worth at least $15,000. Don thinks $15,000 is too high for the inventory, but he estimates that the land and the building, with its location on the edge of town, are worth more than $25,000. The business is located on a fairly well-traveled road at the city limits of a city of 400,000. The area just inside the city limits is slowly becoming commercial. Bordering the city and the garden shop is a middle-class suburb enjoying a moderate rate of growth. Don and Helen like the idea of going into business for themselves, where they can be together and work together. However, they are not sure this advantage will offset the many headaches associated with ownership. In addition, they are not certain just what level of income they could expect from the business. Mr. Madison does not pay himself a salary as such; any profit after expenses is considered Mr. Madison's salary. Mr. Madison told Don that his profit last year averaged about $1,000 per month. Mr. Madison has no other help besides Don; however, he does pay Mr. Cereen, a CPA, $100 a month to keep his books and prepare his income tax forms at the end of the year. Don and Helen's biggest stumbling block is, naturally, the $40,000. Just out of school, they have saved nothing, and they have no home to mortgage. The bank will allow a $40,000 mortgage on the Chester Nursery and Hardware. However, since the mortgage would be repaid over a 30-year period, they realize that during that period they will have paid the bank about $60,000 in interest (at 7.5%) in addition to repaying the $40,000. Don and Helen are uncertain as to what they should do. That is why they have come to you, a respected businessperson, for help.

Source: Development Dimensions International. This exercise is out-of-date, and thus the economic data are not relevant to the present time.
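A developer updating the case's economic data can check the interest figure against the standard fixed-rate amortization formula. The following is a minimal sketch in Python (the variable names are illustrative; the $40,000 principal, 7.5% rate, and 30-year term come from the case) that reproduces the roughly $60,000 of total interest:

    # Minimal check of the case's mortgage arithmetic; loan terms from the
    # case, names illustrative.
    principal = 40_000
    annual_rate = 0.075
    months = 30 * 12

    r = annual_rate / 12                       # monthly interest rate
    payment = principal * r * (1 + r) ** months / ((1 + r) ** months - 1)
    total_interest = payment * months - principal

    print("monthly payment: $%.2f" % payment)         # about $279.69
    print("total interest:  $%.2f" % total_interest)  # about $60,687

The computed payment is about $280 per month, and interest over the 360 payments comes to roughly $60,700, consistent with the case's estimate.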
Appendix E Oral Presentation Exercise: Extemporaneous Presentation
Management consultants in our branch of XYZ are called on to give numerous presentations to potential and existing clients. Sometimes you will have the opportunity to plan your presentation in advance. These presentations are relatively formal, and you will be able to prepare written handouts and visual aids. At other times, you will not have the opportunity for advance preparation. At these times, you will have to give extemporaneous presentations with no visual aids. For example, at a meeting with a client on one topic, you may be asked to present your thoughts on another topic. In a few minutes I will give you a topic and ask you to give a 3- to 5-minute presentation. After your presentation, I may ask you some questions about it. Any questions? OK, please stand up at the podium. What do you see as the three most crucial issues facing your profession in the provision of consulting services to organizations? [If the participant asks for more information, say "Your profession is accounting/management information systems/engineering/computer science. Is that correct? What are the three most crucial issues facing your profession when providing consulting services to organizations?"]
Appendix F Leaderless Group Discussion: Non-Assigned Roles
You are associates on the same work team. Your team leader has asked you how to handle each of the three situations described below. Your assignment is to:

• Discuss each problem.
• Discuss possible solutions.
• Anticipate possible problems with each solution.
• Discuss what could be done to correct new problems that might come up as a result of the solution (e.g., if the solution you suggest upsets someone or possibly creates new problems, how would you deal with these potential problems?). In short, make sure "all bases are covered."
• Reach an agreement, as a team, on a solution and plan of action.

You will have 30 minutes to discuss all three situations. Do you have any questions?
Situations to Be Discussed

1. Bob Street is a new member of your work group. In general, his work attitude and the quality of his work have been good. However, he tends to wander around and talk with people in other work areas when he should be working. He also seems to be taking advantage of the company's flextime schedule by working less on Mondays and squeezing in his hours on Fridays. Output from your work group is down. What should the group do?
2. Chris James and Pat Turner are associates on your work team. Both are popular and well liked. Until recently they were considered to be good workers and close friends. However, because of personal problems between the two, they have stopped talking to each other. A recent argument between them came to physical shoving. Others in the group have begun taking sides, and communication within the group has worsened. Production by the group is also beginning to slip. Last month an important order the group was responsible for missed an end-of-month shipping date. What should the group do about this situation?

3. Jane Church is an experienced associate on your work team. She has been with the company several years. She is generally well respected and has been helpful on group projects such as changing manuals and training. Lately, Jane's attitude toward safety and its importance on the job has not been good. For example, she rarely attends group safety meetings. Even worse, some of the newer team members who look up to Jane are beginning to pay less attention to safety. Some team members have tried talking to Jane about the importance of safety and the bad impression she is making on the newer employees, but so far nothing has changed. What should the group do about this situation?
Appendix G Leaderless Group Discussion: Assigned Roles
After meeting operating costs and target profit goals for the last fiscal year, Outdoor Outfitters (OO) has realized excess profits of $500,000. Moe Reynolds, the company founder, has asked each store to generate ideas about the best use of these profits. He would like to maximize the potential benefits of the money by using all or most of it on one project; however, he has left the final decision on the disbursement of these profits to the employees. Additionally, he has offered a $200 bonus to the store that develops the winning recommendation. The bonus money may be donated to a charity, used to throw a party, or used to buy store equipment such as a microwave or refrigerator. Employees from each of the six stores have met many times over the past months discussing and researching the different possibilities. As one of the assistant managers, you have been asked to represent your store at the meeting today, where a final decision must be reached. Your tasks are to:

1. Make a short, persuasive presentation (3 to 4 minutes) to the other assistant managers about your store's recommendation. You represent Store # .
2. Answer questions from the other assistant managers about your recommendation.
3. Listen to the other presentations and ask questions that will help the group decide upon the best use of the money.
4. Reach a final decision, agreed upon by all group members, about the allocation of profits.
The six recommendations are briefly described below. The attached page contains a more detailed description of your store's recommendation.

Store 1: Profit-Sharing. Distributing profits among all OO employees.
Store 2: Child Care. Establishing day care centers for children of OO employees.
Store 3: Staff Training. Training and cross-training for employees.
Store 4: Store Renovation. Remodeling and upgrading each of the six stores.
Store 5: Recreation Passes. Purchasing yearly recreational passes for OO employees.
Store 6: Management Training. Training in leadership and team management for company leaders.

You will have 10 minutes to review your store's recommendation, study the recommendations of other stores, and generate potential questions for the other assistant managers. The group will then have 50 minutes for presentations and making the final decision.

Store 1. Profit-Sharing: Each employee has contributed to the success of Outdoor Outfitters through hard work, knowledge, and excellent customer service skills. Thus, those who created the profits should share the profits. Each employee would receive a lump sum that reflects his or her share of the profits. This may be disbursed on an equal basis (each employee receives the same amount) or on a salary-based level (higher-salaried employees receiving a greater share). Store 1 is proposing that a profit-sharing plan be implemented for future years as well. Each year, a target profit goal would be set. Any profits over that threshold would go into the profit-sharing plan. Some years there would be excess profits; some years there may not be any.

Advantages:
• Will increase workers' consciousness of the company's welfare.
• Will reinforce worker commitment to Outdoor Outfitters.
• If the plan is implemented, the quality and effectiveness of the organization will improve over time.
• Employees will be more concerned about operational costs.

Additional Facts:
• Currently, Outdoor Outfitters employs 350 people.
• Employees are at no risk under profit-sharing plans. Their salaries do not decrease if the company does not reach its target profit goals.
• Past years have shown a steady increase in profits.

Store 2. Child Care: Store 2 is proposing that the profits be spent to establish two childcare centers for employees of Outdoor Outfitters. The profits would be used to buy, equip, and pay the first year's salaries for two high-quality childcare centers. Outdoor Outfitters employees would be able to use these centers at a reduced cost. Non-Outdoor Outfitters employees would also be able to enroll their children; however, they would pay full price. These full-price fees would pay for future salaries and equipment for the centers. A preliminary market analysis has revealed that there is a demand for quality childcare in the area and that the childcare centers should be self-sustaining after several years. Many employees work evening or weekend shifts and are dependent upon a patchwork of care for their younger children. Most rely upon a combination of relatives, neighbors, older children, limited-hours care facilities, and after-hours facilities.

Advantages:
• Will decrease worker turnover and absenteeism arising from undependable childcare.
• Will decrease stress in the workplace that is a consequence of a large number of employees being overwhelmed by off-the-job worries.
• Will decrease the financial strain on many employees.
• A voucher system may be used to provide other benefits for nonparent employees (e.g., increased medical coverage, health club memberships, etc.).

Additional Facts:
• Currently, Outdoor Outfitters employs 350 people.
• Of all Outdoor Outfitters employees, 45% have children under the age of 13. Half of these parents are heads of single-parent households.
• Currently, the communities in which Outdoor Outfitters stores are located have limited and/or poor-quality childcare facilities.
• Either of the two centers would be within 30 minutes of any given OO store.

Store 3. Staff Training: Store 3 is proposing that the profits be spent to offer extensive training seminars to employees. Currently, almost all training is conducted by supervisory- or managerial-level Outdoor Outfitters employees. This training would be intensified to thoroughly train new and current employees, and to cross-train those interested in learning about other aspects of the company. Additionally, funds would be used to send employees
to training offered by representatives of the various clothing lines Outdoor Outfitters carries, experts in sales techniques and resort-industry retail work, instructor workshops, safety seminars, and advanced guide and expedition training schools.

Advantages:
• Money invested in training will be returned to the company in increased productivity and sales.
• Will increase the motivation of employees and prepare them to assume greater responsibilities with the company.
• Cross-training will allow more employees to provide better service; the customers will always have access to well-trained and knowledgeable employees.
• Quality training accelerates learning and enhances retention.

Additional Facts:
• Approximately 250 people are currently employed at the staff level with Outdoor Outfitters.
• Clothing and equipment manufacturers often give training seminars free of charge and will send a representative to the store site.
• Other training seminars (e.g., sales workshops) range from $25 to $500 per person, depending upon the length of the training, among other factors.

Store 4. Store Renovation: Store 4 is proposing that the profits be spent to remodel and renovate all six Outdoor Outfitters stores. Many of them were built some time ago, and design flaws have been noticed in most. Some of the flaws are impacting the customers, such as too few dressing rooms or boot-fitting stations. Other problems are impacting the employees, such as difficult-to-reach storage shelves or a lack of employee break rooms. In addition, the stores are looking worn and outdated. Most are located in what started as rural, relatively unknown communities that have grown into expensive, more luxurious resort towns. The stores have not kept pace with the communities of which they are a part.

Advantages:
• Updated stores will be more appealing to customers, thus preserving the reputation of Outdoor Outfitters as quality stores.
• Renovations will create a safer, more pleasant work environment for employees.
• Needed renovations, such as increasing sales floor space for a popular line of goods (e.g., tee-shirts and souvenirs) while decreasing space used for an unpopular one, should increase profits.

Additional Facts:
• Stores 1, 2, and 3 were all built in the 1970s. Stores 4 and 5 are 15 years old, and the newest store is 13 years old.
• Aging stairways and carpets have led to employees and customers having tripping accidents. So far, no one has brought suit against Outdoor Outfitters.
• Some stores are not accessible to persons with disabilities.

Store 5. Recreation Passes: Store 5 is proposing that the profits be spent to purchase recreation passes for each full-time Outdoor Outfitters employee. These passes are issued by the Association of Resorts, which owns the five local ski areas, three golf courses, and the lake. Passes may be used for all of these activities, as well as for river fees, swimming pools, admission to the nearby National Park, and so on. Virtually all local outdoor activities charge admission or ticket fees, which add up to a large expense, especially for year-round residents and their families. If this proposal is accepted, OO employees would not have to pay for their individual passes; those employees interested in family passes would pay the price difference themselves.

Advantages:
• Outdoor Outfitters employees and their families will be able to enjoy the benefits of the community in which they live.
• Visibility of OO employees (using OO clothing and equipment, of course) enjoying the same facilities and amenities as their customers may increase sales for the store.
• This "bonus" for working for OO will increase commitment to the company.

Additional Facts:
• Individual recreation passes cost $800.
• Family passes cost $1,000.

Store 6. Management Training: Store 6 is proposing that the profits be spent to offer extensive management training seminars to managerial- and supervisory-level employees. Currently, almost all managerial and supervisory personnel have come up from the ranks of the sales staff. Their customer service
skills or technical ability made them valuable to the company, but few have training in management, leadership, or supervision. Store 6 proposes that an expert management training consultant be hired to conduct a training needs analysis and to provide ongoing training for the company's leaders in problem solving, people management, conflict resolution, and business management skills. Additional seminars and college courses may also be needed, which may be located in other parts of the state or country.

Advantages:
• Money invested in training will be returned to the company in increased productivity and sales.
• Strong leadership will filter down to staff levels, making the entire organization a better place to work.
• Knowledgeable leaders will be able to coach and mentor the remaining staff.
• Quality training accelerates learning and enhances retention.

Additional Facts:
• Eighty employees at the managerial or supervisory level are currently employed with Outdoor Outfitters.
• Preliminary research has indicated that about $50,000 is the going rate to engage a training specialist to conduct the needs analysis and preliminary training. Additional costs would be incurred as employees miss work, attend and travel to seminars, or take paid time off to attend college courses.

(Adapted from Larsh, 1996)
Appendix H One-on-One Interaction With Customer (A Partial Presentation of the Case Materials)
Candidate Instructions

You are Pat Stevens, a sales associate with Cottontail Publications. Your ability to interact successfully with customers will be a deciding factor in Cottontail's success. For this exercise, you will engage in a simulated phone conversation. The caller will be a Cottontail customer. Your objective is to respond to the caller in the best way possible in order to meet Cottontail's goals of quality customer service and increased sales. Please treat this simulation as if it were an actual phone conversation with a customer. Before the simulation begins, please take 15 minutes to review the materials you have been provided. These include: specifics of this case, a description of Cottontail's business and core values, a description of your position, and a partial product list for Cottontail.
Specifics of This Case

You are about to return a call to Sam Johnson, the director of a preschool, who regularly purchases books from Cottontail's workbook series every semester. Sam is a good customer but does not purchase anything other than the workbooks. You would like to increase your sales to this customer. Sam is very friendly, and you want to maintain good rapport with this customer. But you also realize that it is very easy to talk to this customer to the exclusion of making other sales calls. Given Sam's interest in and concern for a quality curriculum, it is likely that Sam is aware of what the competition has to offer. You would like to
help Sam become familiar with all that Cottontail has to offer and ensure that Sam's curriculum needs are met solely by Cottontail's products.
Cottontail Publications, Inc.

Cottontail began in the late 1970s as the brainchild of C. J. Smith, a children's book author. C. J.'s initial book became a bestseller and was hailed by educators as an example of how children's books could be smart, fun, and educational. After meeting some like-minded writers and animators, C. J. began a small publishing company to meet the growing need for educational materials in the early childhood education market. Cottontail is committed to the principles of participative education and active learning for children. The products are not only intended to teach children, but are designed to help kids "learn how to learn." Cottontail now offers not only books, but also curricula with accompanying hands-on learning toys, videotapes, and music CDs. These materials are designed to develop children's cognitive skills and promote self-esteem. The products are marketed toward children in preschool through second grade.

• Cottontail has grown rapidly over the years and now employs 150 people. The staff includes a creative team of writers and animators, a sales team (to which you belong), a packaging and shipping department, and an administrative staff.
• Cottontail sells its products directly to individuals, as well as to elementary schools, preschools, and educational bookstores.
• Cottontail has strong company values that the company's CEO (C. J. Smith) strives to demonstrate in all aspects of running the company. In turn, C. J. expects all employees to aspire to these values in their everyday work.
Core Values

• Model the type of behavior you are trying to teach children by showing your commitment to friendliness and higher principles.
• Strive to be innovative in product development, internal relations, and customer service.
• Always maintain the high quality of Cottontail's products and services. Do everything right the first time.
• Emphasize learner involvement and application of learned principles. Our products focus on learning, not just teaching.
• Do what it takes to meet the needs of customers and co-workers, but be sure to take into account the health of the company.
Characteristics of Your Sales Position

• You work in the sales department. Sales associates are responsible for identifying potential customers, building and maintaining relationships with customers, providing information about all aspects of products to customers, making sales to customers, maintaining customer account history and up-to-date account information, and entering orders in the computer system. You are not responsible for product development, order fulfillment, or shipping. However, you may have to interact with other departments in order to solve customer problems and/or to learn about the products you are selling.
• Your job offers autonomy so that you can meet customer needs. For example, you can offer different shipping options, such as overnight or 3-day. You can send out product samples and always have a money-back guarantee. You can give discounts to repeat customers who use your "automatic ordering" program every semester. This program lets you automatically ship each semester's new curriculum for a 10% discount.
• You interact with your customers only via phone, fax, and e-mail.
• In the past few years, competition in children's educational products has become more intense. Although the company wants to maintain its reputation for high-quality personal service, it has become clear that sales levels must continue to grow to maintain the company's competitive advantage.
Partial List of Publications

What Does Rabbit Do? Teaches children moral and ethical lessons. Asks them what they would do in Ricki Rabbit's place and then shows some possible outcomes of different decisions. A different series is available for each age group.
What Does Rabbit Say? Teaches children social skills and polite behavior with other children and adults. A different series is available for each age group.
Rabbit Learns to Read. Beginning reader for preschool- and kindergarten-age children. Covers the alphabet and simple sentence construction in a series of ten books.
Read on Rabbit! Continuation of the Rabbit Learns to Read series. Several volumes are available for 1st- and 2nd-grade children.

A teacher's manual, interactive toys and crafts, and CDs are available for each Cottontail book.
Video Series

ABSee Rabbit. This is a series of 15 videotapes that are designed to be used throughout the school year. Rabbit and his friends teach the alphabet and basic reading skills while traveling through a series of adventures. The videos also integrate moral and social skills lessons. This is the most expensive product that
the company sells and comes with companion workbooks so that the children can follow along with related lessons. The videos are the most interactive of the products.
Role Player Instructions

You are Sam Johnson, director of a preschool. You regularly purchase books from Cottontail's workbook series every semester. You have been interested in adding other Cottontail products but are not sure what is available. In addition to reordering your books, your call is partially social. Some specifics of your situation include:

1. You have additional money in the budget for new material but will not spend it without being convinced of its merit. Specifically, you would be interested in interactive toys and/or videos. You have seen samples of this type of product offered by other companies, but would like to see what Cottontail has to offer.
2. You will be renewing your regular order. You initially ordered Rabbit Learns to Read. You were impressed with this product, so you have ordered from the What Does Rabbit Do? series at the beginning of each semester for the last few years.
Your Thoughts and Beliefs

1. You love talking about children's education and recent developments in educational material. You look forward to calling to renew your order every semester.
2. There is a cheaper publisher ("A+") that you have been considering. But you like Cottontail because they always seem to have time for you. You appreciate the customer service Cottontail has extended to you in the past.
3. You like talking about your family (e.g., your kids or grandkids, or nieces/nephews) and often get off track from the purpose of the call. You feel that if the sales associate does not have the time to invest in a relationship with you, you do not have time to invest your money in his or her products.
Behavioral Issues

You want to create plenty of opportunities for the candidate to offer suggestions. You do not need to accept them all, or accept any right away. Try to balance talking (so you can help the candidate determine your needs) with allowing the candidate to speak (so you can assess his or her behavior). Talk about generally needing educational materials, but include a lot of unnecessary information about the teachers, the school, the weather, and so forth.
Express your interest in buying additional types of products, but balk initially at the thought of buying the more expensive video and CD products. Be sure to study the candidate's materials so that you understand what information he or she has available. Do not ask the candidate something that he or she does not have the specifics to answer (e.g., What is the specific price of that book?). Also be sure to manage the time so that you do not run over the allotted time. It is all right to run a minute or two over, but you should cut the conversation off at that point. Finally, not every possible detail is presented here, so feel free to add details based on your past experiences. Just be sure not to bring in information to which the candidate cannot realistically respond (e.g., A+ charges $9.95; how much do you charge?).
Appendix I In-Basket Exercise
You are Chris Taylor, Director of ICCCU. Today is Friday, September 19. You are leaving for a week-long conference tonight and will be busy in meetings for the rest of the day. Therefore, you must review all of the items in your in-basket before you leave. You have 75 minutes before the beginning of your next meeting to read the following instructions and accomplish this task. There is no one available in the office at this time to assist you. You must work alone, and you have access only to the materials that are on your desk, including the in-basket, a calendar, stationery, and paper clips. There is no phone, no e-mail, and no help from anyone while you are completing your in-basket. Your time is limited, but because you were counseled last week by your supervisor on your failure to respond to an important memo in a timely manner, it is important that you effectively review all of the materials in your in-basket before you go to the conference. However, you may not have enough time to take action on all of the items. In order to successfully complete this exercise, please keep the following tips in mind as you go through the in-basket items:

1. You must let your associates know what you plan to do with each item in the in-basket. Indicate exactly what you want done with the item while you are gone.
2. Every action you take, plan to take, or want someone else to take must be in writing. You do not have access to e-mail, a telephone, or an assistant at this time.
3. When going through the in-basket, it is recommended that you write as many of your responses, suggestions, and justifications as possible directly on the in-basket items. You can write responses on additional paper; clip any additional notes to the relevant items. When using the provided stationery, be sure to use paper clips to attach the notes, memos, letters, and so on that you write in response to the relevant in-basket items.

When you are finished, put all items on your desk into the clasp envelope. Later in the day, you will be asked some follow-up questions regarding the in-basket exercise, and you will have the opportunity to give your reactions, ask questions, and clarify the approach you used.
Sample of In-Basket Items

Date: September 19
To: Chris Taylor
From: Fran McCoy, Director, PMD
Re: T J Maxwell

As you know, I also received the letter from Li Xiang expressing concern about T J Maxwell's behavior. I know you are leaving town tonight, but I need to know your plans for dealing with this matter. Please place a specific action plan in my box before you leave. This should include: your interpretation of the seriousness of this issue, copies of your responses to T J and Li Xiang, and any planned follow-up activities.
To: Chris
From: Kelly
Date: September 19

I'm looking forward to having a one-on-one meeting with you as soon as possible. I've completed all of the assignments that you, Pat, and T J have given me. I'm feeling sort of restless and would like some new and challenging work. I feel that I am not contributing to ICCCU as much as the other members of the unit. I'm hoping that there is a project that I could take the lead on that will have real impact on our unit. When I was hired for this position, it was implied that I would be doing more than secretarial work and that I could possibly be promoted to another position soon. I'm beginning to take exception to the fact that I haven't been able to learn or grow in this position. I hope that we can work out some solution to this problem that is acceptable to everyone.
Just so you have an idea of what additional tasks I would like to work on, I have provided two suggestions:

1. I would like to help develop the next external customer survey.
2. I could learn how to do simple analyses in Excel to help with the next survey analysis.
To: Chris Taylor
From: Chris Johnson
Date: September 19

Hello. We haven't met yet, but I'm the new data analysis person in the Boulder County Department of Human Services. I'm told that my predecessor (Jamie) relied on you quite a bit for expertise and assistance in data management. I know this was partly out of your personal friendship with Jamie and that this is really outside of your job description. But I was wondering if you would mind getting together with me sometime so we could discuss the measurement system you're using. I'm supposed to meet with my supervisor 3 days from now to discuss my initial framework and plans for doing my analyses. Is there any possibility we could talk before then to help me iron out a tentative plan? I'd also like to take a more thorough look at your overall measurement system at some point, if that is OK. Any and all help you can provide would be appreciated... what do you think you'd be able and willing to provide?

Thanks,
Chris Johnson
Date: September
To: Chris Taylor
From: T J Maxwell
RE: Furniture

I heard that the administrative assistants are meeting to talk about getting new furniture. I want to go on record saying that I requested new furniture over 6 months ago and no one has responded to me. Why are the administrative assistants being shown favoritism? Li Xiang has known that I asked for new furniture and is probably the one leading the administrative assistants' efforts just to make me mad. I cannot tolerate being shown disrespect in this way.
To: Chris
From: Kelly
Date: September 18

I was attempting to set up the meeting you requested with the five county employees from Grand Junction who will be visiting on October 16. I haven't been able to find an hour that all six of you can meet on that day due to their conflicting schedules. Could you please help me find a one-hour time slot that will work? They will arrive from Grand Junction at approximately 10:00 a.m.

Taylor Johnson will be in meetings with other divisions from 10 to 12 and 4 to 5.
Corey Klimowski will be busy from 11 to 1 and 2:30 to 4.
Shannon Saunders has meetings scheduled from 10 to 11 and 12 to 1:30.
Kim Lui is available from 10 to 11 and 1 to 3.
Frances Blackerby will be busy from 11 to 12:30 and 3 to 4:30.
Your only scheduled commitment on this day is a meeting with Fran McCoy from 3 to 4.
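A note for the exercise developer (not part of the participant materials): the scheduling item above admits exactly one answer, 1:30 to 2:30 p.m. This can be verified with a short interval-intersection sketch like the one below (Python; representing times as 24-hour decimals and scanning in half-hour steps are assumptions of this sketch, not part of the exercise):

    # Minimal check of the memo's constraints; 14.5 means 2:30 p.m., and the
    # workday is assumed to run from 10:00 to 5:00.
    busy = {
        "Taylor Johnson":    [(10, 12), (16, 17)],
        "Corey Klimowski":   [(11, 13), (14.5, 16)],
        "Shannon Saunders":  [(10, 11), (12, 13.5)],
        "Kim Lui":           [(11, 13), (15, 17)],   # free only 10-11 and 1-3
        "Frances Blackerby": [(11, 12.5), (15, 16.5)],
        "Chris Taylor":      [(15, 16)],
    }

    def free(start):
        """True if everyone is free for the hour beginning at `start`."""
        end = start + 1
        return all(end <= b or start >= e
                   for blocks in busy.values() for (b, e) in blocks)

    slots = [t / 2 for t in range(20, 33) if free(t / 2)]
    print(slots)  # [13.5] -> only the 1:30-2:30 p.m. hour works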
To: Chris
From: Terry Pierce, Executive Director, CDHS
Date: September 18

The next quarterly meeting for executive directors has been set for November 28, 2003. Among the topics to be discussed are impending legislation affecting CDHS and current coverage of the organization in the media. We are sending this information to you because we have an open time slot from 2 to 2:45 p.m. during the meeting. This could be a good opportunity for you to publicize the accomplishments and recent developments within ICCCU. There are three other people we are also targeting to fill the slot, so please let us know if you are interested.
September 18
ABC Medical Group

Dear Chris,

It has come to my attention that several of the employees in my organization have experienced a negative attitude on the part of Kelly Parks in your unit. It seems Kelly is not forwarding messages to members of your unit and is causing us to be unable to get the information that we need in a timely
manner. This is not the first time I've heard complaints about your unit or Kelly Parks. CDHS has paid a lot of lip service to the idea of "customer service." However, the professional relationship between our organization and your unit is being harmed by these actions. Is this the message you want to send to your customers? Is this the way you plan to do business? Perhaps Kelly needs some training or an attitude adjustment. Or perhaps the entire unit needs a fresh look. Please let me know what you plan to do to remedy this problem.

Jesse Cornwall, Director
Date: September 17
To: Chris Taylor, Director, ICCCU
From: Li Xiang, Administrative Assistant, PMD
CC: Fran McCoy, Director, PMD
RE: T J Maxwell

I am extremely concerned about an employee in your division. T J Maxwell has displayed considerable hostility toward me over the past few months. There are three specific incidents about which I am concerned. The first occurred in the break room in July. I was conversing with a co-worker and T J was reading the paper. Suddenly, T J stood up and screamed that we were being too loud. T J used profane and inappropriate language and stormed out of the room before we could respond. In the second situation, I was accused of spreading defamatory rumors about T J and his family. T J then knocked all of the supplies off my desk and made racial slurs against me. The third incident occurred this past Monday. I was walking to my car after work, and as I was crossing the parking lot, T J was leaving. Inadvertently, I got in T J's way. Rather than slowing down to allow me to cross the parking lot, T J accelerated his car and yelled that I would be killed if I didn't move. I felt extremely threatened. I feel awkward complaining to you about another employee. However, T J's anger has escalated and I fear the consequences of letting this behavior continue.
Appendix J Oral Fact Finding Exercise
Instructions for Participant

After this introduction, you will be given 5 minutes to study a brief description of a situation that has arisen in a small software development organization, and to prepare to ask a resource person questions about the situation. You are to play the role of an independent consultant who has been hired to recommend a future course of action. The resource person has considerable information available and will answer your questions. The resource person is impartial and is not playing any role. You will have 15 minutes to ask questions and to make a recommendation. The resource person may then ask questions regarding your decision.
The Situation

Medcomp, a small software development company, has developed a number of programs for small and medium-sized medical group practices. John Baumgartner has been working on a new program to track the medical records of patients. His boss, Dave Evans, has decided to terminate work on the medical records program and shift John to another project. John has appealed this decision to the owner of Medcomp. The owner has now asked you to investigate the situation and make a recommendation on whether to continue work on the medical records program.
Other Projects. Dave wants John to shift to another project that needs additional programmer time. This project deals with insurance claims; it is much larger and has the potential to earn substantially more profit for Medcomp. John would be one of three developers on this project, which is being led by someone with whom John does not care to work. Dave is unaware of the conflict between John and the other developer. Moreover, John believes he will be assigned to work on one phase of the insurance claims product that he is not fully competent to handle. John does not wish to divulge his limitations in this area.
Appendix K Organization Charts
ORGANIZATION CHART 1: ADVANCED TECHNOLOGY

President: Oliver Turner

Reporting to the President:
Vice President, Administration: Bob Evans
Vice President, Manufacturing: Gary Dutton
Vice President, Marketing: Susan Erman
Vice President, Engineering: Michelle Malin
Vice President, Human Resources: John Baumer
Vice President, Finance: Janice McCoy

Plant Managers:
Airplane Governors: Felicia Benson
Power Plant Governors: Jamie Garcia
Electronic Governors: Herb Meyer

Plant staff:
Administrative Aide: Jim McCoy
Plant Human Resources Manager: John Braun

Department Heads:
Supply: Mary Getsum
Production: Bill Maykum
Distribution: Gary Forth

ORGANIZATION CHART 2

Department Head, Production: Bill Maykum

Production Supervisor 1: Nathan Mondragon
Associates: Betty Morton, Carl Carlson, Deb Sanchez, Jamal Jones, Brian Harper, Sally Brady
Quality Control: Stu Black, Jane Corey

Production Supervisor 2: Les Brown
Associates: Enso Spielo, Gayle Mendez, Jake Town, Ted Shore, Amy Bennett
Quality Control: Howard Marks, Joan Ramsay

Production Supervisor 3: Sara Fleming
Associates: Al Smith, Mary Ramos, Randy Harris, Jim Washington
Quality Control: Cam Caldwell
References
Adkins, D. C. (1974). Test construction: Development and interpretation of achievement tests (2nd ed.). Columbus, OH: Merrill.
Ahmed, Y., Payne, T., & Whiddett, S. (1997). A process of assessment exercise design: A model of best practice. International Journal of Selection and Assessment, 5, 62-68.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.
Arvey, R. D., & Sackett, P. R. (1993). Fairness in selection: Current developments and perspectives. In N. Schmitt & W. Borman (Eds.), Personnel selection in organizations (pp. 171-202). San Francisco: Jossey-Bass.
Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance. Personnel Psychology, 44, 1-26.
Bass, B. M. (1950). The leaderless group discussion. Personnel Psychology, 3, 17-32.
Bass, B. M. (1954). The leaderless group discussion. Psychological Bulletin, 51, 465-492.
Brannick, M. T., Salas, E., & Prince, C. (1997). Team performance assessment and measurement: Theories, methods, and applications. Mahwah, NJ: Lawrence Erlbaum Associates.
Brannick, M. T., & Levine, E. L. (2002). Job analysis. Thousand Oaks, CA: Sage.
Bray, D. W., & Grant, D. L. (1966). The assessment center in the measurement of potential for business management. Psychological Monographs, 80(17, Whole No. 625), 1-27.
Caldwell, C., Thornton, G. C., III, & Gruys, M. L. (2003). Ten classic assessment center errors: Challenges to selection validity. Public Personnel Management, 32(1), 73-88.
Campbell, D. T. (1958). Systematic errors on the part of human links in communication systems. Information and Control, 1, 334-369.
Campion, M. A., Palmer, D. K., & Campion, J. E. (1997). A review of structure in the selection interview. Personnel Psychology, 50, 655-702.
Chan, D., Schmitt, N., Sacco, J. M., & DeShon, R. P. (1998). Understanding pretest and posttest reactions to cognitive ability and personality tests. Journal of Applied Psychology, 83, 471-485.
Costa, P. T., & McCrae, R. R. (1992). NEO PI-R professional manual. Odessa, FL: Psychological Assessment Resources.
Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Fort Worth, TX: Harcourt.
Cronbach, L. J. (1990). Essentials of psychological testing (5th ed.). New York: Harper & Row.
Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements. New York: Wiley.
Douglas, E. F., McDaniel, M. A., & Snell, A. F. (1996, August). The validity of non-cognitive measures decays when applicants fake. Paper presented at the 1996 meeting of the Academy of Management, Cincinnati, OH.
Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice. (1978, August 25). Uniform guidelines on employee selection procedures. Federal Register, 43(166), 38290-38309.
Fallen, J. D., Kudisch, J. D., & Fortunato, V. J. (2000, April). Using conscientiousness to predict productive and counterproductive work behaviors. Paper presented at the 15th annual conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.
Frederiksen, N., Saunders, D. R., & Wand, B. (1957). The in-basket test. Psychological Monographs, 71(9, Whole No. 438).
Gael, S. (1983). Job analysis: A guide to assessing work activities. San Francisco: Jossey-Bass.
Gael, S., Cornelius, E. T., III, Levine, E., & Salvendy, G. (Eds.). (1983). The job analysis handbook for business, industry, and government. New York: Wiley.
Gatewood, R., Thornton, G. C., III, & Hennessey, H. W., Jr. (1990). Reliability of exercise ratings in the leaderless group discussion. Journal of Occupational Psychology, 63, 331-342.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., III, & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.
Gaugler, B. B., & Thornton, G. C., III. (1989). Number of assessment center dimensions as a determinant of assessor accuracy. Journal of Applied Psychology, 74, 611-618.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco: Freeman.
Goldstein, A. P., & Sorcher, M. (1974). Changing managerial behavior. New York: Pergamon Press.
Goodge, P. (1995). Design options and outcomes: Progress in development centre research. Journal of Management Development, 14, 55-59.
Gorow, F. F. (1966). Better classroom tests. San Francisco: Chandler.
Gronlund, N. E. (1982). Constructing achievement tests (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Guilford, J. P. (1954). Psychometric methods. New York: McGraw-Hill.
Guion, R. M. (1998). Assessment, measurement, and prediction for personnel decisions. Mahwah, NJ: Lawrence Erlbaum Associates.
Harris, M. M. (1989). Reconsidering the employment interview: A review of recent literature and suggestions for future research. Personnel Psychology, 42, 691-726.
Harvey, R. J. (1991). Job analysis. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial & organizational psychology (Vol. 2, pp. 71-164). Palo Alto, CA: Consulting Psychologists Press.
Harvey, V. S. (1997). Improving readability of psychological reports. Professional Psychology: Research and Practice, 28, 271-274.
Hedge, J. W., & Kavanagh, M. J. (1988). Improving the accuracy of performance evaluations: Comparison of three methods of performance appraiser training. Journal of Applied Psychology, 73, 68-73.
Hennessy, J., Mabey, B., & Warr, P. (1998). Assessment center procedures: An experimental comparison of traditional, checklist, and coding methods. International Journal of Selection and Assessment, 6, 222-231.
Hoffman, C. C., & Thornton, G. C., III. (1997). Examining selection utility where competing predictors differ in adverse impact. Personnel Psychology, 50, 455-470.
Hough, L. M. (1998). Personality at work: Issues and evidence. In M. Hakel (Ed.), Beyond multiple choice: Evaluating alternatives to traditional testing for selection (pp. 131-159). Hillsdale, NJ: Lawrence Erlbaum Associates.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternate predictors of performance. Psychological Bulletin, 96, 72-98.
International Task Force on Assessment Center Guidelines. (2000). Guidelines and ethical considerations for assessment center operations. Public Personnel Management, 29, 315-331.
James, L. R., Demaree, R. G., & Wolf, G. (1993). rwg: An assessment of within-group interrater reliability. Journal of Applied Psychology, 78, 306-309.
Kenrick, D. T., & Funder, D. C. (1988). Profiting from controversy: Lessons from the person-situation debate. American Psychologist, 43, 23-34.
Kleinmann, M., Kuptsch, C., & Köller, O. (1996). Transparency: A necessary requirement for the construct validity of assessment centers. Applied Psychology: An International Review, 45, 67-84.
Kluger, A. N., & DeNisi, A. (1996). Effects of feedback intervention on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254-284.
Knowles, M. (1973). The adult learner: A neglected species. Houston, TX: Gulf Publishing.
Kolk, N. J. (2001). Assessment centers: Understanding and improving construct-related validity. Enschede, The Netherlands.
Krueger, R. A. (1988). Focus groups: A practical guide for applied research. Newbury Park, CA: Sage.
Kudisch, J. D., Avis, J. M., Fallen, J. D., Thibodeaux, H. F., III, Roberts, F. E., Rollier, T. J., & Rotolo, C. T. (1999, June). Benchmarking for success: A look at today's assessment center practices worldwide. Paper presented at the 27th annual meeting of the International Congress on Assessment Center Methods, Orlando, FL.
Kudisch, J. D., Lundquist, C., & Smith, A. F. R. (2001, September). Reactions to "dual-purpose" assessment center feedback: What does it take to get participants to buy into and actually do something with their feedback? Paper presented at the 29th annual meeting of the International Congress on Assessment Center Methods, Frankfurt, Germany.
Ladd, R. T., Atchley, E. K. P., Gniatczyk, L. A., & Baumann, L. B. (2002). An evaluation of the construct validity of an assessment center using multiple regression importance analysis. In J. D. Kudisch (Chair), Alternative approaches to examining assessment center construct validity. Symposium conducted at the meeting of the Society for Industrial and Organizational Psychology, Toronto, Canada.
Lance, C. E., Newbolt, W. H., Gatewood, R. D., Foster, M. R., French, N. R., & Smith, D. B. (2000). Assessment center exercise factors represent cross-situational specificity, not method bias. Human Performance, 13, 323-353.
Lance, C. E., Foster, M. R., & Gentry, W. A. (2002). Assessor cognitive processes in an operational assessment center. Poster session presented at the annual meeting of the Society for Industrial and Organizational Psychology.
Larsh, S. L. (2000). Effects of feedback format on self-efficacy and subsequent performance: A comparison of attribute-based and situation-based developmental feedback. Unpublished doctoral dissertation, Colorado State University, Fort Collins, CO.
Larsh, S. L. (2001).
Reactions to attribute- versus exercise-based feedback: A developmental assessment center study. Published master's thesis, Colorado State University, Fort Collins, CO.
Latham, G. P., & Saari, L. M. (1979). Application of social-learning theory to training supervisors through behavioral modeling. Journal of Applied Psychology, 64, 239-246.
Latham, G. P., Wexley, K. N., & Pursell, E. D. (1975). Training managers to minimize rating errors in the observation of behavior. Journal of Applied Psychology, 60, 550-555.
Lawlis, G. F., & Lu, E. (1972). Judgment of counseling process: Reliability, agreement, and error. Psychological Bulletin, 78, 17-20.
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575.
Leenders, M., & Erskine, J. (1973, 1989). Case research: The writing process. London, Ontario: Research and Publications Division, University of Western Ontario.
Lievens, F. (1998). Factors which improve the construct validity of assessment centers: A review. International Journal of Selection and Assessment, 6, 141-152.
Lievens, F. (2002). Trying to understand the different pieces of the construct validity puzzle of assessment centers. Journal of Applied Psychology, 87, 675-686.
Lievens, F., & Klimoski, R. J. (2001). Understanding the assessment center process: Where are we now? In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 16). New York: Wiley.
Lorenzo, R. V. (1984). Effects of assessorship on managers' proficiency in acquiring, evaluating, and communicating information about people. Personnel Psychology, 37, 617-634.
McCall, M. W., & Lombardo, M. M. (1978). Looking Glass: An organizational simulation (Tech. Rep. No. 12). Greensboro, NC: Center for Creative Leadership.
McCall, M. W., & Lombardo, M. M. (1982). Using simulation for leadership and management research: Through the looking glass. Management Science, 28, 533-549.
McCormick, E. J. (1979). Job analysis: Methods and applications. New York: AMACOM.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1, 30-46.
Middendorf, C. H., & Macan, T. H. (2002). Note-taking in the employment interview: Effects on recall and judgments. Journal of Applied Psychology, 87, 293-303.
Mischel, W. (1973). Toward a cognitive social learning reconceptualization of personality. Psychological Review, 80, 252-283.
Morgeson, F. P., & Campion, M. A. (1997). Social and cognitive sources of potential inaccuracy in job analysis. Journal of Applied Psychology, 82, 627-655.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
Office of Strategic Services Assessment Staff. (1948). Assessment of men: Selection of personnel for the Office of Strategic Services. New York: Rinehart.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance [Monograph]. Journal of Applied Psychology, 78, 679-703.
Page, B. T. (1995). Assessment center handbook. Longwood, FL: Gould.
Pigors, P. (1976). Case method. In R. L. Craig (Ed.), Training and development handbook (pp. 35.1-35.12). New York: McGraw-Hill.
Pulakos, E. D. (1984). A comparison of training programs: Error training and accuracy training. Journal of Applied Psychology, 69, 581-588.
Pulakos, E. D., & Schmitt, N. (1996). An evaluation of two strategies for reducing adverse impact and their effects on criterion-related validity. Human Performance, 9, 241-258.
Robson, C. (1993). Real world research: A resource for social scientists and practitioner-researchers. Oxford: Blackwell.
Ryan, A. M., Daum, D., Bauman, T., Grisez, M., Mattimore, K., Naladka, T., & McCormick, S. (1995).
Direct, indirect, and controlled observation and rating accuracy. Journal of Applied Psychology, 80, 664-670. Ryan, A. M., Ployhart, R. E., &. Friedel, L. A. (1998). Using personality testing to reduce adverse impact: A cautionary note. Journal of Applied Psychology, 83, 298-307.
Rynes, S. L. (1991). Recruitment, job choice, and post-hire consequences: A call for new research directions. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial & organizational psychology (Vol. 2, pp. 399-444). Palo Alto, CA: Consulting Psychologists Press.
Rynes, S. L., & Connerley, M. L. (1993). Applicant reactions to alternative selection procedures. Journal of Business and Psychology, 7, 261-277.
Sackett, P. R., & Dreher, G. F. (1982). Constructs and assessment center dimensions: Some troubling empirical findings. Journal of Applied Psychology, 67, 401-410.
Sackett, P. R., & Ellingson, J. E. (1997). The effects of forming multi-predictor composites on group differences and adverse impact. Personnel Psychology, 50, 707-721.
Sagie, A., & Magnezy, R. (1997). Assessor type, number of distinguishable dimension categories, and assessment center construct validity. Journal of Occupational and Organizational Psychology, 70, 103-108.
Schippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., Kehoe, J., Pearlman, K., & Sanchez, J. I. (2000). The practice of competency modeling. Personnel Psychology, 53, 703-740.
Schippmann, J. S., Prien, E. P., & Katz, J. A. (1990). Reliability and validity of in-basket performance measures. Personnel Psychology, 43, 837-859.
Schleicher, D. J., Day, D. V., Mayes, B. T., & Riggio, R. E. (1999, April). A new frame of reference training: Enhancing the construct validity of assessment centers. Paper presented at the 14th Annual Society for Industrial and Organizational Psychology Conference, Atlanta, Georgia.
Schleicher, D. J., Day, D. V., Mayes, B. T., & Riggio, R. E. (2002). A new frame-of-reference training: Enhancing the construct validity of assessment centers. Journal of Applied Psychology, 87, 735-746.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274.
Schmitt, N., Schneider, J. R., & Cohen, S. A. (1990). Factors affecting the validity of a regionally administered assessment center. Personnel Psychology, 43, 1-12.
Schneider, B., & Konz, A. M. (1989). Strategic job analysis. Human Resource Management, 28, 51-63.
Shavelson, R. J., & Webb, N. M. (1991). Generalizability theory: A primer. Newbury Park, CA: Sage.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.
Smith, F. D. (1991). Work samples as measures of performance. In A. K. Wigdor & B. F. Green, Jr. (Eds.), Performance assessment for the workplace (pp. 27-52). Washington, DC: National Academy Press.
Society for Industrial and Organizational Psychology. (1987). Principles for the validation and use of personnel selection procedures (3rd ed.). College Park, MD: Author.
Spychalski, A. C., Quiñones, M. A., Gaugler, B. B., & Pohley, K. (1997). A survey of assessment center practices in organizations in the United States. Personnel Psychology, 50, 71-90.
Streufert, S., Pogash, R., & Piasecki, M. (1988). Simulation-based assessment of managerial competence: Reliability and validity. Personnel Psychology, 41, 537-557.
Streufert, S., & Swezey, R. W. (1986). Complexity, managers, and organizations. New York: Academic Press.
Sulsky, L. M., & Day, D. V. (1992). Frame-of-reference training and cognitive categorization: An empirical investigation of rater memory. Journal of Applied Psychology, 77, 501-510.
Sulsky, L. M., & Day, D. V. (1994). Effects of frame-of-reference training on rater accuracy under alternative time delays. Journal of Applied Psychology, 79, 535-543.
Taylor, P., & Small, B. (2000, April). A meta-analytic comparison of situational and behavioral description interview questions. Paper presented at the 15th Annual Conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
Tett, R. P. (1996). Traits, situations, and managerial behaviour: Test of a trait activation hypothesis. Unpublished doctoral dissertation, University of Western Ontario, Canada.
Tett, R. P., & Guterman, H. A. (2000). Situation trait relevance, trait expression, and cross-situational consistency: Testing a principle of trait activation. Journal of Research in Personality, 34, 397-423.
Thornton, G. C., III. (1992). Assessment centers in human resource management. Reading, MA: Addison-Wesley.
Thornton, G. C., III, & Byham, W. C. (1982). Assessment centers and managerial performance. New York: Academic Press.
Thornton, G. C., III, & Cleveland, J. C. (1990). Developing managerial talent through simulation. American Psychologist, 45, 190-199.
Thornton, G. C., III, & Rupp, D. E. (2003). Simulations and assessment centers. In J. Thomas (Ed.), Industrial and organizational assessment. New York: Wiley.
Thornton, G. C., III, & Zorich, S. (1980). Training to improve observer accuracy. Journal of Applied Psychology, 65, 351-354.
Tinsley, H. E. A., & Weiss, D. J. (1975). Interrater reliability and agreement of subjective judgments. Journal of Counseling Psychology, 22, 358-376.
Tziner, A., Ronen, S., & Hacohen, D. (1993). A four-year validation study of an assessment center in a financial corporation. Journal of Organizational Behavior, 14, 225-237.
Wernimont, P., & Campbell, J. (1968). Signs, samples, and criteria. Journal of Applied Psychology, 52, 372-376.
Wigdor, A. K., & Green, B. F., Jr. (Eds.). (1991). Performance assessment for the workplace (Vols. 1 and 2). Washington, DC: National Academy Press.
Woehr, D. J. (1994). Understanding frame-of-reference training: The impact of training on the recall of performance information. Journal of Applied Psychology, 79, 525-534.
Wonderlic. (1992). Wonderlic Personnel Test user's manual. Libertyville, IL: Wonderlic Personnel Test.
Yin, R. K. (1989). Case study research. London: Sage.
Zedeck, S. (1986). A process analysis of the assessment center method. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (Vol. 8, pp. 259-296). Greenwich, CT: JAI.
Zickar, M. J., & Robie, C. (1999). Modeling faking good on personality items: An item-level analysis. Journal of Applied Psychology, 84, 552-563.
Author Index
A
Ajzen, I., 100
Algina, J., 188
Arvey, R. D., 8
Atchley, E. K. R., 193
Avis, J. M., 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
B
Barrick, M. R., 7
Bass, B. M., 85
Bauman, T., 172
Baumann, L. B., 193
Bentson, C., 11, 164
Brannick, M. T., 53, 54
Bray, D. W., 11, 116, 143
Byham, W. C., 5, 6, 8, 9, 10, 33, 41, 52, 85, 98, 116, 129, 158, 189, 193
C
Caldwell, C., 163
Campbell, D. T., 169
Campbell, J., 6
Campion, J. E., 9
Campion, M. A., 9, 56
Chan, D., 8
Cleveland, J. C., 4, 11, 13, 33, 151
Cohen, S. A., 31, 164
Connerley, M. L., 10
Cornelius, E. T., III, 54
Costa, P. T., 7
Crocker, L., 188
Cronbach, L. J., 188
D
Daum, D., 172
Day, D. V., 168, 169, 193
Demaree, R. G., 188
DeNisi, A., 177
DeShon, R. P., 8
Douglas, E. F., 8
Dreher, G. F., 194
E
Ellingson, J. E., 8
Erskine, J., 70
F
Fallen, J. D., 8, 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
Fishbein, M., 100
Flanagan, J. C., 55
Fleiss, J. L., 188
Fortunato, V. J., 8
Foster, M. R., 194
Frederiksen, N., 116
French, N. R., 194
Friedel, L. A., 8
Funder, D. C., 41
G
Gael, S., 54
Gatewood, R., 98
Gatewood, R. D., 194
Gaugler, B. B., 11, 24, 41, 101, 164, 165, 187
Gentry, W. A., 194
Gleser, G. C., 188
Gniatczyk, L. A., 193
Goldstein, A. P., 100, 141, 166
Goode, E., 174, 178
Grant, D. L., 11, 116, 143
Grisez, M., 172
Gruys, M. L., 163
Guilford, J. P., 188
Guion, R., 56
Guterman, H. A., 42
H
Hacohen, D., 11
Harris, M. M., 9
Harvey, R. J., 53
Harvey, V. S., 47
Hedge, J. W., 170
Hennessey, H. W., Jr., 98
Hennessy, J., 172
Hoffman, C. C., 10
Hough, L. M., 8
Hunter, J. E., 7, 8, 10
Hunter, R. F., 7, 8
J
Jackson, D. N., 7
James, L. R., 188
K
Katz, J. A., 54, 129
Kavanaugh, M. J., 170
Kenrick, D. T., 41
Kleinmann, M., 43
Klimoski, R. J., 41
Kluger, A. N., 177
Knowles, M., 166
Kolk, N. J., 194
Koller, O., 43
Konz, A. M., 54
Krueger, R. A., 56
Kudisch, J. D., 8, 31, 32, 37, 40, 70, 72, 86, 100, 116, 158, 165, 172, 187
Kuptsch, C., 43
L
Ladd, R. T., 193
Lance, C. E., 194
Larsh, S. L., 177
Latham, G. P., 166, 174
Lawlis, G. F., 188
Lawshe, C. H., 56
Leenders, M., 70
Levine, E., 54
Levine, E. L., 53, 54
Lievens, F., 41, 193
Lombardo, M. M., 143
Lorenzo, R. V., 164
Lu, E., 188
Lundquist, C., 32, 40, 70
M
Mabey, B., 172
Macan, T. H., 172
Magnezy, R., 164
Mattimore, K., 172
Maurer, S. D., 8
Mayes, B. T., 168, 169, 193
McCall, M. W., 143
McCormick, E. J., 53
McCormick, S., 172
McCrae, R. R., 7
McDaniel, M. A., 8
Middendorf, C. H., 172
Mischel, W., 41
Morgeson, F. P., 56
Mount, M. K., 7
N
Naladka, T., 172
Nanda, H., 188
Newboldt, W. H., 194
O
Ones, D. S., 7
P
Page, B. T., 125, 166
Palmer, D. K., 9
Piasecki, M., 143
Pigors, P., 62, 132
Ployhart, R. E., 8
Pogash, R., 143
Pohley, K., 101, 165, 187
Prien, E. P., 54, 129
Pulakos, E. D., 10, 168
Pursell, E. D., 174
Q
Quiñones, M. A., 101, 165, 187
R
Rajaratnam, N., 188
Riggio, R. E., 168, 169, 193
Roberts, F. E., 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
Robie, C., 8
Robson, C., 70
Rollier, T. J., 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
Ronen, S., 11
Rosenthal, D. B., 11, 164
Rothstein, M., 7
Rotolo, C. T., 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
Rupp, D. E., 10, 33, 98, 100, 116, 129, 189, 193
Ryan, A. M., 8, 172
Rynes, S. L., 8, 10
S
Saari, L. M., 166
Sacco, J. M., 8
Sackett, P. R., 8, 194
Sagie, A., 164
Salvendy, G., 54
Saunders, D. R., 116
Schippmann, J. S., 54, 116, 129
Schleicher, D. J., 168, 169, 193
Schmidt, F. L., 7, 8, 10
Schmitt, N., 8, 10, 31, 164
Schneider, B., 54
Schneider, J. R., 31, 164
Shavelson, R. J., 188
Shrout, P. E., 188
Small, B., 8
Smith, A. E. R., 32, 40, 70
Smith, D. B., 194
Smith, F. D., 6
Snell, A. F., 8
Sorcher, M., 100, 141, 166
Spychalski, A. C., 100, 165, 187
Streufert, S., 143
Sulsky, L. M., 168
Swezey, R. W., 143
T
Taylor, P., 8
Tett, R. P., 7, 42
Thibodeaux, H. F., III, 31, 37, 72, 86, 100, 116, 158, 165, 172, 187
Thornton, G. C., III, 4, 5, 6, 8, 9, 10, 11, 13, 14, 24, 33, 37, 41, 85, 98, 100, 116, 126, 129, 151, 158, 163, 164, 169, 189, 193
Tinsley, H. E. A., 188
Tziner, A., 11
V
Viswesvaran, C., 7
W
Wand, B., 116
Warr, P., 172
Webb, N. M., 188
Weiss, D. J., 188
Wernimont, P., 6
Wexley, K. N., 174
Whetzel, D. L., 8
Woehr, D. J., 169
Wolf, G., 188
Wong, S. P., 188
Y
Yin, R. K., 70
Z
Zedeck, S., 41, 99, 111
Zickar, M. J., 8
Zorich, S., 169
Subject Index
A
Ability, 44-45
Accommodations, see Disabilities
Accuracy, see Validity
Administration guidelines, see Materials, support
Adverse impact, 8, 10, 33
Alpha, see Reliability
Analysis problem, see Case study
Applicant reactions, 8, see also Participant reactions
Assessment center, 13, 193-194
Assessment techniques, 6-10
  intelligence tests, paper and pencil, 7
  interviews, 8-9
  personality inventories, paper and pencil, 7
  work samples, 9-10
Assessors, 4, 5, 29-30, 62, 68, 69, 80-81, 95-96, 98, 126-127, 137, 139, 150-152, 163-181, 185, 186-187, 190, 196
  definition, 4
  evaluation, 179-180
  internal vs. external, see Assessors, selection
  number of, 164-165
  selection, 163-164
  training, 29-30, 68, 80-81, 95-96, 98, 126-127, 137, 139, 150-152, 163-181, 185
Attributes, see Dimensions
B
Background interview, see Interviews
Behavioral assessment, 4
Behavioral examples, 52
Behaviorally anchored rating scales (BARS), 19, 52, 68, 94-95, 98, 125, 174
Behavior observation checklists, 68, 94-95, 124-125, 172
Bias, see Adverse impact
Business games, 142-154, 156
C
Case study, 61-70, 131, 156-157, 213-214
Competencies, 52, see also Dimensions
Competency analysis, see Competency modeling
Competency modeling, 17, 23, 53-54, see also Job analysis
Complexity, 6, 90-91, see also Difficulty level
Complexity of behavior, 6
Computer, see Technology
Construct validity, see Validity
Content validity, see Validity
Cost/benefit analysis, 33, 96-97, 159
Criterion-related validity, see Validity
Critical incident method, 55
D
Day in the life exercise, 36-38, 155-160
Decision-making exercise, see Oral fact finding exercise
Development of simulations, see Simulation exercises, development of
Development, professional, see Purpose
Difficulty level, 17, 46-47, 88, 89, 119-120
Dimensions, 4, 5, 25, 41, 52, 64, 74, 88, 102, 119, 134, 145-146, 189-190, 193-194, 196, 199-203, 205-207
  definitions of, 199-203, 205-207, see also Materials, support
  examples of,
    conflict management, 102
    decision analysis, 64
    delegating, 119
    group leadership, 88
    individual leadership, 102
    interpersonal sensitivity, 88
    listening skills, 134
    oral investigation, 134
    oral presentation skill, 74
    persuasiveness, 74
    planning and organizing, 119, 145-146
    problem analysis, 64
    teamwork, 145-146
  list of, see Materials, support
  number of, 24, 41
  specifications, see Situational exercises, specifications
  type, 25
Dimension analysis, 17, 23
Disabilities, accommodation of, 47-48
E
Equal Employment Opportunity Commission, 182-183, 189
Errors, 169-170, 174-176
  observation, 169-170
  rating, 174-176
Ethical obligations, 14, 31, see also Guidelines and Ethical Considerations for Assessment Center Operations
Evaluation, 18, 20-21, 30, 32-33, 70, 83, 97-98, 109-110, 114, 127, 129, 141, 153-154, 180-181, 209-211, see also Participant reactions
  assessor, see Assessors, evaluation
  formative, 20, 30, see also Pilot testing
  psychometric, 32-33, 70, 83, 97-98, 114, 129, 141, 153-154, see also Reliability and Validity
  summative, 18, 20-21, 153
Exercise rating, 194-195
F
Face validity, see Validity
Fact finding exercise, see Oral fact finding exercise
Faking, 8, 9, 10
Fatal errors, 14, 33, 49, 53, 67, 78, 89, 103, 119, 139-140, 151, 159, 164, 184
Feedback to participants, 174, 177-178
Fidelity, 6, 34, 35-36
Focus group discussion, 56
Follow-up questions, 32, 79-80, 94, 109-110, 124, 153-154, 209-211
Formative evaluation, see Evaluation
Frame of reference training, see Training
G
Game, see Business games
Gender bias, 33, 45, 48, 49, see also Adverse impact
Generalizability study, see Reliability
Group discussion, see Leaderless group discussion
Guidelines, see Uniform Guidelines on Employee Selection Procedures
Guidelines and Ethical Considerations for Assessment Center Operations, 165, 166-167, 180, 182-183
H
Halo error, see Errors, rating
I
In-basket exercise, 115-129, 156-157, 231-235
Incident method, see Oral fact finding exercise
Individual rank order forms, 91, 93
Instructions, 43-45, 61, see also Materials, support
Integrated exercises, see Day in the life exercise
Intelligence tests, see Tests
Interaction exercise, see One-on-one interaction simulation
Inter-rater agreement, see Reliability
Inter-rater reliability, see Reliability
Interviews, see Assessment techniques
Interview simulation, see One-on-one interaction simulation
J
Job analysis, 53-54, 55, 56-57, 118, 190, 197, see also Competency modeling, Focus group discussion, Interview, and Questionnaire
  pitfalls of, avoiding, 56-57
  strategic, 54
L
Law, 182-183, 187, see also Legal challenge
Leaderless group discussion, 84-98, 157
Learning principles, 124, 166
Legal challenge, 8, 118, 174, 185-186
M
Manual preparation, 31
Materials, participant, 19, 27, 61, 65, 78, 90-93, 104-106, 120-122, 135-136, 147-148
Materials, support, 28-29, 67-68, 79-80, 93-95, 106-110, 124-126, 136-137, 148-150
Maximum performance, see Performance, maximum vs. typical
Model for construction of simulations, 16-20
Motivation, 44-45
N
Notetaking, 170-174
O
One-on-one interaction simulation, 99-114, 157, 225-229
Oral fact-finding exercise, 130-141, 157, 237-239
Oral presentation exercise, 71-83, 157, 215
Organization analysis, 17, 23, 197
Organization charts, 55, 242-243
Organizational development, see Purpose
P
Paper-and-pencil tests, see Tests
Participant reactions, 20, 30, 31-32, 69-70, 82-83, 97, 113-114, 128-129, 140-141, 153, 181, 210-211
Performance, maximum vs. typical, 43-44
Personality tests, see Tests
Phases of development of simulations, 16-33
  application, 20, 31
  construction, 19, 27-29
  pilot testing, 19-20, 29-30, see also Pilot testing
  situation analysis, 17, 21-24, see also Situation analysis
  specifications, 19, 24-27, see also Specifications
  summative evaluation, 20, 31-33, see also Evaluation
Pilot group, 19, 30
Pilot testing, 19-20, 29-30, 69, 81-82, 96, 112, 127-128, 139-140, 152, 160
Portfolio, see Oral presentation exercise
Predictive validity, see Validity
Presentation exercise, see Oral presentation exercise
Principles for the Validation and Use of Personnel Selection Procedures, 182
Production exercise, 148, see also Business games
Professional development, see Purpose
Promotion, see Purpose
Psychometric evaluation, see Evaluation
Purpose, 4, 11-14, 21-23, 63, 73, 86-87, 96-97, 101, 113, 116-118, 132-133, 144
  organizational development, 144
  professional development, 4, 12, 13
  promotion, 4, 73, 116-117
  research, 4, 13, 117-118
  selection, 4, 11-12, 63, 86-87, 96-97
  training, 4, 12, 13, 101, 113, 132-133, 141
Q
Questionnaires, 55-56
Questions, assessor, see Materials, support
R
Racial bias, see Adverse impact
Raters, see Assessors
Rating errors, see Errors
Rating scales, see Behaviorally anchored rating scales
Reactions, see Participant reactions
Reading level, see Writing tips
Reliability, 31, 32-33, 83, 97-98, 114, 116, 129, 141, 158-159, 183-184
  alpha, 32, 186
  generalizability, 188
  inter-rater agreement, 83, 98, 129, 187-188
  inter-rater reliability, 33, 83, 98, 114, 129, 186-188
  parallel forms, 33, 186
  split half, 186
  test-retest, 32-33, 186
Resource person, 4, 131, 136-137, 196
Results, feedback, see Feedback
Results, misuse of, see Ethical obligations
Role play exercise, see One-on-one interaction simulation
Role player, 4, 19, 99, 110-112, 125-126, 178-179, 185, 196
S
Samples, see Signs vs. samples
Security, test, 30, 128
Selection, see Purpose
Self presentation, see Oral presentation
Sexist language, 45
Signs vs. samples, 5-6
Simulation exercises, 3-5, 6-7, 10-11, 12-15, 193-195, 196-198
  advantages, 4, 10-11
  development of, 16-33
  model for construction, 16-20
  steps in the development of, 21-33
Situation analysis, 17, 19, 23, 51-57, 118-119
  four types, 17, 19, 23
  goals of analysis, 24
  information gathered, 51-52
  methods, 54-56
Situational interview, see Interviews
Situations, strong and weak, see Trait activation
Specifications, 17, 19, 24-26, 197
  dimensions, 19, 24-25
  simulation, 25-26
    content of exercise, 17, 25
    difficulty level, 17, 19, 25
    setting of exercise, 17, 19, 25
    type of exercise, 17, 19, 25
Standardization, 28-29, 79-80, 86, 93-95, 106-110, 124-126, 136-137, 183-186
Standards for Educational and Psychological Testing, 182, 188, 194
Steps in the development of simulation exercises, see Simulation exercises, steps in the development of
Structured interview, see Interviews
Subject matter experts (SMEs), 23-24, 55, 56, 191-193
Summative evaluation, see Evaluation
T
Task analysis, 17, 23, see also Job analysis
Technology, 37-40, 78, 123-124, 158
  advantages and disadvantages of, 39-40
  applications of, 37-38
  computer, 37, 123, 159
Tests, 4-7
  intelligence, 6, 7
  paper-and-pencil, 4-6
  personality, 6, 7
Time limits, 26-27
Tips, see Writing tips
Training, see also Purpose
  assessor, see Assessor training
  frame of reference, 95-96, 98, 127, 168-169
  resource person, 137, 139
  role players, 110-112, 126, 185
Trait activation, 41-42
Transparency, 43
Typical performance, see Performance, maximum vs. typical
U
Uniform Guidelines on Employee Selection Procedures, 182, 189
V
Validity, 31, 32, 83, 97-98, 114, 116, 118, 129, 141, 158-159, 169, 183-184, 188-195
  concurrent
  content, 32, 33, 53, 83, 129, 189-193
  construct, 32, 129, 193-195
  criterion-related, 32, 98, 129, 189
  face, 118, 183
  predictive, 83, 98, 189
Videotaping, 93, 172
W
Work samples, see Signs vs. samples
Writing tips, 45-50, 122-123
  conceptual level, 46-47
  inappropriate language, 45-46
  names, 48-49
  neutral setting, 49-50
  reading level, 46-47
  union activity, 49