People and Organizations
THE WILEY BICENTENNIAL - KNOWLEDGE FOR GENERATIONS

Each generation has its unique needs and aspirations. When Charles Wiley first opened his small printing shop in lower Manhattan in 1807, it was a generation of boundless potential searching for an identity. And we were there, helping to define a new American literary tradition. Over half a century later, in the midst of the Second Industrial Revolution, it was a generation focused on building the future. Once again, we were there, supplying the critical scientific, technical, and engineering knowledge that helped frame the world. Throughout the 20th Century, and into the new millennium, nations began to reach out beyond their own borders and a new international community was born. Wiley was there, expanding its operations around the world to enable a global exchange of ideas, opinions, and know-how. For 200 years, Wiley has been an integral part of each generation's journey, enabling the flow of information and understanding necessary to meet their needs and fulfill their aspirations. Today, bold new technologies are changing the way we live and learn. Wiley will be there, providing you the must-have knowledge you need to imagine new worlds, new possibilities, and new opportunities. Generations come and go, but you can always count on Wiley to provide you the knowledge you need, when and where you need it!
WILLIAM J. PESCE, PRESIDENT AND CHIEF EXECUTIVE OFFICER
PETER BOOTH WILEY, CHAIRMAN OF THE BOARD
People and Organizations Explorations of Human-Centered Design
William B. Rouse
WILEY-INTERSCIENCE A John Wiley & Sons, Inc., Publication
Copyright © 2007 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic format. For information about Wiley products, visit our web site at www.wiley.com.

Wiley Bicentennial Logo: Richard J. Pacifico

Library of Congress Cataloging-in-Publication Data:

Rouse, William B.
People and organizations : explorations of human-centered design / by William B. Rouse
p. cm.
Includes bibliographical references and index.
ISBN 978-0-470-09904-9 (cloth)
1. Human engineering. I. Title.
TA166.R68 2007
620.8'2--dc22
2007006003

Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
This book is dedicated to the wealth of co-authors, collaborators, customers, and compatriots who have both enabled the journey chronicled here and contributed to the insights reported.
TABLE OF CONTENTS

Preface
The Author

1. The Path of Serendipity
   Early Serendipity
   Role of Serendipity
   Human-Centered Design
   Design Objectives
   Design Framework
   Design for Success
   Overview of Book
   Human Abilities
   Human Limitations
   Supporting Humans
   Levels of Understanding
   Serendipity Revisited
   References

2. Estimation, Mental Models, and Teams
   Predictor Displays for Air Traffic
   Sources of Suboptimal Prediction
   Atmospheric Physics
   Stochastic Estimation Tasks
   Mental Models
   Fundamental Limits
   Groups and Teams
   Group Decision Making
   Aegis Team Training
   Performing Arts Teams
   Conclusions
   References

3. Processes, Networks, and Demands
   Introduction
   Processes
   Queueing Processes
   Causes of Waiting
   Staffing Information Desks
   Selecting Acquisition Sources
   Allocating Resources Across Processes
   Networks
   Impact of Technology
   Data Collection
   Case Studies
   Libraries as Networks
   Summary
   Forecasting Demands
   Conclusions
   References

4. Tasks, Errors, and Automation
   Introduction
   Multi-Tasking Decision Making
   Adaptive Aiding
   Detecting Attention Allocation
   Queueing Model of Multi-Task Performance
   Optimal Control Model of Multi-Task Performance
   Human-Computer Interaction in Dynamic Systems
   Adaptive Aiding Revisited
   Framework for Design
   Human Error
   Studies of Human Error
   Error Tolerant Systems
   Intelligent Interfaces
   Electronic Checklists
   Pilot's Associate
   Conclusions
   References

5. Failures, Detection, and Diagnosis
   Introduction
   Detection and Diagnosis Performance
   Context-Free Simulations
   Network Size, Pacing, Aiding, and Training
   Feedback and Redundancy
   Complexity
   Context-Specific Simulation
   Cognitive Style
   Measures of Performance
   Summary of Experiments
   Dynamic Process Plants
   Large-Scale Dynamic Networks
   Human Abilities, Limitations, and Inclinations
   Models of Detection and Diagnosis
   Initial Fuzzy Set Model
   Initial Rule-Based Model
   Fuzzy Rule-Based Model
   Rule-Based Model for Dynamic Processes
   Modeling Failure Detection
   Summary of Models
   Training and Aiding
   Training
   Aiding
   Summary
   Conclusions
   References

6. Displays, Controls, Aiding, and Training
   Introduction
   Systems Engineering
   Human and Organizational Aspects
   Cost/Benefit Analysis
   Modeling Human Behavior and Performance
   Modeling Mental Workload
   System Design
   Human/Machine Interaction
   Displays and Controls
   Design of Aiding
   Design of Training
   Training vs. Aiding Tradeoffs
   Design Evaluation
   Process Control and Manufacturing
   Conclusions
   References

7. Information, Knowledge, and Decision Making
   Introduction
   Information and Knowledge
   R&D Decision Making
   Design Decision Making
   Environment of Design
   Design Challenges
   Design Methods and Tools
   Implications for Support
   Management Decision Making
   Strategic Management Tools
   Online Management Services
   Information and Knowledge Support
   Conclusions
   References

8. Products, Systems, and Services
   Introduction
   Market and Product Models
   Multi-Attribute Models
   Utility Theory Models
   Quality Function Deployment
   Summary
   Product Planning Advisor
   Using the Product Planning Advisor
   Usage Guidelines
   Problem Representation
   Interpreting Results
   Typical Users of PPA
   Case Stories
   Conclusion
   Facilitating Product Planning
   Objections and Responses
   Elements of Facilitation
   Summary
   Conclusions
   References

9. Invention, Innovation, and Options
   Introduction
   Invention vs. Innovation
   Technology and Innovation
   Overview
   Purpose of R&D
   Multi-Stage Decision Processes
   Discounted Cash Flow
   Technology Options
   Option Pricing Models
   Framing Options
   Estimating Input Data
   Calculating Option Values
   Performing Sensitivity Analyses
   Example Calculations
   Technology Investment Advisor
   Case Stories
   Limitations and Extensions
   Value Streams and Networks
   R&D World
   Value-Centered R&D
   Characterizing Value
   Assessing Value
   Managing Value
   Organizing for Value
   Summary
   Technology Adoption
   Conclusions
   References

10. Challenges, Situations, and Change
   Introduction
   Essential Challenges
   Situations and Stories
   Common Business Situations
   Situation Assessment Advisor
   Understanding Change
   Organizational Delusions
   Need-Beliefs-Perceptions
   Summary
   Organizational Simulation
   Architecture of Organizational Simulation
   Implications for Strategic Management
   Applications of Organizational Simulation
   Conclusions
   References

11. Value, Work, and Transformation
   Introduction
   Defining Transformation
   Role of Theory
   Context of Transformation
   Modeling the Enterprise
   A Theory of Enterprise Transformation
   Value Deficiencies Drive Transformation
   Work Processes Enable Transformation
   Allocation of Attention and Resources
   Management Decision Making
   Social Networks
   Transformation Processes
   Summary of Theory
   Ends, Means and Scope of Transformation
   Value Deficiencies Drive Transformation
   Work Processes Enable Transformation
   Illustrations of Transformation
   Transportation
   Computing
   Contemporary Illustrations
   Conclusions
   Implications for Research
   Transformation Methods and Tools
   Emerging Enterprise Technologies
   Organizational Simulation
   Investment Valuation
   Organizational Culture and Change
   Best Practices Research
   Summary
   Experiences from Practice
   Search Technology
   Georgia Tech
   Transforming Decision Making
   Conclusions
   References

12. People, Organizations, and Serendipity
   Introduction
   Human-Centered Design
   Role of Serendipity
   Intersecting Perspectives
   Crossroads of Serendipity
   Current Crossroads
   The Legacy of Bologna
   Prospects for Academia
   Implications for a Flat World
   References

Index
PREFACE
Planning is a process of placing yourself in the path of serendipity. Better yet if you can be at the crossroads of serendipity. Prepared with a clear set of intentions and at least notional plans, you'll recognize serendipity when it happens. The process will be one of "smart" luck rather than "dumb" luck. As a result, you'll pursue your plans and achieve success in ways not previously imagined. This assertion conflicts with the idea that unrelenting execution of well-honed plans is the key to success. While that sometimes works, success often surprises us in both its nature and timing. If we are primed to notice such twists of events, we can intelligently and aggressively take advantage of serendipity. We might not "make the times," but we can take advantage of them. This book reports on a journey laced with serendipity, a journey to understand how people and organizations function, as well as to devise means for supporting them to function better. This journey has been a mixture of inching forward, striding occasionally and, in several situations, leaping to new insights and ideas enabled by serendipity. This process has involved seeing distinctions and connections across domains, disciplines, and cultures. The domains, to name a few, have included aerospace, electronics, software, and pharmaceuticals on the high-tech side, and retail, education, and non-profits on the soft-tech side. The disciplines have ranged from engineering and computing, to management and behavioral and social sciences, to architecture, humanities, and the arts. Cultures have included work in more than 30 countries in Africa, Asia, Europe, and the Americas. The distinctions and connections of interest involve people and organizations. Work with operations and maintenance personnel, researchers and designers, and senior executives in industry, government, academia, and the non-profit sectors has been the source of the many
insights and ideas discussed in this book. I acknowledge many of these people throughout the book, indicating their contributions to the journey. Serendipity emerges from differing points of view - different lenses perhaps - in terms of concepts, principles, methods, tools and, very importantly, experiences. Different disciplines, for instance, see phenomena at varying levels of aggregation and abstraction, sometimes seeing their view as the "correct" view. In the conflict between competing strong assertions, unexpected hybrids can emerge as the source of new insights and ideas. Such hybrid perspectives can be termed transdisciplinary in that they are not simply aggregations of multiple disciplinary (or multi-disciplinary) perspectives, but instead often represent an integrating framework populated with knowledge from a range of disciplines. My experience is that transdisciplinary perspectives can be invaluable in addressing complex systems problems such as healthcare, security, and the environment. This book addresses a wide range of human and organizational issues that are central to complex systems such as these. I feel very fortunate to have been able to devote 40 years to the research discussed in this book. Many people have been my mentors, teachers, guides, and colleagues in this work. I am indebted to them and will introduce many of them throughout this book. I am also pleased to acknowledge the help of my colleague Kristi Kirkland, who served as managing editor for this book and several of my earlier books.

William B. Rouse
Atlanta, Georgia
January 2007
THE AUTHOR
Bill Rouse is the Executive Director of the Tennenbaum Institute at the Georgia Institute of Technology. This university-wide center pursues a multi-disciplinary portfolio of initiatives focused on research and education to provide knowledge and skills that enable fundamental change of complex organizational systems. He is also a professor in the College of Computing and School of Industrial and Systems Engineering. His earlier positions include Chair of the School of Industrial and Systems Engineering, CEO of two innovative software companies - Enterprise Support Systems and Search Technology - and faculty positions at Georgia Tech, University of Illinois, Delft University of Technology, and Tufts University. Rouse has almost four decades of experience in research, education, engineering, management, and marketing. His expertise includes individual and organizational decision making and problem solving, as well as design of organizations and information systems. In these areas, he has consulted with more than one hundred large and small enterprises in the private, public, and non-profit sectors, where he has worked with several thousand executives and senior managers. Rouse has written hundreds of articles and book chapters, and has authored many books, including most recently Essential Challenges of Strategic Management (Wiley, 2001) and the award-winning Don't Jump to Solutions (Jossey-Bass, 1998). He is editor of Enterprise Transformation: Understanding and Enabling Fundamental Change (Wiley, 2006), co-editor of Organizational Simulation: From Modeling and Simulation to Games and Entertainment (Wiley, 2005), co-editor of the best-selling Handbook of Systems Engineering and Management (Wiley, 1999), and editor of the eight-volume series Human/Technology Interaction in Complex Systems (Elsevier). Among many advisory roles, he has served as Chair of the Committee on Human Factors of the National Research Council, a member of the U.S. Air Force Scientific Advisory
Board, and a member of the DoD Senior Advisory Group on Modeling and Simulation. Rouse is a member of the National Academy of Engineering, as well as a fellow of four professional societies -- Institute of Electrical and Electronics Engineers (IEEE), the International Council on Systems Engineering (INCOSE), the Institute for Operations Research and Management Science, and the Human Factors and Ergonomics Society. He has received the Joseph Wohl Outstanding Career Award and the Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Society; a Centennial Medal and a Third Millennium Medal from IEEE; the Best Article Award from INCOSE, and the O. Hugo Schuck Award from the American Automatic Control Council. He is listed in Who's Who in America, Who's Who in Engineering, and other biographical literature, and has been featured in publications such as Manager's Edge, Vision, Book-Talk, The Futurist, Competitive Edge, Design News, Quality & Excellence, IIE Solutions, Industrial Engineer, and Engineering Enterprise. Rouse received his B.S. from the University of Rhode Island, and his S.M. and Ph.D. from the Massachusetts Institute of Technology.
Chapter 1
THE PATH OF SERENDIPITY
This book is about people who operate, maintain, design, research, and manage complex systems, ranging from air traffic control systems, process control plants and manufacturing facilities to industrial enterprises, government agencies and universities. The focus is on the nature of the work these types of people perform, as well as the human abilities and limitations that usually enable and sometimes hinder their work. In particular, this book addresses how best to enhance abilities and overcome limitations, as well as foster acceptance of the means to these ends. This book is also about serendipity and how unforeseen connections and distinctions enable innovative approaches to problems as well as solution concepts. The serendipitous connections and distinctions discussed here stem from perspectives that cut across domains and disciplines, and sometimes cultures. This enables, for example, using understanding of the operations of library networks to develop concepts for maintaining aircraft engines. The key, by the way, is the networked nature of relationships among system elements in both domains. The possibility of serendipity can be enhanced by a willingness to ask questions such as, "How is a NASCAR pit crew like a Waffle House cooking and serving team?" Pursuit of answers to these types of questions is facilitated by adopting a transdisciplinary perspective that borrows from behavioral and social science, management, engineering, computing and so on to create cross-cutting frameworks for understanding questions and formulating answers. A wealth of ideas about people and organizations is discussed in this book. These ideas appear, at least in retrospect, to follow a fairly orderly path from individuals to teams to organizations to enterprises. Yet, as illustrated by the many stories and vignettes in the chapters that follow, the path is only smooth when looking backward. Looking forward, the ideas emerged from serendipitous insights and matured independently at first, only coming together as a coherent whole more recently.
EARLY SERENDIPITY

The picture in Figure 1 symbolizes this process. While the circle in the center of this picture may appear to be a UFO landing site, it is actually a driver-training track. When I was 14 years old, I eagerly looked forward to obtaining my driver training permit when I turned 15. In preparation, I bought a 1952 Plymouth for $35. I wanted to teach myself the basics of driving before I hit that all-important 15th birthday. There was a field behind our house, a bit of pasture owned by Mrs. Cook, who lived across the street. I asked her if I could drive my car in this field. She agreed and, over the next few months, the track shown in the picture emerged. I achieved my goal of being ready for my permit, although my experience was limited to first and second gears, as I could never get going fast enough to get into third.
Figure 1. One Path of Serendipity
While all this was happening, an aerial survey was being taken of our town (Portsmouth, RI) in preparation for a new highway. The survey captured my track and my aunt, Nancy Lantz, a newspaper reporter, got me a copy of the picture. Only then did I fully realize the pattern I had created. Prior to that recognition, I simply viewed the field as the place I practiced driving. The pattern of the track helped me to understand the role my first car played in my earliest research ventures - this picture now hangs prominently in my office to remind me. For instance, I was able to figure out how to drive downtown without going on any public roads, since 14-year-olds weren't supposed to be driving. It was a very bumpy route through fields and along abandoned streetcar tracks. All the bumping resulted in many needed repairs. I accomplished some of these repairs using parts from my Erector Set because I could not afford to buy spare parts. I learned a lot from that 1952 Plymouth because it kept on breaking. Within six months, a 1949 Chevrolet, also purchased for $35, replaced the Plymouth, and my lessons continued. This was serendipity at its best, and certainly was one of the reasons I later pursued engineering. More recently, these experiences have contributed to the great fun we are having studying the automobile industry, as discussed in Chapter 11.
ROLE OF SERENDIPITY

The research discussed in the following chapters clearly illustrates the substantial role serendipity has played in this work. This was not fully apparent to me until I began to study strategy and planning more formally in the 1980s, continuing into the 1990s. In Best Laid Plans (1994), and later Don't Jump to Solutions (1998), I concluded that planning is a process of placing yourself in the path of serendipity. The basis of this conclusion was my own experiences as well as numerous discussions with very successful people and organizations. Success seldom emerges exactly as planned. I have often heard that technologies were most successful in ways other than expected, often in markets that were not anticipated. The stories of Post-Its and Super Glue are familiar examples. There are countless stories such as these (Burke, 1996; Evans, 2004). This conclusion has frequently been countered with questions such as, "Why bother to plan at all if it's all serendipity?" The answer is that plans
are needed to be able to recognize serendipity - to assure "smart" luck rather than having to accept "dumb" luck. Prepared with a clear set of intentions and at least notional plans, you'll recognize serendipity when it happens. As a result, you'll pursue your plans and achieve success in ways not previously imagined. The people I have talked with about this phenomenon were usually pursuing clear goals with great energy when it struck them that there was a better way of achieving success, typically with slightly altered goals and refined plans. Perhaps the unfortunate fact that the glue will not permanently stick is actually a benefit - hence, Post-Its. Perhaps the inopportune fact that the gunk happens to create an almost unbreakable bond is actually a benefit - hence, Super Glue. This book is focused much more on research to understand people and organizations than it is about product innovations, although see Chapters 6, 8, and 9. Serendipity in research involves recognizing new phenomena, seeing old phenomena in new ways, and identifying new concepts, principles, methods, and tools for addressing phenomena. In essence, serendipity in research involves the unexpected emergence of new ways of thinking about something. There are many sources of serendipity and ways to place yourself in the path of serendipity. One way is to work across domains. My domain experiences range from the aerospace and electronics industries, to consumer products and retailing companies, to government agencies and non-profit organizations. The contrasts across this range of domains have yielded many serendipitous insights. For example, the heterogeneity of the constituencies in a domain - the stakeholders - seems to strongly impact decision-making processes and how these processes can best be supported. Another way to foster serendipity is to work across disciplines. By disciplines, I mean engineering, computing, management, behavioral and social sciences, law, medicine, architecture, humanities, arts, and so on. Different disciplines often bring very different perspectives to problem solving and decision making, both in terms of formulation and solution. Understanding these differences can lead to new, transdisciplinary formulations. For instance, our ongoing research on modeling work and workflow has benefited from the contrasting approaches of management, engineering, computing, and architecture, and the ways they address business processes and flows of physical objects, information, and people, respectively. A third way to foster serendipity is to cross cultures. The research discussed in this book has been pursued in the contexts of more than 30
countries, usually with colleagues from these countries. The contrasts between Western and Eastern cultures, as well as the contrasts between developed and developing countries, have provided many useful insights. For example, it seems that the role of the family in a culture affects entrepreneurship in particular and risk taking in general. Thus, in one initiative, we found that simply providing venture capital was not sufficient to overcome risk-averse attitudes - support was needed to help would-be entrepreneurs to better understand and manage risks. Most of the serendipitous insights discussed in this book emerged from crossing domains (e.g., libraries and electronics), disciplines (e.g., technology and arts), and cultures (e.g., developed and developing countries). These "border crossings" placed us in the path of serendipity and new insights were found on this path. An overarching theme of this book is the value of such border crossings.
HUMAN-CENTERED DESIGN

The primary theme of this book is human-centered design (Rouse, 1991, 2001). Human-centered design is a process of assuring that the concerns, values, and perceptions of all stakeholders in a design effort are considered and balanced. Human-centered design can be contrasted with user-centered design (Billings, 1996; Booher, 2003; Card, Moran & Newell, 1983; Norman & Draper, 1986). The user is a very important stakeholder in design, often the primary stakeholder. However, the success of a product or service is usually strongly influenced by other players in the process of design, development, fielding, and ongoing use of products and services. Human-centered design is concerned with the full range of stakeholders. This broad view of human-centered design emerged for me in the late 1980s as our research on intelligent interfaces for aircraft pilots (see Chapter 4) matured and we pursued plans for integrating this concept into production aircraft. Our intelligent interface concept was clearly pilot centered. However, we soon discovered that "pilots may fly 'em, but they don't build 'em or buy 'em." We needed to pay much more attention to the aircraft manufacturers, the airlines, and various regulatory bodies if we wanted to get our idea on airplanes. This led us to identify and understand the various stakeholders who affect the design, development, manufacture, operation, and regulation of airplanes and their operations.
Design Objectives

There are three primary objectives within human-centered design. These objectives should drive much of designers' thinking, particularly in the earlier stages of design. Many discussions in later chapters illustrate the substantial impact of focusing on these three objectives. The first objective of human-centered design is that it should enhance human abilities. This dictates that humans' abilities in the roles of interest be identified, understood, and cultivated. For example, people tend to have excellent pattern recognition abilities. Design should take advantage of these abilities, for instance, by using displays of information that enable users to respond on a pattern recognition basis rather than requiring more analytical evaluation of the information. The second objective is that human-centered design should help overcome human limitations. This requires that limitations be identified and appropriate compensatory mechanisms be devised. A good illustration of a human limitation is the proclivity to make errors. Humans are fairly flexible information processors -- an important ability -- but this flexibility can lead to "innovations" that are erroneous in the sense that undesirable consequences are likely to occur. One way of dealing with this problem is to eliminate innovations, perhaps via interlocks and rigid procedures. However, this is akin to throwing out the baby with the bath water. Instead, mechanisms are needed to compensate for undesirable consequences without precluding innovations. Such mechanisms represent a human-centered approach to overcoming the human limitation of occasional erroneous performance. The third objective of human-centered design is that it should foster human acceptance. This dictates that stakeholders' preferences and concerns be explicitly considered in the design process. While users are certainly key stakeholders, there are other people who are central to the process of designing, developing, and operating a system. For example, purchasers or customers are important stakeholders who often are not users. The interests of these stakeholders also have to be considered to foster acceptance by all the humans involved.
Design Framework

We soon found that human-centered thinking could be applied in a wide range of domains. The process was formalized in terms of the framework
shown in Figure 2 involving four phases and specific issues, methods, and tools (Rouse, 1991, 2001). These phases support the above design objectives by first focusing on the full range of humans involved in the success of the solution being designed, and then emphasizing how abilities can be enhanced, limitations overcome, and acceptance fostered. The Naturalist Phase involves understanding the domains and tasks of stakeholders from the perspective of individuals, the organization, and the environment. This understanding not only includes stakeholders’ activities, but also prevalent values and attitudes relative to productivity, technology, and change in general. The Marketing Phase - or market research phase - builds upon the understanding of the domains and tasks of current and potential stakeholders to conceptualize alternative products or services to support these people. Product or service concepts can be used for initial marketing to determine whether or not people perceive a product or service concept as solving an important problem, solving it in an acceptable way, and solving it at a reasonable cost.
Figure 2. Framework for Human-Centered Design
The Engineering Phase addresses tradeoffs between desired conceptual functionality and technological reality. As indicated in Figure 2, technology development will usually have been pursued prior to and in parallel with the Naturalist and Marketing Phases. This will have at least partially assured that the concepts shown to stakeholders were not technologically or economically ridiculous. However, one now must be very specific about how desired functionality is to be provided, what performance is possible, and the time and dollars necessary to provide it. As the Sales and Service Phase begins, the product or service should have successfully been developed and evaluated, that is, proven to be a "verified" solution. The focus is now on the extent to which it solves the targeted problem (i.e., is a valid solution), solves it in an acceptable way, and does so at reasonable cost (i.e., is a viable solution). It is also at this point that one assures that implementation conditions are consistent with the assumptions underlying the design basis of the product or service. It is important to indicate the role of technology in the human-centered design process. As depicted in Figure 2, technology is pursued in parallel with the four phases of the design process. In fact, technology feasibility, development, and refinement usually consume the lion's share of the resources in a product or service design effort. However, technology should not drive the design process. The aforementioned human-centered design objectives should drive the process and technology should support these objectives.

Design for Success

As the human-centered design methodology matured and was applied in a wide range of domains (see Chapter 8), we conducted numerous workshops. The workshop notes evolved into a book manuscript that was to be titled "Human-Centered Design." Upon concluding a workshop at NASA where this material was used, several participants approached me and said that they thought the tentative book title was a poor choice. Further, they argued that the human-centered methodology could be used to design virtually anything, as all products and services have stakeholders, even those products that have no real "users" in the sense of anyone interacting with them. They suggested that the book be titled "Design for Success." When I returned from this trip, I called my editor at John Wiley and he readily agreed that this new title would be much more descriptive and catchy. In this way, a serendipitous exchange in the waning moments of a workshop fundamentally repositioned a product (the book) in a broader market.
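Before turning to an overview of the book, the four phases of Figure 2 can be summarized compactly. The sketch below is illustrative only - a simple Python data structure, not drawn from the book's own materials - pairing each phase with a paraphrase of the questions the preceding discussion associates with it.

```python
# Illustrative summary of the four-phase human-centered design framework
# (Figure 2), with questions paraphrased from the discussion above.

PHASES = {
    "Naturalist": [
        "What are the stakeholders' domains, tasks, and activities?",
        "What values and attitudes prevail toward productivity, technology, and change?",
    ],
    "Marketing": [
        "Does the product or service concept solve an important problem?",
        "Does it solve the problem in an acceptable way, at a reasonable cost?",
    ],
    "Engineering": [
        "How will the desired functionality actually be provided?",
        "What performance is possible, and at what time and dollar cost?",
    ],
    "Sales and Service": [
        "Is the verified solution also valid, acceptable, and viable?",
        "Are implementation conditions consistent with the design assumptions?",
    ],
}

for phase, questions in PHASES.items():
    print(phase)
    for question in questions:
        print("  -", question)
```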
OVERVIEW OF BOOK

As shown below, Chapters 2-11 each address what I term an essential phenomenon and a central question:

2. Estimation: How to assess what is happening, has happened, or will happen?

3. Queueing: How to allocate resources to provide efficient service?

4. Control: How to determine what task should be performed next?

5. Diagnosis: How to determine the cause of observed symptoms?

6. Design: How to synthesize human-centered solutions?

7. Information: How to identify sources and retrieve valuable information?

8. Stakeholders: How to consider and balance stakeholders' interests?

9. Future: How to consider future uncertainties?

10. Challenges: How to address essential management challenges?

11. Transformation: How to address and pursue fundamental change?
These essential phenomena played central motivational roles as the research on these phenomena was pursued. I can recall discussions of how estimation (e.g., predicting future states of a system) is a central aspect of being human, distinguishing us from other species. I can remember similar discussions of control (deciding what to do next) and diagnosis (deciding what's wrong). The notion of an essential phenomenon implied that our research was central to understanding the essence of being human. This made the research very compelling.
Table 1 describes the essential phenomena of Chapters 2-11 in terms of the three human-centered design objectives discussed earlier. Examples of human abilities and limitations are indicated, as are possible approaches to support, that is, enhancing abilities and overcoming limitations. In the following subsections, abilities, limitations, and support are summarized across the ten essential phenomena.
Human Abilities

The human abilities column of Table 1 includes three broad types of abilities. First, people are good at recognizing familiar patterns and mapping from these familiar patterns to past successful responses. Consequently, they tend to be good at running an "as is" system - for example, vehicle, process, or enterprise - to achieve familiar objectives. People tend to have a wealth of "common sense" that is invaluable in situations where this knowledge and these skills apply. Second, when they need to consider alternatives - for instance, when recognition-primed decision making (Klein, 1998, 2002) is not sufficient - they are often good at articulating their interests, defining the attributes of these interests, and specifying the relative importance of interests and attributes. However, their abilities tend to be limited to making pairwise comparisons of alternatives, as opposed to being able to compare many alternatives simultaneously. Thus, they tend to satisfice (Simon, 1969) and make reasonable rather than optimal choices. Third, when new alternatives are needed, people are often quite good at envisioning the possibilities, formulating new ideas, and imagining the consequences of these ideas. They are also often good at articulating a vision, and leading people to pursue this vision. Sometimes, but not always, they are good at recognizing when the vision needs to change to better fit the evolving environment, for example, production needs, competitors, or the weather. These three types of abilities are among the primary reasons why humans are included as elements of complex systems. Common sense, abilities to abstract, inventiveness, and communication abilities are almost always very important in complex domains. Thus, human abilities are usually essential. However, the information processors (humans) with these abilities also have limitations that tend to be central to the "costs" of enjoying the benefits of human information processing.
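Simon's notion of satisficing can be made concrete with a small illustrative sketch. The Python below is not from any study discussed in this book - the alternatives, attribute names, weights, and aspiration level are all hypothetical - but it shows the contrast between exhaustively comparing all alternatives and accepting the first one that meets an aspiration level.

```python
# Illustrative sketch (hypothetical data): weighted-additive scoring of
# multi-attribute alternatives, contrasting optimizing with satisficing.

alternatives = {
    "B": {"performance": 0.6, "cost": 0.8, "risk": 0.8},
    "C": {"performance": 0.7, "cost": 0.7, "risk": 0.6},
    "A": {"performance": 0.9, "cost": 0.4, "risk": 0.7},
}
weights = {"performance": 0.5, "cost": 0.3, "risk": 0.2}  # hypothetical

def value(attrs):
    """Weighted-additive value of one alternative."""
    return sum(weights[a] * attrs[a] for a in weights)

# Optimizing: evaluate every alternative and take the best.
optimal = max(alternatives, key=lambda name: value(alternatives[name]))

# Satisficing (Simon): accept the first alternative whose value meets
# an aspiration level, without comparing all options simultaneously.
def satisfice(aspiration=0.7):
    for name, attrs in alternatives.items():
        if value(attrs) >= aspiration:
            return name
    return None  # no alternative meets the aspiration level

print("optimal:", optimal, "satisficed:", satisfice())
```

With these made-up numbers, the satisficing rule stops at the first acceptable alternative (B) even though exhaustive comparison would find a slightly better one (A) - a reasonable rather than optimal choice, in Simon's sense.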
Table 1. Phenomena, Questions, Abilities, Limitations, and Approaches to Support

Chapter 2 - Estimation: How to assess what is happening, has happened, or will happen?
  Abilities: Good at recognizing familiar patterns and mapping to action.
  Limitations: Inaccurate mental models and perceptions of the state of the process.
  Support: Stochastic forecasting models and displays of filtered, smoothed, and predicted states.

Chapter 3 - Queueing: How to allocate resources to provide efficient service?
  Abilities: Good at specifying attributes and making pairwise comparisons.
  Limitations: Difficult to trade off multiple attributes in designing service.
  Support: Stochastic process models and multi-attribute decision analysis methods.

Chapter 4 - Control: How to determine what task should be performed next?
  Abilities: Good at satisficing and making reasonable choices.
  Limitations: Poor at switching among tasks and performing several tasks at once.
  Support: Adaptive aiding to perform tasks when task load becomes excessive.

Chapter 5 - Diagnosis: How to determine the cause of observed symptoms?
  Abilities: Good at recognizing familiar patterns and mapping to past actions.
  Limitations: Difficult to identify the best test and determine implications of test results.
  Support: Aiding to see implications of available information to choose most diagnostic tests.

Chapter 6 - Design: How to synthesize human-centered solutions?
  Abilities: Good at envisioning and formulating ideas for new solutions.
  Limitations: Difficult to consider and balance numerous design attributes.
  Support: Frameworks, methods, and tools for systematically identifying and addressing issues.

Chapter 7 - Information: How to identify sources and retrieve valuable information?
  Abilities: Good at recognizing valuable information once retrieved.
  Limitations: Difficult to specify the attributes of valuable information.
  Support: Aiding in tailoring and use of search tools, as well as compilation of results.

Chapter 8 - Stakeholders: How to consider and balance stakeholders' interests?
  Abilities: Good at specifying interests and importance of associated attributes.
  Limitations: Difficult to deal with stakeholders' differing and conflicting interests.
  Support: Multi-stakeholder, multi-attribute models that enable tradeoffs and decisions.

Chapter 9 - Future: How to consider future uncertainties?
  Abilities: Good at imagining alternative futures and possible consequences.
  Limitations: Difficult to consider future contingencies and specify long-term returns.
  Support: Decision models that provide economic assessments of the value of contingencies.

Chapter 10 - Challenges: How to address essential management challenges?
  Abilities: Good at running the "as is" business to achieve familiar objectives.
  Limitations: Tendency to be tactical rather than strategic and too focused to see the situation.
  Support: Toolkits that enable systematic addressing and pursuit of the essential challenges.

Chapter 11 - Transformation: How to address and pursue fundamental change?
  Abilities: Good at articulating a vision and leading people in pursuing this vision.
  Limitations: Difficult to recognize forces for change and then commit to change.
  Support: Methods that address value deficiencies, work processes, decisions, and social networks.
Human Limitations

Humans have difficulty perceiving variables accurately - that's why we provide pilots instruments. However, in general, they tend to have inaccurate perceptions of system states, including past, current, and future states. This is due, in part, to limited "mental models" of the phenomena of interest in terms of both how things work and how to influence things. Consequently, people have difficulty determining the full implications of what is known, as well as considering future contingencies for potential system states and the long-term value of addressing these contingencies. The implications for supporting humans are discussed in the next subsection. When there is a need to consider multiple alternatives, people have difficulty trading off multiple attributes across the multiple alternatives. They tend to want to think in terms of the direct attributes of the alternatives rather than the attributes of the value of the alternatives. These difficulties often cause people to satisfice rather than optimize, in part because the nature of the phenomena (e.g., highly nonlinear) may make optimization difficult, but also because people find it difficult to consider many things simultaneously. They also tend to be poor at multi-tasking, particularly for high demand situations such as emergencies and other types of crises. For domains where there are multiple types of stakeholders with differing and perhaps conflicting interests, people often have difficulty addressing these differences and conflicts. They tend to focus tactically rather than strategically and not see broader situations that could inform the resolution of tradeoffs and conflicts. People often have difficulty seeing forces for change, as opposed to needs to restore the status quo. Consequently, they may have difficulty committing to change and sustaining such commitments. The above "costs" of including humans as key elements of complex systems might lead one to pursue approaches to automating human abilities to avoid these types of costs. Alternatively, one can pursue means of supporting humans to overcome these limitations and, thereby, enjoy the benefits of humans' abilities without the costs of human limitations. The next subsection considers this possibility.
Supporting Humans

An obvious way to support humans is to provide them information - hence, the aforementioned pilot's instruments. Information can be provided at varying levels of aggregation and abstraction (Rasmussen, 1986, 1994), for instance, temperature measurements at regular points along the plant's pipes vs. energy flow throughout the power plant. One might display estimated future states, for example, using "predictor" displays for air traffic control. Another example is a display of information search results, including relationships among results and their implications. At a deeper level, one might display underlying processes and employ various models to infer underlying states and the bases of these states. Examples include time series forecasting models, queueing process models, business process models, and value stream representations. Network models might be used to portray, for example, queueing networks or perhaps social networks. Such models could be used to "drive" displays of abstractions of evolving systems phenomena. The models themselves might also be useful in overcoming the human limitation of not being able to understand the underlying complexity of a system without appropriate abstractions. For example, libraries are not simply networks of queues of people and materials, but portraying libraries in this way can facilitate understanding and managing the complexity of libraries' business processes (see Chapter 3). Another type of support involves helping people decide what to do - what choice to make and how to allocate resources. There is a wide range of models for addressing economic uncertainty, as well as related models for dealing with multi-stakeholder, multi-attribute decision making situations. There are both prescriptive and descriptive models for these types of problems, addressing what people should do and what people naturally tend to do, respectively. Such models can be used directly or incorporated into decision aids that embody these models. Yet another type of support addresses task performance. Aiding can be employed to perform part of a task, to sometimes perform the whole task (adaptive aiding), or to always perform the whole task (automation). One can think in terms of a range of levels of automation, from none to complete with several levels in between (Sheridan, 1992, 2002). One often needs to provide aiding in use of aiding, for example, to assure that search tools provide the greatest value. Chapters 4 and 7 address task performance aids. Humans can also be supported by methodologies that provide frameworks, methods, and tools in domains such as research, design, and management. Use of these methodologies can be facilitated via toolkits that provide alternative models, methods, and tools for targeting specific problems. Methodologies and toolkits can greatly facilitate the "data-driven decision making" discussed in Chapters 8-10. For all of these types of support, there is another dimension - aiding vs. training. Aiding involves directly augmenting human performance while training is focused on enhancing the potential to perform. Given any particular knowledge or skill, the question arises of whether the humans in question need to know it or simply know how to do it. Perhaps they do not need to understand nuclear engineering, but do need to be able to operate nuclear plants, for instance. This question is pursued in more detail in Chapter 6.
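To make the first kind of support concrete - driving a display from filtered and predicted state estimates - the following sketch implements a standard alpha-beta tracker, a common simplification of the Kalman filter. It is illustrative only: the gains and the noisy measurements are hypothetical, not taken from any of the studies discussed in this book.

```python
# Illustrative alpha-beta tracker: smooths noisy position measurements and
# extrapolates a predicted state - the kind of estimate a "predictor"
# display could present to an operator.

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Return (filtered, predicted) position estimates per time step."""
    x, v = measurements[0], 0.0   # initial position and velocity estimates
    filtered, predicted = [], []
    for z in measurements[1:]:
        x_pred = x + v * dt       # extrapolate ahead one step
        residual = z - x_pred     # innovation: measurement minus prediction
        x = x_pred + alpha * residual   # corrected (filtered) position
        v = v + (beta / dt) * residual  # corrected velocity
        filtered.append(x)
        predicted.append(x + v * dt)    # what a predictor display would show
    return filtered, predicted

# Hypothetical noisy observations of an object moving at about 1 unit/step.
zs = [0.0, 1.2, 1.9, 3.1, 3.8, 5.2, 6.1]
filt, pred = alpha_beta_track(zs)
print("filtered: ", [round(x, 2) for x in filt])
print("predicted:", [round(x, 2) for x in pred])
```

The design point is that the operator need not see the raw, jittery measurements at all; the display can present the smoothed state and its one-step-ahead extrapolation, compensating for the human difficulty in perceiving and projecting system states noted above.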
Levels of Understanding

This book is about people and organizations in terms of how to understand them and how to support them. At one level, understanding can be expressed in terms of data about phenomena, for instance, human perception of visual displays in particular circumstances. This level of understanding may enable predicting human performance in similar circumstances. To some extent, you predict by reviewing the tabulated results and, hopefully, interpolating to match the conditions of interest. Another level of understanding enables control of the phenomenon of interest. Rather than predict what humans will do, you cause the outcomes that you desire via particular inputs and environmental conditions. I have long felt that the ability to control tends to represent a higher level of understanding than the ability to predict. Put simply, if you cannot influence a phenomenon, you do not fully understand it. Yet a higher level of understanding involves the ability to design the phenomenon to behave as you desire. While controlling an aircraft's trajectory represents a higher level of understanding than predicting its trajectory, designing an aircraft to be controllable represents yet a higher level of understanding. The engineer in me does not feel that you really understand a phenomenon until you can design a means of enhancing that phenomenon. Consequently, this book not only addresses the essential phenomena listed in Table 1. The lion's share of the attention is devoted to discussion and illustration of how to enhance human abilities and overcome human limitations via a wide range of support mechanisms that my colleagues and I have designed, developed, deployed - either as contract deliverables or off-the-shelf products sold in the marketplace - and supported in use by hundreds of enterprises and thousands of users.

As articulated earlier, an overarching theme of this book is the human-centered design of these systems, products, and services. Throughout the following chapters, there is much discussion of the basic and applied research that supported these design efforts, including summaries of what we learned about people and organizations. It is important to keep in mind, however, that our overall goal was not just prediction - we also wanted to control and, ultimately, design.
SERENDIPITY REVISITED

I have portrayed human-centered design as the systematic pursuit of understanding of human abilities and limitations, as well as development and deployment of means for enhancing abilities and overcoming limitations. From this perspective, human-centered design may seem to be a very predictable process that inevitably leads to success. To some extent, this is true. However, the results of this process often are surprising. The reason for such surprises is that serendipity intervenes to take you in another direction, offer an alternative solution, or possibly provide a novel interpretation. From this perspective, this book is a story of how one thing leads to another, in this case, in the context of understanding and supporting people and organizations. For instance, our studies of library networks (see Chapter 3) led us quite naturally to drawing network diagrams, often on large white boards in offices or meeting rooms. These diagrams served as the basis for discussions of finding optimal routes for particular types of library services. It occurred to us that the properties of the network might not be static. For example, a node might be "up" or "down" at any particular point in time. This led to the question, "How would we know?" At that moment, our research thrust in fault diagnosis emerged (see Chapter 5), although we did not realize this for some time. We next generated many random networks and studied how to determine whether a node was down. The process of generating and displaying networks was very slow as the computer searched for optimal routes. I commented on this to a colleague in electrical engineering as we met at the coffee urn. He said that he was not surprised, as we were trying to solve the general circuit routing problem.
My inclination was to run off and take a look at that literature. However, it struck me that the structure of our "circuits" (as we came to call them) was such that the optimal routes could only have five possible shapes. Rather than searching all possible routes, we could just try variations of these five shapes and pick the best one. Programming this approach led to almost-instantaneous generation and display of networks that supported a long series of studies. Serendipity emerged by crossing domains (library networks and technical networks) and disciplines (industrial and electrical engineering). The result was a new line of research and an idea that was key to making the research feasible. The path of serendipity, in this case, was in the Coordinated Science Laboratory at the University of Illinois, a place where many disciplines mingled at coffee urns and seminars. The characteristics of such places are revisited in Chapter 12. Another example emerged from an invitation to give a lecture at Los Alamos National Laboratory. I was CEO of Search Technology at the time and one of our largest initiatives was the intelligent interface for the Pilot's Associate, an artificially intelligent co-pilot that we were developing as a member of Lockheed's team (see Chapter 4). Another much smaller effort was focused on understanding fundamental limits of modeling behavioral phenomena (see Chapter 2). I suggested to Chris Barrett, my Los Alamos host, that I could talk on either topic. He responded by asking me to combine both topics and talk on fundamental limits of intelligent interfaces. Having no idea what this would really lead to, I agreed and then had to invest much time into combining the two streams of thought. This effort paid off, however, as a whole new research thrust emerged (see Chapter 2). The path of serendipity in this case ran through Search Technology, Lockheed, and Los Alamos. The chapters in this book move systematically from studies of operations and maintenance (Chapters 2-6) to studies of research and design (Chapters 7-9) to studies of strategic management (Chapters 10-11). As well planned and smooth as this path may appear, it was laced with serendipitous insights and transitions. The shift to understanding the process of research, its impact on design, and then design itself grew out of frustrations with getting the applied world to the laboratory door (Rouse, 1985). Specifically, Ken Boff of the Air Force Research Laboratory and I, initially independently, were struck by how long it takes for research to impact practice, if it does at all. We joined forces in the early 1980s to
study how best to package research results for designers. We quickly became immersed in studies of how designers work, how they seek and consume information, and the forces and motivations for their behaviors. (See Chapters 6 and 7 for discussions of these studies.) One of our findings, not surprisingly, was that business considerations, rather than technical considerations, often drive the design issues considered and how these issues are addressed and resolved (see Chapter 8). Perhaps inevitably, the enterprises we were helping address their design processes asked us to also look at their management processes. They often noted that they were attracted to the human-centered design process we were advocating, but this process did not dovetail well with their business processes. This eventually led to the formation of a new company - Enterprise Support Systems - focused on business strategy, market assessment, product planning, and technology strategy. Our methods and tools supported decision making by executives and senior managers. A significant level of consulting services was provided with these methods and tools, mostly for Fortune 500 companies and selected government agencies. (See Chapters 8-11 for discussions of these initiatives.) The path of serendipity, in this case, led through the Air Force, NASA, Boeing, Motorola, 3M, Lockheed, DARPA, Raytheon, Rover, Coca-Cola, Abbott Labs, Honeywell, Hughes, Hitachi, and so on. We worked with more than 100 enterprises in the private and public sectors. Several thousand executives, senior managers, and designers continually asked us questions - and asked for help. Our dialogues, including arguments and debates, with these people led to many of the new directions, new ideas, and new methods and tools discussed in this book. From a research perspective, the path of serendipity has led to a wide variety of studies of people and organizations, ranging from high tech to low tech, from technology to the arts. At a recent seminar where I talked about our research on teamwork in the performing arts (see Chapter 2), I was asked, "Why on earth would an engineer research such a topic?" Thinking quickly, I responded by saying that I have always found it important to have three types of research initiatives in my portfolio. First, I like to have at least one mainstream research topic. Our recent work in investment valuation probably fits there (see Chapter 9). Second, I like to be working in an emerging area where everyone agrees it will be an important area, but it is not yet clear what form it will take. Our work in organizational simulation fits there (see Chapter 10). Third, and finally, I
always like to be doing some research that causes people to say, "What the hell are you doing that for?" The third category is often completely unsupported, both by sponsors and colleagues. Yet this work usually seems the most creative and, several years later, has provided the foundation for well-supported initiatives. The path of serendipity can be like a magnet, attracting your interest despite lack of clarity and support. My experience is that great things can and will happen - you just do not know what they are.

I began this chapter with an illustration of early serendipity - how a beat-up 1952 Plymouth needing constant repairs led to the 40 years of research discussed in this book. Another example of early serendipity was my first engineering job as an assistant engineer at Raytheon during my junior and senior years as an undergraduate. My supervisor was Alec Garnham, an English engineer who was a British pilot during World War II and a heavy smoker such as you seldom encounter any more. Alec always helped me to see the bigger picture. He took a "systems perspective" before I knew what that meant. Our work focused on sonar systems in submarines, but Alec was able to put that in the perspective of the whole submarine, not just the sonar. During my two years at Raytheon, I worked in electrical engineering, mechanical engineering, systems engineering, reliability and maintainability engineering, and even bid and proposal operations, always with Alec as my mentor.

Transdisciplinary perspectives emerge from having to look at the whole problem, because that is the assignment and/or because you are inherently oriented that way. I have always wanted to understand the broader context. I found that operations and maintenance function within the context of design, and that design operates in the context of business. Of course, business, in turn, operates in the context of the economy and society. These broader contexts are not organized in terms of disciplines. They are holistic phenomena that, occasionally, can be addressed by reducing them to constituent parts. More often, however, key phenomena are not evident among the parts - they only emerge from the whole. Consequently, we have to address the whole problem.

Transdisciplinary perspectives provide the basis for transforming your thinking about the whole problem and moving beyond just optimizing the elements of various subproblems. I truly believe that such thinking is key to addressing national and international issues such as health care, security, and the environment. I intend this book as a contribution to advancing this point of view.
Transdisciplinary perspectives, when combined with an openness to enlightenment via serendipity, can be a powerful means of coping with complexity.
REFERENCES

Billings, C.E. (1996). Aviation automation: The search for a human-centered approach. Mahwah, NJ: Erlbaum.

Booher, H.R. (Ed.). (2003). Handbook of human systems integration. New York: Wiley.

Burke, J. (1996). The pinball effect: How Renaissance water gardens made the carburetor possible and other journeys through knowledge. Boston: Little, Brown.

Card, S.K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Mahwah, NJ: Erlbaum.

Evans, H. (2004). They made America: From the steam engine to the search engine: Two centuries of innovation. New York: Little, Brown.

Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.

Klein, G. (2002). Intuition at work: Why developing your gut instincts will make you better at what you do. New York: Currency.

Norman, D.A., & Draper, S.W. (Eds.). (1986). User centered system design: New perspectives on human-computer interaction. Mahwah, NJ: Erlbaum.

Rasmussen, J. (1986). Information processing and human-machine interaction. New York: Elsevier.

Rasmussen, J., Pejtersen, A.M., & Goodstein, L.P. (1994). Cognitive systems engineering. New York: Wiley.

Rouse, W.B. (1985). On better mousetraps and basic research: Getting the applied world to the laboratory door. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(1), 2-8.

Rouse, W.B. (1991). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.
Rouse, W.B. (1994). Best laid plans. New York: Prentice-Hall.

Rouse, W.B. (1998). Don't jump to solutions: Thirteen delusions that undermine strategic thinking. San Francisco, CA: Jossey-Bass.

Rouse, W.B. (2001). Human-centered product planning and design. In G. Salvendy (Ed.), Handbook of industrial engineering (3rd Edition, Chapter 49). New York: Wiley.

Sheridan, T.B. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.

Sheridan, T.B. (2002). Humans and automation: System design and research issues. New York: Wiley.

Simon, H.A. (1957). Models of man: Social and rational. New York: Wiley.

Simon, H.A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Chapter 2
ESTIMATION, MENTAL MODELS, AND TEAMS
The ability to predict is important to driving a car, catching a ball, and getting through everyday life. There are some tasks where the need to predict is central. For example, Figure 1 illustrates the share price of a company for the twenty-year period 1986-2005. This stock appears headed up. Perhaps 2006 is a good time to buy. On the other hand, 1992-1995 looked just as good and then the share price dropped by 50%. Maybe this is a good time to wait.
[Figure 1 is a plot titled "Corporate Share Price," showing a share price history over 1986-2005.]

Figure 1. Typical Share Price History
This chapter focuses, initially at least, on people's abilities to predict the future of dynamic time series such as shown in Figure 1. Prediction is a type of estimation task. Another estimation task is filtering. Filtering asks the question of whether the 2005 share price of $50 was the "true" share price. Perhaps the 2005 price was "corrupted" by external events such as a natural disaster and, therefore, does not represent an accurate basis for thinking about 2006 and beyond. Yet another estimation task is smoothing. Smoothing asks the same question as filtering except it addresses the whole time series in Figure 1. Smoothing is done to remove the "noise" from the data and thereby provide a better basis for decision making.

It should not be surprising that filtering, smoothing, and prediction are intimately related to each other. When approached mathematically, the equations for each task have similar components. Thus, the essential phenomenon for this chapter is estimation, which includes:

• Filtering - What is happening?
• Smoothing - What has happened?
• Prediction - What will happen?
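To make these three tasks concrete, consider a minimal sketch in Python of a noisy time series being filtered, smoothed, and predicted. The particular estimators - an exponential filter, a centered moving average, and a two-point linear extrapolation - are illustrative choices only, not the models discussed later in this chapter.

    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = np.cumsum(rng.normal(0.0, 1.0, 200))   # latent "true" process
    observed = true_signal + rng.normal(0.0, 2.0, 200)   # observations corrupted by noise

    def exponential_filter(y, alpha=0.3):
        """Filtering: estimate what is happening now, using only past and present data."""
        est = np.empty_like(y)
        est[0] = y[0]
        for t in range(1, len(y)):
            est[t] = alpha * y[t] + (1 - alpha) * est[t - 1]
        return est

    def moving_average_smoother(y, half_window=5):
        """Smoothing: estimate what has happened, using data on both sides of each point."""
        return np.array([y[max(0, t - half_window): t + half_window + 1].mean()
                         for t in range(len(y))])

    def one_step_predictor(y):
        """Prediction: estimate what will happen, extrapolating the most recent slope."""
        return 2 * y[-1] - y[-2]

    filtered = exponential_filter(observed)        # What is happening?
    smoothed = moving_average_smoother(observed)   # What has happened?
    next_point = one_step_predictor(observed)      # What will happen?

Each function answers one of the three questions above for the same observed series, which is why the mathematics of the three tasks share so many components.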
When I began to pursue this research in the early 1970s, I came to see this phenomenon as "essential" because humans' ability to predict - for instance, to think about what might happen next year - seems uniquely human. I am not aware of any other species with this proclivity. This led me to wonder how good humans are at prediction and other estimation tasks.
PREDICTOR DISPLAYS FOR AIR TRAFFIC CONTROL

I joined Tom Sheridan's research group at MIT in the Fall of 1969. Working on a NASA grant, I soon focused on issues surrounding air traffic control (ATC). A central issue at the time - and still today - is the implications of keeping aircraft spaced miles apart as they approach and depart airports. This results in huge amounts of empty sky that limits the capacity of airports.
Aircraft are spaced miles apart to minimize the chance of them colliding with enormous human and economic losses. However, we are able to operate other types of vehicles such as cars and trains with much smaller inter-vehicle distances. Why can't we land airplanes like trains, just a few feet or yards behind each other? There are problems of turbulence, wind shear, and wake vortices that keep us from operating planes that close to each other. Yet, why can't we cut a mile or two off the inter-vehicle spacing?

I wondered at that time whether the controller's ability to predict the flight paths of aircraft was possibly a limitation. Such a limitation might cause them to be more conservative in spacing aircraft. To test this idea, I developed a predictor display for air traffic controllers (Rouse, 1970). Rather than just displaying slowly moving blips on the ATC screen, a predictor display provides a projection of each aircraft's flight path. This is accomplished by using a dynamic model of each aircraft and likely pilot inputs to project the future path of the aircraft.

To experimentally evaluate this idea, I built an ATC simulation using an EAI hybrid computer. I had to design and build a device that simulated three aircraft that could all be controlled by a single pilot. I recall the design and construction of this equipment as an important test of what I thought I knew about electrical engineering. Impedance matching, for example, became a reality rather than just a concept! As an aside, my early engineering experiences in designing, building, and manufacturing electrical circuits, machine parts, special purpose computers, and so on were important elements of my education. We do much less of this now, computers being ubiquitous and everything being simulated. This also provides important skills, but tends to ignore the physical reality of many domains.

The ATC experiment compared three predictor lengths - 0, 20, and 40 seconds. There were three initial aircraft configurations - easy, moderate, and difficult. The easy configuration involved aircraft pretty well aligned to start. In contrast, the difficult configuration involved aircraft heading toward each other in peril unless the controller acted quickly to avoid the conflict. The moderate configuration fit, of course, between these two extremes.

The results of this experiment were quite clear. The 20-second prediction was much better than no prediction. However, a 40-second prediction was no better than no prediction. The difficulty with the 40-second prediction is that it projected future aircraft flight paths that were unlikely because the pilot would seldom maintain control positions for that
length of time. This confirmed our expectations that there is some optimal length of prediction, totally dependent, obviously, on the nature of the system being controlled.

The impact of initial aircraft configurations proved to be quite interesting. The predictor displays did not enhance performance for the easy configuration. The display substantially improved performance for the moderate configuration. Finally, the predictor provided no benefit for the difficult configuration. The reason was that impending aircraft conflicts caused the controllers to significantly reroute aircraft and not worry about smoothly sequencing the aircraft for terminal approach until the rerouted aircraft were in a much easier configuration.

This not unexpected result provided an important lesson for a young researcher. The subjects' assigned task in this experiment was to smoothly sequence aircraft. However, in pressured situations, they consistently changed their priorities to mitigating the emerging conflict. As an experimenter, I was concerned with how well they could perform in very difficult conditions. Experimental subjects, in contrast, focused on eliminating the difficult conditions rather than coping with them. This is a great example of human abilities to adapt. Such adaptation is often difficult to achieve with automation. These abilities are discussed in more depth in Chapter 4.
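Before leaving this study, the predictor display concept itself can be made concrete. The sketch below projects an aircraft's future track by assuming the current turn command is simply held; the point-mass dynamics, time step, and default 20-second horizon are illustrative assumptions, not the parameters of the original EAI simulation.

    import math

    def project_track(x, y, heading, speed, turn_rate, horizon_s=20.0, dt=0.5):
        """Project future (x, y) positions assuming the current turn rate is held.

        A predictor display would draw this projected path ahead of the
        aircraft's blip on the controller's screen.
        """
        path = []
        steps = int(horizon_s / dt)
        for _ in range(steps):
            heading += turn_rate * dt                 # constant-rate turn assumption
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            path.append((x, y))
        return path

    # Example: an aircraft at 150 m/s in a gentle 1-degree-per-second turn.
    future_path = project_track(0.0, 0.0, heading=0.0, speed=150.0,
                                turn_rate=math.radians(1.0))

The contrast between the 20-second and 40-second results corresponds to the horizon parameter here: the longer the horizon, the less plausible the assumption that current control inputs will be held.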
SOURCES OF SUBOPTIMAL PREDICTION

The observed benefit of providing controllers with predictor displays raises the question of why such predictions enhance performance. It would seem that, at least for this task, computers can predict more accurately than people. This, in turn, raises the question of what limits human abilities to predict. This question provided the basis for my next research project.

To address this question, I created a new experimental environment, moving from the EAI hybrid computer to a Digital PDP-8. In fact, this minicomputer was the fiftieth PDP-8 manufactured by Digital Equipment, which provided an immersion in computer hardware and assembly language programming that continued my education in the reality of physical devices. This reality is quite different from our experiences today with so many layers between the physical computer and what we interact with on the screen.

The experimental task involved presentation of a dynamic time series such as shown in Figure 1, without the lines connecting the points. As I
later explain, it was important to not have connected the points with lines. This was not apparent to me at the time, however. The display was oriented vertically rather than horizontally as in Figure 1. Experimental subjects were asked to predict the next point in the time series, using a controller to horizontally position their prediction at the bottom of the vertically displayed time series. They were then shown what actually happened, the display shifted up by one point, they were then asked to predict the next point, and so on. Succinctly, subjects had to infer the dynamics of the time series and employ these inferences to project subsequent points in the series. Subjects' performance was compared to three types of models:

• Extrapolation models, which, for example, simply project the slope of the last two points to predict the next point.
• Learning models, which continually update the inferred model with each new point and use the model to predict the next point.
• Limited memory models, which use only the points within a defined window to infer the model and use the model to predict the next point.
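A minimal sketch of these three classes of models, applied to a synthetic autocorrelated series, might look as follows. The AR(2) generating process, the least-squares fitting, and the window of 13 points echo the descriptions in this chapter, but the code is an illustrative reconstruction rather than the original implementation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Generate a time series: next point = weighted sum of past points + random input.
    n = 300
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.9 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(0, 1)

    def extrapolation(past):
        return past[-1] + (past[-1] - past[-2])          # project the last slope

    def fit_ar2(past):
        X = np.column_stack([past[1:-1], past[:-2]])     # regressors: y[t-1], y[t-2]
        return np.linalg.lstsq(X, past[2:], rcond=None)[0]

    def learning(past):                                   # use all points seen so far
        a = fit_ar2(past)
        return a[0] * past[-1] + a[1] * past[-2]

    def limited_memory(past, window=13):                  # use only a recent window
        return learning(past[-window:])

    for name, model in [("extrapolation", extrapolation),
                        ("learning", learning),
                        ("limited memory", limited_memory)]:
        errors = [(model(y[:t]) - y[t]) ** 2 for t in range(20, n)]
        print(f"{name:15s} mean squared error: {np.mean(errors):.2f}")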
Overall, subjects' predictions were much more accurate than the extrapolation models, much less accurate than the learning model, and comparable to the limited memory model for memory lengths of 13 points (Rouse, 1972, 1973).

An important consideration is the fit of the models to the "predictability" of the time series. For the time series employed, the next point equaled a weighted sum of past points plus a random input function. In other words, the next point equaled the "natural response" to past states plus the "forced response" to the random input. Processes that are dominated by natural responses with lesser impact of forced response are termed highly autocorrelated, that is, future states are highly correlated with past states. In contrast, processes that are dominated by forced responses will be much less autocorrelated. Prediction accuracy increases with the degree of autocorrelation. Not surprisingly, the more the future is correlated with the displayed past points, the easier it is to predict future points.

Subjects' performance accuracy declined as the degree of autocorrelation decreased. The decline of performance of the limited
memory model mirrored subjects' decline. This was accomplished without changing the length of the model's memory, which is important, as we would not expect people's memory to be affected by the dynamics of the time series they are viewing. Thus, the limited memory model provides a good account of subjects' performance in this estimation task.

We also considered people's abilities to predict many time units into the future. For example, predicting 5, 10, or 20 years into the future of the share price example in Figure 1. This is difficult because the uncertainty grows as you look further into the future. At some point, your best estimate is just the mean of the overall process.

To illustrate, I am sitting in my hotel room in Tokyo as I type this and CNN says it will reach 50° F today, March 8, 2006. If asked to predict tomorrow's high temperature I might, in the absence of other information, simply extrapolate the difference between March 7th and 8th to estimate the high for March 9th. On the other hand, if asked for a prediction of the high temperature for March 9, 2007, the last two days' weather has little bearing on the conditions for comparable days next year. Thus, to minimize prediction errors, I should predict the average March high temperature or, if the data is available, the historical average high temperature for March 9th in Tokyo.

Therefore, as we look further out, our predictions should "regress" toward the mean because the cumulative uncertainty approaches the overall uncertainty of the process we are observing. However, when we asked people to view data such as shown in Figure 1 and predict many points into the future, they did not tend to regress toward the mean. Instead, they made their predictions look like the plot in Figure 1. This departure from optimality reflects the heuristics and biases explored by Nobel prize-winning Daniel Kahneman and his long-time colleague Amos Tversky (1982).

Were people to predict the "expected" value of a time series such as in Figure 1, this expected value would regress toward the mean. However, this prediction would not be at all representative of a sample of time series such as shown in Figure 1. To illustrate, consider the flip of a fair coin where we designate heads equals 1 and tails equals 0. Since it is a fair coin, the probability of either side is 1/2. Hence, the expected value equals (0.5 x 1) + (0.5 x 0) = 0.5. However, as anyone who has taught probability has experienced, a student will invariably ask, "How can you expect 0.5? That will never happen."
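The regression of optimal predictions toward the mean is easy to see in a small sketch. Assuming, purely for illustration, a stationary first-order autoregressive process, the expected value k steps ahead decays geometrically toward the process mean:

    def k_step_prediction(last_value, mu, a, k):
        """Expected value k steps ahead for a stationary AR(1):
        y[t] = mu + a * (y[t-1] - mu) + noise."""
        return mu + (a ** k) * (last_value - mu)

    mu, a, last = 50.0, 0.8, 62.0   # e.g., a share price currently above its long-run mean
    for k in (1, 5, 10, 20):
        print(k, round(k_step_prediction(last, mu, a, k), 2))
    # 1 -> 59.6, 5 -> 53.93, 10 -> 51.29, 20 -> 50.14:
    # the optimal prediction regresses toward mu as the horizon grows.

People in the experiments, by contrast, produced predictions that wandered like the displayed sample rather than decaying toward the mean in this fashion.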
The expected value minimizes the mean-squared prediction error. In contrast, as Kahneman and Tversky found with subjects in their many experiments, the expected value is not what you expect to happen. People, they found, expect the future to be representative of the past. Thus, in our experiments, people provided predictions that looked like a sample from the time series of interest, not predictions that minimized mean-squared prediction errors. This provides a good example of people's basic premises of the task differing from premises assumed by the experimenter. In this case, people's errors were not due to their being limited probabilistic information processors. They brought quite reasonable assumptions to the task, and performed accordingly. Consequently, they had a small probability of predicting more accurately than the predictions of the expected value could ever achieve.

We also explored the possibility that people's suboptimal predictions might be due to it not being worth the effort of doing better. As Ralph Keeney was on my dissertation committee, I adopted multi-attribute utility theory to model reward vs. effort tradeoffs. It is obvious that such tradeoffs must exist. Despite my mother's exhortations to "always do my best," I clearly compromise on many tasks including mowing the lawn and washing my car.

To explore this phenomenon, I invented various tasks and utility-based reward mechanisms. Using MIT undergraduates as subjects, I tried to prove that utility-based rewards could lead to greater effort and better performance. The difficulty I encountered was that many of these tasks, for instance, maximizing the number of elements of the nines table done in 30 minutes, were found intrinsically rewarding by MIT students and they would invest enormous effort for no reward at all. There were some tasks, however, that even MIT students did not find intrinsically rewarding, for example, memorizing all the X words in the dictionary, and I was eventually able to show that a reward scheme tailored to their reward vs. effort utility function would enhance performance. For example, for the X word task, subjects had almost 100% more words correct if they were rewarded based on their utility functions rather than linearly based on the number of words correct.

In summary, suboptimal estimation performance can be due to limited "mental models" of the process of interest, different criteria for judging the "goodness" of predictions, and/or it not being worthwhile to invest the effort required to estimate better. Obviously, a quite simple task gets much
more complicated as you explore the underlying cognitive mechanisms of performance.
ATMOSPHERIC PHYSICS

I quite unexpectedly encountered my next estimation task. I was commissioned an officer in the U.S. Air Force in 1971 and was to assume active duty when I received my Ph.D. in 1972. I had worked out a research assignment at the National Institute for Mental Health where I hoped to continue my studies of cognitive limitations. However, the Vietnam War was over and the Air Force indicated that I was eligible for the Palace Option that only required that I spend 90 days on active duty at an Air Force installation of my choice. In the same time period, I was offered a visiting faculty position at Tufts University. Prompted by Tom Sheridan to try out faculty life, I went to Tufts and spent the summer of 1973 at Air Force Cambridge Research Laboratories (AFCRL) at Hanscom Field in Bedford, Massachusetts.

I worked for Charlton Walter at AFCRL. With only a 90-day window to accomplish something, Charlton capitalized on my existing competencies and let me loose in their impressive EAI hybrid computer facility. As I learned my way around this facility, I encountered researchers in atmospheric physics who used the computer to analyze the data from the atmospheric rockets they launched.

The data analysis process was rather amazing. They somehow transferred the atmospheric data to 35 mm slides, projected these slides on a screen, measured the recorded phenomena with a ruler, recorded these measurements, typed them onto computer cards, and read the cards into the digital computer as the data set with their computer program. A typical set of data included 10,000 slides and many, many hours of work going through the process just described.

It struck me that they needed a way to by-pass the slides and go directly from atmospheric data to digital data. Building upon my experience with the hybrid computer at MIT, I programmed their million-dollar EAI hybrid to create an interactive graphics display that showed the atmospheric data on a computer screen, with a format somewhat like Figure 1. I also had to provide editing capabilities so that they could, for instance, edit out portions of data where instrumentation had dropped out due to the rotation of the rocket. I wondered about how good people were
at editing these and other disturbances out of data. More formally, the researchers were smoothing the time series data collected from the atmospheric rocket shots. How good were they at this smoothing task?

I developed a smoothing task for the hybrid interactive graphics system that involved editing random noise from sine waves. This scenario was not a bad approximation of what the atmospheric researchers were doing. I collected data for four subjects across a range of signal-to-noise ratios. To understand how well they performed, I compared subjects' performance to an optimal smoothing model using an extension of the mathematical model employed for the prediction task (Rouse, 1976).

Without constraints, the optimal smoother easily outperformed subjects in terms of minimizing mean-squared errors. However, constraining the model to consideration of a limited number of points on either side of the point being smoothed led to much better matches to subjects' performance. Specifically, I found that the four subjects had, in effect at least, time windows of 3, 7, 9, and 13 points. The subject for whom the window equaled 3 performed quite poorly, as did the model with this constraint.

The success of this model led me to think more generally in terms of estimation tasks that involve filtering, smoothing, and prediction. All this stemmed from a serendipitous encounter with atmospheric physics.
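A window-limited smoother of the kind just described can be sketched as follows. The sine-plus-noise scenario follows the experiment above, while the centered moving average is an illustrative stand-in for the optimal smoother actually used.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 4 * np.pi, 400)
    signal = np.sin(t)
    noisy = signal + rng.normal(0, 0.5, t.size)          # random noise on a sine wave

    def windowed_smoother(y, half_window):
        """Smooth each point using only a limited number of points on either side."""
        return np.array([y[max(0, i - half_window): i + half_window + 1].mean()
                         for i in range(len(y))])

    for window in (3, 7, 9, 13):                          # the subjects' inferred windows
        half = window // 2
        mse = np.mean((windowed_smoother(noisy, half) - signal) ** 2)
        print(f"window of {window:2d} points: mean squared error {mse:.3f}")

Consistent with the findings, the 3-point window removes far less noise than the wider windows.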
STOCHASTIC ESTIMATION TASKS

In an attempt to integrate the various experiments discussed thus far in this chapter, I derived an overall model of human decision making in stochastic estimation tasks (Rouse, 1977). In one of the most mathematical papers I have ever published, I integrated filtering, smoothing, and prediction into an overall model that included both short-term and long-term fading memory models and outputs that were weighted sums of the two estimates. I was able to fit this model to results of the previous experiments and show that it does very well at explaining the linear portions of people's estimates. In other words, any other linear estimation model would not be able to do any better at explaining the data sets collected in this series of experiments. For nonlinear phenomena, such as the tendency to offer "representative" predictions, the model was only as good as a linear model can be. As is later discussed, we need to invoke other types of cognitive mechanisms to capture such phenomena.

I mentioned earlier that we wondered about the effect of connecting the points in the time series such as shown in Figure 1. To investigate this
question, Ken Enstrom and I conducted an experiment involving people estimating the statistical properties (mean and standard deviation) of time series displayed in four ways - lists, points, lines, and splines. Splines are based on a higher-order method of fitting curves to data.

We found that the list display was best for estimating means. People could quickly scan the list of numbers and roughly calculate the mean. The point and spline displays were better for estimating the standard deviation. For the line displays, people consistently underestimated the standard deviation. This led us to wonder whether people were interpreting the lines as having as much validity as the points they connected. Based on this assumption, we calculated the standard deviations of the line and spline displays. We discovered that the line displays systematically bias the standard deviation. Estimates based on the piecewise interpolated time series were consistently lower than the real statistical property of the original data points. People were not biased; the display was.

Thus, we see that human performance in stochastic estimation tasks depends on people's mental model of the underlying dynamic process, the way in which they conceptualize the task, their reward vs. effort tradeoffs, and the ways in which we display the data. All in all, there are many subtleties and determinants of human performance in these types of tasks that one might not imagine by just considering the surface level of what people were being asked to do.
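The display bias just described is straightforward to reproduce. In the sketch below - an illustrative reconstruction, not the original analysis - densely sampling the straight lines drawn between data points yields a standard deviation systematically lower than that of the points themselves.

    import numpy as np

    rng = np.random.default_rng(3)
    points = rng.normal(0, 1, 50)                  # the displayed data points

    # Densely sample the line display: linear interpolation between successive points.
    x = np.arange(points.size)
    x_dense = np.linspace(0, points.size - 1, 2000)
    line_display = np.interp(x_dense, x, points)

    print("std of points:      ", round(float(points.std()), 3))
    print("std of line display:", round(float(line_display.std()), 3))
    # The interpolated line has a smaller standard deviation: the display is
    # biased, not the people reading it.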
MENTAL MODELS

I invoked the notion of mental models several times earlier in this chapter. This ambitious construct embraces a broad range of behavioral phenomena. It also has prompted various controversies. On one hand, it would seem that people must have mental models of their cars, for example. Otherwise, how would people be able to so proficiently negotiate traffic, park their cars, and so on? On the other hand, perhaps people have just stored in their memories a large repertoire of patterns of their car's input-output characteristics, a large look-up table, if you will, of steering wheel angles and vehicle responses. From this perspective, there is no "model" per se - nothing computational that derives steering wheel angle from desired position of the car.
This is a difficult argument to resolve if one needs proof of the representation of the model in one's head. Are there differential equations, neural nets, or rules in the driver's head? Alternatively, one might adopt a functional point of view and simply claim that humans act as if they have certain forms of model in their brains that enable particular classes of behaviors.

Nancy Morris and I became deeply engaged in this issue, reviewing a large number of previous studies and publishing the often-cited "On Looking Into the Black Box: Prospects and Limits in the Search for Mental Models" (Rouse & Morris, 1986). We addressed the basic questions of what do we know and what can we know. Our definition, summarized by Figure 2, was, "Mental models are the mechanisms whereby humans are able to generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states." This definition only defines the function of mental models, not what they look like.

We reviewed several methods for identifying mental models. Perhaps the most ubiquitous approach involves inferring characteristics of mental models via empirical study. This approach focuses on identifying the effects of mental models but not their forms. Thus, for example, our study of interpolation methods for displays of time series concluded that people's model of time series gives the same credence to the interpolated portions of the time series as the data points on which the interpolation is based. This conclusion does not invoke any specific form of model.
[Figure 2 depicts these functions: describing system purpose (why the system exists) and form (what the system looks like), explaining system functioning (how the system operates) and state (what the system is doing), and predicting future system states.]

Figure 2. Functions of Mental Models
Another approach to identification is input-output modeling, effectively "fitting" a model to the relationship between displayed inputs and observed outputs. Thus, in the context of the earlier experiments, we might perform a regression of predicted future points vs. displayed past points. This would tell us how people weight more recent points vs. less recent points, for example. We would be, in effect, assuming that people's mental models look like regression equations.

Yet another approach is analytical modeling whereby we derive a mental model from first principles, use this model to predict human behaviors, measure actual behaviors in the same conditions, and then compare predictions with measurements. The stochastic estimation model just reviewed is based on this approach. As with most analytic models, it does have a couple of free parameters, that is, fading memory rates, that can be used to adjust the model to better match human behaviors. However, these parameters are usually much fewer in number than the degrees of freedom of the behaviors being modeled.

Another approach to identification utilizes verbal and/or written reports of the humans whose behaviors are being studied (e.g., Ericsson & Simon, 1984). The value of this approach varies with the phenomenon being studied. People may be able to tell you how they think about stock prices, for example, but are much less able to tell you about how they think about riding a bicycle or driving a car, or even how they think about personal disputes. For this reason, verbal and/or written reports are often used to supplement other approaches to identification.

There is a variety of issues associated with identifying mental models. An overriding issue is accessibility. Tasks that require explicit use of the functions depicted in Figure 2 are more likely to involve mental models that are accessible to the various methods just discussed. In contrast, tasks for which these functions are deeply below the surface of behavior are less likely to involve accessible models.

Another issue is the form of representation. Equations, neural networks, and rules are common choices. It is rarely possible to argue, however, that these representations actually reside in people's brains. Contemporary brain research might argue for neural networks being the substrate for all alternative representations. However, such an assertion hardly constitutes proof. We return to this issue later in this section.

Context of representation is another issue. To what extent are mental models general vs. context specific? Do I have different models for my Honda and Volvo, or just one model with two sets of parameters? I seem, for instance, to have general rules of algebra rather than having a long
repertoire of solutions to specific linear combinations of symbols and numbers. Personally, I seem much better at learning rules for solving problems rather than memorizing specific solutions. Perhaps, however, I am fooling myself in thinking about how I think.

A particularly interesting issue surrounds cue utilization. What do people actually pay attention to when solving problems and making decisions? To illustrate, recent studies have shown that people judge beauty in terms of the symmetry of faces. However, I am not aware of thinking of attractive people in terms of their symmetry - "Oh, she's so symmetric!" Thus, we do not always know what we are taking into account in performing our tasks.

All of the above come together when we consider the issue of instruction. What mental models should we attempt to create and how should they be fostered? Do people need to know theories, fundamentals, and principles, or can they just learn practices? We return to this issue in more depth in Chapter 5. However, its significance cannot be overstated.

To illustrate this point, I was involved in a dispute many years ago involving a curriculum committee of an engineering school. (I should preface this example with Henry Kissinger's well-known observation that the intensity with which faculty debate issues is only exceeded by the insignificance of the issues.) The committee was trying to agree, with little progress, on the required courses for the Ph.D. degree. Given that we could not reach agreement, several faculty members argued that we should require more mathematics because "more math is always good for students." I asked these faculty members for empirical proof of this assertion. They reacted quite negatively to this request. I suggested that they were experts at mathematics but not education, having never studied or researched education per se. The senior-most faculty member stormed out of the room.

The question of what one needs to know is quite complicated and laced with subtleties. Contrary to most faculty members' perceptions - I should note that I have served on the faculties of four universities - students do not necessarily need to know what the faculty members know in order to succeed in their chosen paths in life. We return to this fundamental issue in Chapter 5.

A central element of this exploration of mental models concerns fundamental limits to what we can know. Several factors influence these limits. First, there are the mental models of researchers. Their methodological expertise, as well as the sociocultural norms within their context of research, has enormous impacts on how they address the mental
models construct - or any research for that matter. If their subdiscipline requires, for example, convergence proofs of parameter estimates, their models of humans' mental models will be chosen to allow such proofs. Thus, the lenses of the investigators have an enormous impact (Kuhn, 1962; Ziman, 1968).

Another aspect of fundamental limits relates to the fact that inferential methods work best when humans have limited discretion in their task behaviors. If humans must conform to the demands of the environment to succeed in the task of interest, then our knowledge of the structure of the environment can provide a good basis for predicting behaviors. On the other hand, if people have discretion - rather than predict the next point in the time series, they can escape to Starbucks - then we cannot solely rely on the structure of the task to predict their behaviors. This raises the obvious question of whether studies of mental models are really studies of the impacts of the structure of the environment. Simon has convincingly made this point (Simon, 1969).

We are also limited by the fact that verbalization works best when mental model manipulation is an inherent element of the task of interest. Troubleshooting, computer programming, and mathematics are good examples of tasks where mental model manipulation is central and explicit. In contrast, the vast majority of tasks do not involve explicit manipulation of task representations. Thus, our access to mental models - and the access of people doing these tasks - is limited.

We also argued in this paper for a form of uncertainty principle. Werner Heisenberg (1927) showed that physicists could not measure both position and velocity perfectly accurately at the quantum level. You can measure either position or velocity accurately, but not both, mainly because the observation process affects what you are observing. Translating this principle to the search for mental models, we cannot accurately assess both what a mental model is and what it is becoming because the act of assessment affects the model. For example, if I ask you to predict the future of the time series in Figure 1, your ability to predict is affected because you now have to think about making such predictions.
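Returning to the identification methods above, the input-output approach lends itself to a short sketch: regress observed predictions on the displayed past points to infer the weights of an implied mental model. The five-point display, the synthetic "subject," and the noise level below are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    # Displayed past points (5 per trial) and a synthetic "subject" who weights
    # recent points more heavily, plus response noise.
    trials = 200
    past = rng.normal(0, 1, (trials, 5))                 # columns: y[t-5] ... y[t-1]
    true_weights = np.array([0.05, 0.1, 0.15, 0.25, 0.45])
    predictions = past @ true_weights + rng.normal(0, 0.1, trials)

    # Identify the implied mental model: least-squares weights on displayed points.
    inferred, *_ = np.linalg.lstsq(past, predictions, rcond=None)
    print("inferred weights on past points:", inferred.round(2))
    # We are, in effect, assuming the mental model looks like a regression equation.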
FUNDAMENTAL LIMITS

The investigation just summarized was the first of several studies. These investigations were motivated by one of my experiences as a visiting professor at Delft University of Technology in Henk Stassen's lab in The
Netherlands for 1979-80. There was a lunchtime discussion, upon which I eavesdropped, concerning fundamental limits in sports. For example, given the biomechanics and physiology of humans, what is the fundamental limit on the time to run a mile? People regularly run miles in less than four minutes. Will anyone ever break the three-minute barrier? If you are not sure about three minutes, how about one minute or one second? Unless we change the meaning of "run," no amount of steroids will enable a one-second mile.

This experience led me to think about fundamental limits in general (e.g., Davis, 1965; Davis & Park, 1987; Glymour, 1987). I combed the literature for limits in physics, chemistry, biology, and other disciplines. It quickly became apparent that we are much better at understanding limits in axiomatic worlds (e.g., mathematics) than in natural worlds (e.g., the atmosphere).

These insights led John Hammer, Mike Lewis, and me to look at limits to capturing humans' knowledge and skills (Rouse, Hammer & Lewis, 1989). We considered various representational forms in terms of variables, relationships among variables, and parameters within relationships. These forms ranged from probabilistic models to linear, nonlinear, and distributed signal processing models, to symbol processing models such as switching functions, automata, and Turing machines. We explored cognitive modeling, wondering whether cognition may be like energy in that you can measure its impact but not the phenomenon itself. We considered identification of both signal processing and symbol processing models in terms of uniqueness, simplicity, and acceptability of representations.

These issues were pragmatically illustrated in the context of studies of three types of tasks: estimation, air traffic control, and process control. The ways in which these issues were addressed depended on the purpose of the modeling effort. The research discussed earlier in this chapter primarily focused on understanding human abilities and limitations. These three studies were focused on enhancing human abilities and overcoming human limitations. This can be pursued via improved design, training, and/or automation. The effects of fundamental limits differ for these three avenues, as the next discussion illustrates.

Serendipity intervened as this research was being pursued. As discussed in Chapter 1, Chris Barrett at Los Alamos Laboratory invited me to give a lecture at the lab. I asked him whether I should speak on our fundamental limits initiative or on our large intelligent interfaces project associated with DARPA's Pilot's Associate Program (see Chapter 4). He
responded by asking me to speak on fundamental limits of intelligent interfaces. Prior to this request, I had not thought about the relationships between these efforts. Preparation to give this talk provided the impetus to undertake a study of fundamental limits of intelligent interfaces (Rouse & Hammer, 1991). Given that there are limits to modeling human behavior and performance, there must be limits on intelligent systems that rely on embedded models of human behavior and performance. Beyond understanding such limits, we also wanted to devise means for identifying when limits manifested themselves in specific intelligent systems. There are several types of limits in model formulation:

• Data samples on which models are based are almost always limited.
• Variables chosen for study may be incomplete and/or incorrect.
• Structural assumptions may be inadequate and/or inappropriate.
• Parameter estimates may be non-unique.
Table 1 summarizes the implications of these types of limits. If flawed models are used to aid (or train) people, they will receive incorrect and/or incomplete guidance and may comply with this guidance. If such models are the basis for automation, then the automatic controller’s behaviors will be incorrect and/or incomplete and humans will have misplaced confidence in the automation.
Implications of         Consequences of Implications
Modeling Limits         Aiding                  Automation

Inappropriate Model     Advice Wrong            Control Wrong
Inadequate Model        Advice Incomplete       Control Incomplete
Non-Unique Model        Misplaced Compliance    Misplaced Confidence

Table 1. Implications & Consequences of Modeling Limits
Our goal was to devise means for detecting, diagnosing, and compensating for the implications and consequences in Table 1. Detection can be based on examining input-output relationships. Three classes of events are of interest:

• Unanticipated output values, for example, out-of-range variables
• Missing output variables - a variable needed but not included
• Extra output variables - a variable included but not needed

The modeling limitations and compensatory mechanisms include:

• Model incorrect - modify structure
• Model incomplete - add relationships
• Model inaccurate - modify relationships
• Model incompatible - change representation
Note that detection and diagnosis involve two overarching concerns. One is internal evaluation or verification. This addresses the question of whether the system was built as planned. The second concern is external evaluation or validation. This addresses the question of whether the plan solved the problem for which it was intended. The process John Hammer and I devised addresses both of these concerns.

We demonstrated this process by evaluating an intelligent information management system (discussed in Chapter 4) based on plan-goal graphs. These graphs relate a user's goals to their plans for achieving their goals to the information requirements for executing these plans. This intelligent information system was intended to infer the user's intentions - their goals - and then map to plans and information requirements, and subsequently automatically display the information needed by the user to pursue their goals.

Our detection procedure began with observed display discrepancies in terms of the three types of events listed above. Discrepancies were mapped to the information requirements that generated the display, and then to the plans and goals that drove these requirements, and finally to potential deficiencies in inferring intentions. This reverse chaining was the
means for diagnosing the discrepancies in terms of the four types of modeling limits listed above.

We applied this methodology to evaluation of the Information Manager within the Pilot-Vehicle Interface of the Pilot's Associate discussed in Chapter 4. The plan-goal graph had been developed via "knowledge engineering" using a group of expert fighter pilots. The evaluation employed another group of pilots to assess any display discrepancies during a 27-minute flight scenario. The detailed results are presented in Rouse and Hammer (1991). Of particular relevance for this discussion, 22 modeling problems were identified in this 27-minute flight. Only one of these problems was a "bug" in the sense of a programming error. The other 21 reflected both verification and validation problems. Compensation for these problems involved remediating incorrect, incomplete, inaccurate, and/or incompatible models.

Fundamental limits keep us from fully understanding human behavior and performance. Nevertheless, we gain what understanding we can and then we base designs, aiding, training, and automation on this understanding. As indicated in Table 1, there are important implications and consequences of this process. It is important that we develop and employ means for detecting, diagnosing, and compensating for these consequences.
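A minimal sketch of the plan-goal graph idea, with entirely hypothetical goals, plans, and information requirements, shows both the forward mapping that drives displays and the reverse chaining used to diagnose a display discrepancy.

    # Hypothetical plan-goal graph: goals -> plans -> information requirements.
    GOALS = {"evade threat": ["break lock", "deploy countermeasures"]}
    PLANS = {"break lock": ["threat bearing", "own airspeed"],
             "deploy countermeasures": ["threat type", "countermeasure inventory"]}

    def information_for(goal):
        """Forward mapping: once a goal is inferred, collect the information to display."""
        return {info for plan in GOALS[goal] for info in PLANS[plan]}

    def reverse_chain(discrepant_info):
        """Reverse chaining: trace a display discrepancy back to plans and goals."""
        plans = [p for p, infos in PLANS.items() if discrepant_info in infos]
        goals = [g for g, ps in GOALS.items() if any(p in ps for p in plans)]
        return plans, goals

    print(information_for("evade threat"))
    print(reverse_chain("threat type"))   # which plan and goal drove this display?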
GROUPS AND TEAMS

Thus far, this chapter has addressed individual behavior and performance. We now will consider groups and teams. Later chapters will address organizational behavior and performance. At this point, however, we will limit consideration to groups or teams, typically performing within a workspace where they can see or at least hear each other. We will consider both operational and management teams, as well as performing arts teams.

Group Decision Making
My first exposure to studying groups was in Tom Sheridan’s Community Dialog Project at MIT. This NSF-sponsored research was focused on using technology to facilitate public dialog on issues of importance to the
groups involved. For these studies, Tom had devised a system involving handheld response terminals connected to a minicomputer, in those days via long wires. I would trundle the whole apparatus from meeting to meeting stuffed into my 1969 VW bug with little if any room to spare.

My piece of this research focused on using multi-attribute utility theory to support groups in making decisions. The idea was to help them maximize their expected utility. This poses significant methodological problems when your apparatus does not allow associating measured responses with particular individuals. However, we devised ways to minimize the impact of these limitations (Rouse & Sheridan, 1975).

The groups studied ranged from scientists and engineers in a research laboratory, to librarians in a university library, to freshmen in an engineering design class, to a community group of ministers, priests, and rabbis. Each group was organized for a specific purpose independent of our studies. Our role was to help them address issues such as:

• Priorities among alternative investments in computing infrastructure
• Preferred means of communication for different types of information
• Ratings of students' engineering design projects
• Recommendations for "fair" federal income tax policies

The electronic polling system was used to assess the group's preferences during the discussions of these types of issues. In keeping with our multi-attribute utility theory framework, these discussions focused on key attributes and tradeoffs. Various polls were used in the process of trying to maximize the expected utility of the group and/or rank alternatives by expected group utility.

There were a variety of interesting results for each of these studies (Rouse & Sheridan, 1975). Two merit discussion in this chapter. First, all of these groups found multi-attribute utility theory to be rather abstract, despite the high education levels of participants. We eventually stopped explicit mention of utility theory and simply explained the mechanics of the methodology. This helped to foster acceptance. Second, and probably more important, many people had difficulty articulating their values in terms of attributes and preferences among tradeoffs. They simply wanted to argue for their desired alternative. A few people did not want to engage in the process of articulating values
unless they could be assured that their preferred alternative would have the maximum expected utility. These types of people tended to prefer an advocacy approach to decision making rather than discussing values and their implications.

As we were completing these studies, I began as a young assistant professor at the University of Illinois in Urbana-Champaign. I concluded that the study of group decision making was risky if I was to produce rigorous, definitive publications that would earn me promotion and tenure. Thus, despite my fascination with this topic, I remained focused on individual decision making until several years later.
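For readers unfamiliar with the machinery, here is a minimal sketch of ranking alternatives by expected group utility under a simple additive multi-attribute form. The attributes, weights, and scores are hypothetical, and averaging individual utilities is just one convention for forming a group utility.

    # Additive multi-attribute utility: u(alt) = sum(weight[a] * score[alt][a]).
    weights = [
        {"cost": 0.5, "benefit": 0.3, "fairness": 0.2},   # person 1's tradeoffs
        {"cost": 0.2, "benefit": 0.4, "fairness": 0.4},   # person 2's tradeoffs
    ]
    scores = {"alternative A": {"cost": 0.9, "benefit": 0.4, "fairness": 0.6},
              "alternative B": {"cost": 0.5, "benefit": 0.8, "fairness": 0.7}}

    def group_utility(alt):
        """Group utility as the mean of individual additive utilities."""
        utils = [sum(w[a] * scores[alt][a] for a in w) for w in weights]
        return sum(utils) / len(utils)

    for alt in sorted(scores, key=group_utility, reverse=True):
        print(alt, round(group_utility(alt), 3))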
Aegis Team Training

My return to studying groups - actually teams - was prompted by the Aegis cruiser USS Vincennes shooting down an Iranian passenger airliner in the Persian Gulf on July 3, 1988 (Rogers, et al., 1992). The Aegis weapon system was first commissioned in 1983 with the USS Ticonderoga. This system was developed to counter the serious air and missile threat that adversaries posed to U.S. carrier battle groups and other task forces.

The Vincennes incident prompted a congressional inquiry. Subsequently, the Office of Naval Research established a research program to study the potential behavioral and social factors underlying this incident. The TADMUS Program was named for its focus -- tactical decision making under stress. I was the principal investigator for Search Technology's efforts in this program.

We began by observing teams at the Aegis test facility in Moorestown, New Jersey. This training facility is unusual in that it looks like a ship superstructure rising out of a field west of the New Jersey Turnpike. It is sometimes referred to as the "Cornfield Cruiser." Training exercises involved 25 people who staff the Combat Information Center (CIC). At the time of our observations, there were typically two exercises per day, each of which took considerable preparation, pre-briefing, execution, and debriefing. I was struck by the fact that every exercise seemed to be aptly termed "25 guys face Armageddon." This is what Aegis is designed to do. However, as the July 3, 1988, incident shows, not all situations are Armageddon.

We focused our study on the anti-air activities of the CIC as that is the portion of the team that dealt with the Iranian airliner. Our initial
observation of this team suggested to us - Eduardo Salas, Jan Cannon-Bowers, and me - that team members did not have shared mental models. In particular, we hypothesized that inadequate shared models of teamwork -- in contrast to mental models of equipment functioning or task work -- hindered the team's performance (Rouse, Salas & Cannon-Bowers, 1992). Our definition of mental models for teamwork followed the functional definition in Figure 2. However, the knowledge content, as shown in Table 2, differs from our earlier discussions. The overall research questions concerned what elements of Table 2 were needed by the anti-air team and how best to impart this knowledge.

Data collection started by observing team behaviors and performance to understand the extent to which they fit within the framework provided by Table 2. Two teams were observed for a total of ten exercises. Eight domain experts evaluated the teams' behavior and performance during exercise debriefings. Self-evaluations by each team were also considered. Written notes of these evaluations were taken. A simplified form of our error analysis methodology (see Chapter 4) was employed to evaluate instances of performance deficiencies in these data.
Table 2 organizes types of knowledge by level, from detailed/specific/concrete to global/general/abstract, and by What, How, and Why:

Detailed/Specific/Concrete
  What: Roles of Team Members (Who Member Is)
  How: Functioning of Team Members (How Member Performs)
  Why: Requirements Fulfilled (Why Member Is Needed)

Intermediate
  What: Relationships Among Team Members (Who Relates to Who)
  How: Co-Functioning of Team Members (How Members Perform Together)
  Why: Objectives Supported (Why Team Is Needed)

Global/General/Abstract
  What: Temporal Patterns of Team Performance (What Typically Happens)
  How: Overall Mechanisms of Team Performance (How Performance Is Accomplished)
  Why: Principles/Theories (Why: Psychology, Management, Etc.)

Table 2. Knowledge Content of Mental Models for Teamwork
For the 75 performance deficiencies identified, 27% could be traced to expectation-related inadequacies such as a lack of planning or failure to follow a plan. Fifty-five percent could be linked to communications-related problems including failures to communicate, uninterpretable communications (e.g., due to incorrect terminology), lack of followup communications, and late or missed or excessive communications. Nineteen percent of the deficiencies related to use of equipment in configurations or modes that were not appropriate.

These results portrayed teams that were not well coordinated and did not communicate well in terms of their behaviors in these demanding exercises. It appeared that team members often did not know what was expected of them by other team members and did not know what they could expect of others. Without expectations, they did not communicate or communicated erroneously or ambiguously. Occasionally, they could not explain or interpret communications received.

It was clear that these teams needed much-improved shared mental models of teamwork. In fact, we concluded that the teams' mental models could be improved in several areas - ship, defense system, equipment, situations, teamwork and task work. Nevertheless, we focused on the teamwork needs. It was clear that the improvements sought could not be accomplished in the full-scope Aegis simulator - it was too complex and staffing exercises required too many resources.

Phil Duncan took the lead in developing Team Model Training that involved a desktop computer-based training system that approximated the full-scope Aegis simulator. Users of this desktop simulator could play any of the anti-air roles and the remaining roles were simulated. In this way, they could gain an appreciation for their team members' roles. They also could perform many exercises per day, perhaps 30-40, rather than the two exercises per day typical for the full-scope simulator.

Team Model Training included embedded expert models as well as representations of appropriate communications among experts. This enabled providing feedback to trainees to decrease the difference between their behaviors and those of experts. The basic idea was to establish and reinforce their mental models of teamwork before they had to deal with the complexity of the full-scope simulator.

An evaluation of Team Model Training compared this training with more conventional lecture-based training. Performance was assessed in the full-scope simulator subsequent to having received one of these training methods. Instruments designed to assess team coordination and
communication indicated that Team Model Training significantly improved performance relative to the conventional training. Team Model Training also significantly decreased the overall level of communications, indicating that well-trained teams are not necessarily those with the highest levels of explicit communications (Duncan, et al., 1996). These findings on training mental models for teamwork fit into the broader picture of team training pioneered by Eduardo and Jan. See Paris, Salas, and Cannon-Bowers (2000) for a review of this research.
Performing Arts Teams

The research just discussed concerned training operational teams that typically are responsible for operations of ships, aircraft, process plants, factories, etc. Another type of team is an organizational team, focused on managing projects, business functions, or whole enterprises. There is also a rich literature on this type of team, e.g., Katzenbach and Smith (1993). Rebecca Rouse, my daughter, and I undertook a study of a type of team that is a hybrid of operational and organizational teams - performing arts teams.

Building upon the notion of mental models from Figure 2 and the knowledge content of team mental models in Table 2, we interviewed 12 performing arts leaders of symphony, chamber orchestra, chorus, jazz, musical theatre, straight theatre, improv theatre, ballet, and puppetry. These interviews helped us to better understand the "ecology" of performing arts (Rouse & Rouse, 2004). The interviews focused on the background of the arts leader and their organization, the nature of teamwork in their art form, the nature of any training in teamwork, either formal or informal, stories of particularly successful or unsuccessful instances of teamwork, and the role of the leader in their art form. Thus, our focus was on arts leaders' perceptions of the factors influencing teamwork and issues associated with teamwork.

Analysis of the interview data indicated that five dimensions were useful for differentiating the twelve performing arts organizations studied:

• Size of Performance -- Number of performers and other participants
• Complexity of Performance -- Extent of required coordination
• Locus of Coordination -- Rehearsal vs. performance
Familiarity of Team Members -- Ensemble vs. pickup

Role(s) of Leader -- Prepares team; does vs. does not perform

Symphony orchestras and large choruses epitomize a large number of performers requiring extensive coordination, with considerable rehearsal, involving mostly ensemble performers and a leader who performs with the team. Opera and large musicals are similar, with the primary exception that the leader seldom performs with the team. Jazz and improv theatre are perhaps the other extreme, with a small number of performers and minimal coordination that is often accomplished during the performance. In this case, ensemble teams are the norm, and leaders almost always participate in jazz and often participate in improv.

The nature of the performance interacts with these dimensions. The arts studied included music, words, and movement as the media of expression. Coordination to assure blending of performers' expressions is important in symphony orchestras and large choruses. This requires extensive rehearsal. Jazz and improv theatre, in contrast, do not pursue blending in this sense. Spontaneity is central to these art forms. Preparation for these forms of performance does not tend to emphasize repetition. Ballet would seem to fall somewhere between these extremes.

A question of particular interest is how the above dimensions affect teamwork and how teams are supported. Figure 3 provides a summary of potential relationships among these key variables as gleaned from the interview data. The primary outcome of interest is the extent of team training. Everyone interviewed extolled the benefits of teamwork; the prevalence of team training reflects explicit intentions to foster teamwork.

The solid arrows in Figure 3 designate crisp relationships, with upward-pointing deltas indicating positive relationships and downward-pointing deltas indicating negative relationships. The dotted arrow designates a less-crisp relationship. Not surprisingly, increasing team member familiarity decreases the prevalence of team training; further, increasing team training increases familiarity. Thus, ensemble teams may limit training to situations with new team members or possibly unusual productions. Selection may also be used to choose people who "fit in" and, at least stylistically, are therefore more familiar.
The presence of strong leadership, especially leaders who perform, decreases the prevalence of team training. Such leadership also strongly affects selection, with the aforementioned impact on familiarity and, hence, team training. Rehearsal also increases familiarity. Needs for coordination strongly affect needs for rehearsal. Needs for coordination tend to increase with the complexity of the production. Size affects needs for leadership, with the just-noted impacts on the prevalence of team training and use of selection.

Size and complexity, as indicated by the interview results, are not synonymous. Nevertheless, very large productions do tend to be more complex than very small ones. Except for these extremes, however, we expect that the correlation may be weak.

Note that the dynamics portrayed in this figure imply decreasing frequency of formal team training, due either to increasing familiarity or to leadership decisions. High turnover among performing team members would tend to lower familiarity and, hence, increase use of team training until new performers are assimilated. Thus, team training may "come and go" with changing composition of performing teams.
Figure 3. Relationships Among Key Variables
Figure 3 provides a qualitative model, or theory, of the relationships among the dimensions of the ecology identified, in terms of how these dimensions affect the prevalence of team training. This model does not predict how training will be pursued, or the impact of training on performance. Nevertheless, it suggests the situations where training will be employed to assure successful teamwork. The arts leaders interviewed perceived such teamwork to be central to successful performance.

Thus, it is clear that arts leaders recognize the importance of teamwork beyond taskwork. They see collaboration as central to excellence in the performing arts. The mechanisms that they adopt for assuring teamwork vary considerably with the nature of the art form, as well as for the reasons depicted in Figure 3.

The notion of mental models, while not explicitly suggested by any of the arts leaders interviewed, relates to two phenomena that were mentioned repeatedly. First, of course, performers need "models" of the performance at hand -- what relates to what, when it relates and, in some cases, why it relates. Second, performers need mechanisms to form appropriate expectations of fellow team members. In some cases, the score or script provides expectations, but in others team members need deeper understanding of each other as individuals.

The means used to foster mental models vary considerably. The models associated with individual performance, that is, taskwork, are assumed to be developed when team members are selected. Indeed, this tends to be an implicit or explicit selection criterion. In situations where the score or script does not fully define performance, additional means -- often informal -- are typically employed to enable people to understand each other's inclinations, motivations, and so on. This is common in theatre and jazz, for example.

We found that research results for operational and organizational teams were useful for understanding teams in the performing arts. Perhaps the largest difference between domains is the fact that excellence of performance in the arts is often dominated by the quality of collaboration among team members. The performance product is inherently a "group product." Individual accolades seldom occur without the whole functioning well. This is manifest to the audiences of arts performances. Quality is immediately rewarded; lack of quality meets faint praise. Operational and organizational teams seldom have such immediate scrutiny and feedback.

The results of this study led several Ph.D. students to propose a summer study of teamwork across domains. They studied the differences
between Dad's Garage (improv theatre), Waffle House (food preparation and service), and NASCAR (pit crews). There are, of course, large differences between these domains in terms of the nature of the work, extent of risks, and time requirements. An unexpected commonality, however, was the fact that all these operational teams were actually performers, viewed by audiences of one type or another and receiving admiration or otherwise for their performances.

To conclude, it should be noted that these initial forays into the arts prompted another comparative study focused on the nature of invention and innovation in technology and the arts. The results of this effort are discussed in Chapter 9. In Chapter 1 it was noted that crossing disciplines, domains, and cultures can enhance serendipity. Relative to technology, the arts provide elements of all three of these crossings.
CONCLUSIONS

This chapter has focused on enhancing human abilities, overcoming human limitations, and fostering human acceptance in a wide range of tasks and domains. People's understanding of the nature of their task, the roles of team members, the dynamics of the environment, and performance tradeoffs clearly affects their behaviors and performance. Design can help or hinder behavior and performance. Aiding and training can also help or hinder.

Design, aiding, and training are all based on our understanding of humans and their abilities and limitations. To the extent that our understanding is embedded in the design, aiding, and training, we risk engendering misplaced compliance and confidence in the people and teams who operate and manage complex systems. We need to understand how to detect such problems and remediate them.
REFERENCES

Davis, M. (Ed.). (1965). The undecidable: Basic papers on undecidable propositions, unsolvable problems, and computable functions. Hewlett, NY: Raven Press.

Davis, P.J., & Park, D. (Eds.). (1987). No way: The nature of the impossible. New York: Freeman.
Duncan, P.C., Rouse, W.B., Johnston, J.H., Cannon-Bowers, J.A., Salas, E., & Burns, J.J. (1996). Training teams working in complex systems: A mental model based approach. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 8, pp. 173-231). Greenwich, CT: JAI Press.

Ericsson, K.A., & Simon, H.A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Glymour, C., Scheines, R., Spirtes, P., & Kelly, K. (1987). Discovering causal structure: Artificial intelligence, philosophy of science, and statistical modeling. Orlando, FL: Academic Press.

Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43, 172-198.

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press.

Katzenbach, J.R., & Smith, D.K. (1993). The wisdom of teams: Creating high-performance organizations. Boston, MA: Harvard Business School Press.

Kuhn, T.S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Paris, C.R., Salas, E., & Cannon-Bowers, J.A. (2000). Teamwork in multi-person systems: A review and analysis. Ergonomics, 43(8), 1052-1075.

Rogers, S., Rogers, W., & Gregston, G. (1992). Storm center: The USS Vincennes and Iran Air Flight 655: A personal account of tragedy and terrorism. Annapolis, MD: Naval Institute Press.

Rouse, W.B. (1970). An application of predictor displays to air traffic control problems. SM Thesis, MIT.

Rouse, W.B. (1972). Cognitive sources of suboptimal human prediction. PhD Dissertation, MIT.

Rouse, W.B. (1973). A model of the human in a cognitive prediction task. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(5), 473-477.
Rouse, W.B. (1976). A model of the human as a suboptimal smoother. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(5), 337-343.

Rouse, W.B. (1977). A theory of human decision making in stochastic estimation tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(3), 274-283.

Rouse, W.B., Cannon-Bowers, J.A., & Salas, E. (1992). The role of mental models in team performance in complex systems. IEEE Transactions on Systems, Man, and Cybernetics, 22(6), 1296-1307.

Rouse, W.B., & Enstrom, K.D. (1976). Human perception of the statistical properties of discrete time series: Effects of interpolation methods. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(7), 466-473.

Rouse, W.B., Hammer, J.M., & Lewis, C.M. (1989). On capturing humans' skills and knowledge: Algorithmic approaches to model identification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-19(3), 558-573.

Rouse, W.B., & Hammer, J.M. (1991). Assessing the impact of modeling limits on intelligent systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-21(6), 1549-1559.

Rouse, W.B., & Morris, N.M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349-363.

Rouse, W.B., & Rouse, R.K. (2004). Teamwork in the performing arts. Proceedings of the IEEE, 92(4), 606-615.

Rouse, W.B., & Sheridan, T.B. (1975). Computer-aided group decision making: Theory and practice. Technological Forecasting and Social Change, 7(2), 113-126.

Simon, H.A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.

Ziman, J. (1968). Public knowledge: The social dimension of science. Cambridge, UK: Cambridge University Press.
Chapter 3
PROCESSES, NETWORKS, AND DEMANDS
INTRODUCTION

I was not particularly interested in libraries until graduate school at MIT. However, during my graduate studies, I was immersed in the value of building your research on the findings and methods of earlier research. I came to embrace Isaac Newton's dictum that one can see farther on the shoulders of giants. I absorbed myself in the writings of George Miller, Claude Shannon, Herbert Simon, Norbert Wiener, and many others. To this day, my articles often include 50 or more references to works upon which I have built.

At that time, the early 1970s, MIT was a hotbed of research related to libraries and information. Building on the tradition of Vannevar Bush (1945) and his oft-cited paper, "As We May Think," there were three streams of activity that caught my attention. J.C.R. Licklider's Libraries of the Future (1965) provided a wonderful vision of future information systems, many features of which have been realized in the past 5-10 years. I was also intrigued with Philip Morse's Library Effectiveness (1968), as he advocated the "systems approach" that I had been immersed in as a young engineer at Raytheon (see Chapter 1) and was working to gain a firmer intellectual foundation at MIT. In this work, I could see ready avenues for applying the operations research models and methods I was learning, all in a context that I personally experienced almost daily.

The third stream was Project Intrex, sponsored by the Council on Library Resources and hosted by MIT. This initiative was focused on solving the long-term operational problems of libraries via information technology. The project ultimately failed due to inadequate funding and slow technological advance (Burke, 1996). Nevertheless, at the time, it seemed that research in libraries and information was pervasive.

At the same time, my wife Sandra was a staff member at MIT Libraries, earning a graduate degree in Library and Information Science at
Simmons College. This provided a day-to-day view of library operations, as well as insights into the academic preparation of professional librarians. All of the above combined to create a rich environment for the research discussed in this chapter, as well as Chapter 7, which addresses online information systems.

So, just what are libraries? On the surface -- at least in the 1970s -- there are lots of books, journals, and other media, managed by numerous people, and accessed and used by large numbers of other people, either in the physical facility or remotely after "checking out" the items of interest. As I write this on my laptop, checking my references on Google, with my 11-year-old son Will across the table engrossed in a massive, multi-player online game, this description seems prosaic at best.

However, we can instead view libraries as specialized service enterprises that embody processes to provide value-added services to customers. This view of enterprises is far from dated, with "services sciences" currently being a hot topic (Rouse, 2006). Thus, while libraries provide the context for this chapter, the models and methods elaborated have significant contemporary relevance. Later in this chapter, this view is expanded to consider networks of processes across enterprises -- essentially, information supply chains. Finally, we will consider means for forecasting demands on processes, which is central to optimizing processes and relationships among processes.
PROCESSES

Understanding of library processes is central to modeling their functioning, predicting their outputs, and optimizing their operation (Morse, 1968; Rouse, 1979; Rouse & Rouse, 1980, Chapter 3). In contemporary parlance, there are both customer-facing processes and back office processes. Examples of both are considered here.

Characterizing the performance of processes -- how well they do -- is part of the needed understanding. Throughout this chapter, three measures of performance are discussed:

P -- Probability that the service of interest is successfully delivered

W -- Average time required, including waiting, to receive the desired service
C -- Average cost of providing the service of interest

Ideally, we would like to maximize P, while minimizing W and C. Unfortunately, these metrics are inherently in conflict. If we need to keep C low enough to stay within budget, or to yield a profit relative to prices people are willing to pay, then we need to allocate the available resources to maximize P and minimize W.
Queueing Processes

A good portion of what goes on in libraries involves people or things waiting in lines. Figure 1 provides a characterization of this essential phenomenon. People wait for a range of services -- or they give up and don't wait. Books wait for reshelving -- much more patiently than people wait for service. Books also wait for cataloging. Journals wait for binding.

A basic model of these types of processes is shown in Figure 2. Customers -- people or things -- may balk at seeing the length of the queue, may wait awhile and then renege if the wait gets too long, or may wait until served. Once service commences, customers may renege if service takes too long, may experience unsatisfactory service, or may depart having been satisfied.
Figure 1. Example Queueing Processes in Libraries
I find that this model is applicable to a wide range of service systems (Rouse, 2006). As is illustrated later in this chapter, the basic model in Figure 2 can be used as a building block for large networks of service processes. We can then consider P, W, and C for an overall network rather than a single process.

Causes of Waiting

The phenomenon of waiting in lines has long fascinated me, in part because I find it so frustrating. This frustration is akin to that felt during time spent in traffic commuting to work or other destinations. Time spent waiting for service or in traffic is wasted capacity, windows of time where productive things could have been done.

Waiting is due to a mismatch between demand and service capacity. This mismatch reflects two underlying tradeoffs. First, it is seldom economical to provide all customers service without waiting. It would require too much service capacity, resulting in enormous operational costs. Second, even if the service capacity potentially matches service demands, the random nature of demands tends to occasionally overburden capacity and yield long waiting lines.
Figure 2. Basic Queueing Model of Service Processes
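To make P and W concrete, here is a minimal discrete-event sketch of the Figure 2 process -- a single server, Poisson arrivals, exponential service, and customers who renege if their wait exceeds an exponentially distributed patience. All rates are hypothetical, and only the renege-while-waiting branch of Figure 2 is modeled; Python is used for this and the other sketches in this chapter.

```python
import random

def simulate_queue(lam=8.0, mu=10.0, patience=0.5, n_customers=100_000, seed=1):
    """Estimate P (fraction of customers served) and W (mean time in system,
    waiting plus service, for those served) for a single-server queue with
    Poisson arrivals (rate lam/hour), exponential service (rate mu/hour),
    and exponentially distributed patience (mean `patience` hours).
    All parameter values are hypothetical."""
    random.seed(seed)
    t = 0.0                  # arrival clock
    server_free_at = 0.0     # when the server next becomes idle
    served, total_time = 0, 0.0
    for _ in range(n_customers):
        t += random.expovariate(lam)           # next arrival time
        start = max(t, server_free_at)         # earliest possible service start
        if start - t > random.expovariate(1.0 / patience):
            continue                           # customer reneges while waiting
        server_free_at = start + random.expovariate(mu)
        served += 1
        total_time += server_free_at - t       # waiting plus service time
    return served / n_customers, total_time / served

P, W = simulate_queue()
print(f"P = {P:.3f}, W = {W * 60:.1f} minutes")
```

Raising the arrival rate toward the service rate in this sketch drives P down and W up sharply, which is the essential tension among P, W, and C discussed above.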
Service can be improved, in the sense of less time waiting, if the arrival of demands is less random -- for example, arrivals might be scheduled. Decreasing the variability of service delivery can also improve service. My favorite illustration of improving service in this manner is the FASTPASS system at Walt Disney World. I can remember that in the early days waiting in lines for attractions could be excruciating, sometimes requiring 1-2 hours of waiting for 90 seconds of fun. Disney World greatly reduced such grief by allowing you to schedule your time for attractions. With your FASTPASS ticket in hand, you can pursue other activities until your scheduled time arrives. This innovation, due in part to Georgia Tech educated industrial engineers, has transformed Disney World for me. The result is much less anguish and much more fun.

Waiting can also be due to time delays. Waiting at traffic lights could be reduced if all cars accelerated together, much in the way trains do. However, in reality there is roughly a 0.2-second delay (or longer if cell phones are involved) between each pair of cars as the driver in the rear vehicle reacts to the motion of the vehicle in front. In heavy traffic, this leads to ripple effects and, of course, more waiting.

If you cannot decrease waiting, an alternative strategy is to find something to occupy the time. Talking, reading, and listening to music have long been popular. Cell phone usage, as noted above, tends to occupy the time of some drivers, while also tending to increase delays for nearby drivers. My favorite for consuming waiting time has recently become Su Doku puzzles. These puzzles are not an appropriate pastime when driving, but are a good way to fill time just sitting and waiting. Airplane flights go by amazingly quickly when absorbed in such puzzles. They also provide an interesting measure of waiting time. For instance, my wait in the doctor's office this morning was a four Su Doku visit.
Staffing Information Desks

Our first study of queueing processes in libraries was a straightforward application of queueing models and Philip Morse's approach in the MIT Humanities Library. We studied staffing levels at the information desks for different time periods. The relative roles of librarians and analysts in conducting such studies were of particular interest in this effort (Rouse & Rouse, 1973).
Data were collected on times between arrivals of library patrons and times to service these patrons. Due to inherent differences in the time required for service between in-person and phone inquiries, the data collected distinguished between these two types of inquiries.

The results were quite straightforward. The overall utilization of this two-class (in-person and phone), two-server (librarian) queueing system was 0.25. Decreasing the staffing to one librarian would increase utilization to 0.51, which would still yield acceptable patron waiting times. This policy change would result in the total staff of customer-facing librarians spending 30% less time staffing the information desk.

The results of this study prompted a fairly general question. If resources are saved in one process, in this case the information desk, what is the value of deploying these resources elsewhere? Decreased capacity at the information desk will definitely decrease service for that process. Is the gain from deploying this capacity elsewhere worth the loss at the information desk? I return to this question shortly.
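The arithmetic behind such a staffing decision can be sketched with the Erlang C formula for an M/M/c queue. The arrival and service rates below are hypothetical stand-ins, chosen to give a two-server utilization of 0.25; the study's actual two-class analysis, which yields the 0.51 figure, is slightly more involved.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Expected queue wait W_q for an M/M/c queue via the Erlang C formula.
    lam = arrival rate, mu = service rate per server, c = number of servers."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilization
    assert rho < 1, "queue is unstable"
    term = a**c / (factorial(c) * (1 - rho))
    prob_wait = term / (sum(a**k / factorial(k) for k in range(c)) + term)
    return prob_wait / (c * mu - lam), rho

# Hypothetical rates: 12 patrons/hour arriving, 24 served/hour per librarian.
for c in (2, 1):
    wq, rho = erlang_c_wait(lam=12, mu=24, c=c)
    print(f"{c} librarian(s): utilization {rho:.2f}, mean wait {wq * 60:.1f} min")
```

With these illustrative numbers, dropping from two librarians to one roughly doubles utilization while keeping the mean wait under a few minutes -- the shape of the tradeoff the study exploited.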
Selecting Acquisition Sources

Libraries have both customer-facing processes and back office processes. One of the back office processes is the selection and acquisition of new library materials, that is, books, journals, films, etc. Put simply, this process involves deciding what to put on the shelves and where to get it. We studied the selection of acquisition sources at Tufts University Libraries (Rouse, 1974a). The focus was primarily on the acquisition of books.

When a library decides to acquire a book, there are multiple sources. One source is the publisher of the book. There are also wholesalers. The choice among these alternatives is generally determined by discount (D) and delivery time (T). Publishers will usually provide a book quickly but with minimal discount. Wholesalers provide substantial discounts, but can be erratic in delivery time. For both types of sources, both D and T can be modeled as random variables drawn from probability distributions that vary by source.

Selecting among acquisition sources can thus be viewed as a process of trying to maximize D while minimizing T. A central concern is that attempting to maximize D involves substantial risks in T. For example, you might get a great discount, e.g., 40%, but the book takes 6 months to show up. Due to the significant uncertainty involved in these decisions,
the Head of Acquisitions was personally making each decision. Consequently, he was a bottleneck in the process. He needed to delegate these decisions to his staff.

This project focused on using multi-attribute utility theory to generate source selection guidelines that maximized this decision maker's expected utility. The approach developed employed Keeney's quasi-separable model (Keeney & Raiffa, 1993):

U(D,T) = K1 U(D) + K2 U(T) + (1 - K1 - K2) U(D) U(T)

U(D) and U(T) are marginal utility functions for discount and time, respectively. The constants are assessed to reflect the decision maker's preferences for returns and risks.

The data for D and T from the alternative sources quickly led us to conclude that sources' performance varied substantially for trade and non-trade books, that is, popular vs. academic books. We also found that the decision maker's return vs. risk tradeoffs varied substantially for rush vs. non-rush orders. Thus, we developed different multi-attribute utility models for each of the four quadrants of this two-by-two characterization of the nature of books and orders. The overall result was the set of source selection guidelines shown below in Table 1. These guidelines enabled delegation of the source selection task.
              RUSH             NON-RUSH
TRADE         D > A > B        A > B > D
NON-TRADE     D > C > A > B    C > B > A > D

Table 1. Guidelines for Selecting Acquisition Sources
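A minimal sketch of the quasi-separable utility calculation is given below. The marginal utility shapes, scaling constants, and source statistics are all illustrative assumptions -- the study's actual functions were assessed from the decision maker -- but the structure is the U(D,T) form above, with sources compared by expected utility over sampled (D, T) outcomes.

```python
import random

K1, K2 = 0.6, 0.3   # illustrative scaling constants (not the study's values)

def u_discount(d):            # marginal utility of discount, d in [0, 0.4]
    return min(max(d, 0.0) / 0.4, 1.0)

def u_time(t):                # marginal utility of delivery time, t in weeks
    return max(1.0 - t / 26.0, 0.0)

def u(d, t):
    """Quasi-separable two-attribute utility from the text:
    U(D,T) = K1*U(D) + K2*U(T) + (1 - K1 - K2)*U(D)*U(T)."""
    return K1 * u_discount(d) + K2 * u_time(t) + (1 - K1 - K2) * u_discount(d) * u_time(t)

# Hypothetical sources: (mean discount, mean delivery weeks).
sources = {"publisher": (0.05, 2), "wholesaler": (0.35, 8)}

random.seed(1)
for name, (d_bar, t_bar) in sources.items():
    # Sample D and T to reflect their randomness; these distributions are assumptions.
    samples = [(random.gauss(d_bar, 0.02), random.expovariate(1 / t_bar)) for _ in range(10_000)]
    eu = sum(u(d, t) for d, t in samples) / len(samples)
    print(f"{name}: expected utility {eu:.3f}")
```

Changing K1 and K2 to weight time more heavily flips the ranking toward the publisher, which is how rush vs. non-rush orders can yield different guidelines from the same machinery.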
The development of these guidelines required extensive data collection and, more importantly, 6-8 sessions with the decision maker to structure the problem, discuss modeling approaches, assess return vs. risk tradeoffs, and interpret the results. Of particular importance was time spent understanding why the guidelines were, in some cases, counterintuitive. We had to "peel back the onion" and determine which of his stated preferences were driving these results. Interestingly, this led the decision maker to readjust his intuitions, as he did not want to alter his stated preferences.
Allocating Resources Across Processes

As noted earlier, the overarching question of interest is not simply one of how many staff people to assign to particular tasks. We really are interested in how the allocation of all resources affects the overall performance of the enterprise, in this case the overall library. Of course, we could also be interested in how the library's performance affects the university's performance, how the university affects society, and so on. Until later chapters, however, we will focus on just the library.

Put simply, the question of interest at this point concerns where an incremental dollar should be allocated. For instance, should the incremental investment go to increased staffing of service desks, or should it go to increased acquisitions? We pursued this question in our continued work with the Tufts University Libraries.

It quickly became apparent that the allocation of resources across processes depended on the allocations within processes. In other words, the value of investing an incremental dollar in a particular process depends on the projected impact of that investment. This insight led to the formulation of the hierarchical allocation process shown in Figure 3 (Rouse, 1975). The basic idea is to first consider the impact of allocating a range of resources to each process. The within-process allocation that optimizes process performance is determined independently of all the other processes. A typical result is shown in Figure 4. The allocation methodology only requires a tabulation of investment vs. return, that is, resources vs. performance, rather than any particular function such as shown in Figure 4. Nevertheless, the nature of the diminishing returns shown in Figure 4 is the reason why one does not allocate all resources to one process.

In the context of Figure 2, we found that most library processes could be characterized as either waiting or balking processes, with performance
measures of W and P, respectively. We used queueing theory models to predict how these measures would be affected by increased resources. Multi-attribute utility models were used to represent tradeoffs across processes, for example, the relative importance of a patron waiting vs. a book waiting. Assuming utility models that are separable in either an additive or multiplicative sense, dynamic programming can be used to optimally allocate resources, both within and across processes. More specifically, optimal allocations within processes were first determined, typically yielding results such as shown in Figure 4. The resulting set of process returns was then used to allocate across processes. Finally, we suggested how one could formulate a decision model for trading off the overall library budget vs. other services at the university, city, or company (Rouse, 1975).
Figure 3. Hierarchical Resource Allocation
Figure 4. Typical Performance Impact of Resource Allocation
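The across-process step can be sketched as a small dynamic program over tabulated investment-vs-return curves like those of Figure 4. The return tables below are hypothetical; in a real application they would be generated by the queueing and utility models of each process.

```python
def allocate(returns, budget):
    """Allocate an integer budget across processes to maximize total return.
    returns[i][x] = return from giving x units to process i (a tabulated,
    diminishing-returns curve as in Figure 4). Classic dynamic program."""
    best = [0.0] * (budget + 1)              # best[b]: max return using b units
    choice = []                              # per-process decision tables
    for table in returns:
        new_best = [0.0] * (budget + 1)
        pick = [0] * (budget + 1)
        for b in range(budget + 1):
            for x in range(min(b, len(table) - 1) + 1):
                val = best[b - x] + table[x]
                if val > new_best[b]:
                    new_best[b], pick[b] = val, x
        best = new_best
        choice.append(pick)
    alloc, b = [], budget                    # trace back the optimal allocations
    for pick in reversed(choice):
        alloc.append(pick[b])
        b -= pick[b]
    return best[budget], list(reversed(alloc))

# Hypothetical diminishing-returns tables for three processes (0-4 units each).
tables = [[0, 5, 8, 10, 11], [0, 3, 6, 8, 9], [0, 6, 9, 11, 12]]
total, alloc = allocate(tables, budget=6)
print(total, alloc)   # optimal split of 6 units across the processes
```

Because each table exhibits diminishing returns, the optimum spreads the budget rather than concentrating it in one process -- the point made above about Figure 4.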
This research caused us to consider the distributed nature of library services and how overall performance was a composite of the performance of the many processes in a library. It is a small leap to think of these processes occurring in different libraries. Soon, we were immersed in library networks.
NETWORKS

In the mid-1970s, we moved to the University of Illinois at Urbana-Champaign and continued our research on libraries. Later in this chapter, in the section on Forecasting Demands, I discuss a project focused on developing a method for predicting library circulation for the Engineering Library at the University of Illinois. Obviously, the allocation of resources across library services should depend on the relative demand for those services.
The expansion of this line of research to consideration of networks of libraries is more germane at this point, however. Our publications on library operations research -- cited earlier -- led to inclusion in an informal network of researchers with similar interests. In this way, we met Hugh Vrooman, a senior staff member of the Illinois State Library. His interest and enthusiasm led us to develop a proposal for modeling the flow of information requests within the Illinois Library and Information Network (ILLINET).

We met with the Illinois State Librarian, Alphonse Trezza, to discuss our initial proposal, with a grand budget of $50,000. This was my first research proposal, my first opportunity to obtain external funds to sponsor my university research. Securing external funds was -- and still is -- a key factor in assistant professors becoming associate professors and later full professors.

Mr. Trezza liked my proposal to develop a model of ILLINET and then embed this model in a computer-based tool. This "value proposition," by the way, was one of the things I learned at Raytheon. Your engineering work should not only make you smarter; it should enable other people to take advantage of what you learn. Tools embody knowledge and can be the means for other people to leverage what you learn.

Mr. Trezza said he liked what I proposed. He would provide $25,000. "Take it or leave it," he said. I had tried to provide the lowest budget possible to assure that money would not be an issue in getting approval. He halved it. I grinned and accepted it. Over the next five years, we received considerably more funding, but we always operated on a shoestring, always trying to stay innovative, as well as relevant and responsive. This section outlines these efforts.

The problem of routing information requests in library networks is captured by Figure 5. Requests for information, documents, and other media can enter the network at any library and be serviced by any library. For ILLINET, the network is hierarchical, with many individual libraries within each of many library systems. The overarching question is how requests should be routed to maximize P, while minimizing W and C (Rouse, 1976, 1977; Rouse & Rouse, 1980, Chapters 1 & 2).

Not surprisingly, the best routes depend on the performance characteristics (P, W, and C) of the libraries within the network. However, as you would expect with queueing processes, a library's performance depends on the level of demand on the library. Thus, performance affects the choice of the best routes, and the routes chosen affect performance.
Figure 5. Overall Library Network Model
It is important to note that the routing policies adopted define the network. While memoranda of agreement can establish relationships among libraries, routing policies determine the actual flows among libraries. The choice of routing policies, in effect, designs the network. Routing policies, in the context of our model, reflect library performance, lending policies, reimbursement policies, and so on.

The mathematical model developed to reflect the representation in Figure 5 was, fortunately, amenable to analytical solution. In other words, given an overall routing policy, we were able to deductively calculate P, W, and C (Rouse, 1976; Rouse & Rouse, 1978; Rouse & Rouse, 1980, Chapters 4 & 5). This required that we also model internal library operations, as shown in Figure 6. The performance of each library was a function of the characteristics of each of the nodes in Figure 6, as well as the demand on each node. As discussed below, this characterization also enabled consideration of alternative means for enhancing the performance of individual libraries. In general, we were able to use the library network model to address a range of questions concerning the value of investments in the network.
Figure 6. Network Model of Individual Libraries
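As a simplified illustration of what such a routing calculation computes -- a stand-in for, not a reproduction of, the analytical model of the cited papers -- consider a fixed referral route in which each library satisfies a request with some probability after a processing delay and cost. All values are hypothetical.

```python
def route_performance(route):
    """Evaluate a fixed referral route. Each stop is (p, t, c):
      p = probability this library satisfies the request,
      t = days the request spends at this stop,
      c = processing cost at this stop.
    Requests that fail at a stop continue to the next library in the route."""
    P, EW, EC = 0.0, 0.0, 0.0
    reach = 1.0                         # probability the request reaches this stop
    for p, t, c in route:
        EW += reach * t                 # time and cost accrue whenever a stop is visited
        EC += reach * c
        P += reach * p                  # satisfied here, having failed everywhere earlier
        reach *= 1 - p
    return P, EW, EC

# Hypothetical libraries; reordering them changes W and C even when P is unchanged.
libs = [(0.6, 3, 1.00), (0.4, 5, 0.80), (0.5, 7, 1.50)]
for route in (libs, list(reversed(libs))):
    P, EW, EC = route_performance(route)
    print(f"P = {P:.2f}, E[W] = {EW:.1f} days, E[C] = ${EC:.2f}")
```

Even this toy version shows why slow or costly resource libraries belong at the end of routes, a point that recurs in the ILLINET case study below.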
Impact of Technology

A central issue in the late 1970s and early 1980s was the potential benefit of library automation. Online databases of library holdings and status (i.e., item available or checked out) were being developed. For instance, OCLC (Ohio College Library Center) -- now Online Computer Library Center -- was an innovative provider of access to their online catalog of the holdings of libraries across the U.S. The costs of such services were becoming clearer, but the benefits were still mainly limited to intuitions that such systems were valuable.

The library network model provided a good means to assess the benefits of online location and availability information. It was reasonable to expect that knowledge of the location and availability of requested items could increase P and decrease W. The unknown was the impact of this
information on C. We applied the library network model to address this issue (Rouse & Rouse, 1977).

Location and availability information affect where requests are routed and what happens to them along the route. Put simply, you would not route a request to a library whose holdings do not include the requested item. If a library owns the requested item, but it is unavailable, you may or may not include this library in the route, depending on when the item is due to be returned, which may itself be uncertain. The impact of such information affects the routes chosen in Figure 5 and the transition probabilities in Figure 6, both of which impact overall waiting and processing times.

The mathematical models underlying the library network model were extended to enable representation of several alternative online information capabilities. The associated software was also extended, thereby increasing the flexibility of this modeling tool (Rouse & Rouse, 1977; Rouse & Rouse, 1980, Chapter 6). Seven alternative capabilities were evaluated using the model parameters for ILLINET. These analyses yielded values of C ranging from $1.13 to $3.11 per request, with W ranging from 5.5 days to 10.3 days, all with the same value of P. Somewhat simplistically, the difference between the two values of C, that is, $1.98, is the maximum one should be willing to pay for location and availability information. Among the seven alternatives, there are various nuances that impact this number.

At the time, OCLC had announced a price of $0.42 per request, which would seem to be attractive. However, the overall value also depended on the online availability information, which depended on the nature of the circulation systems at each resource library. We revisit these conclusions in the discussion of the case studies.
Data Collection

The case studies involved collection of large amounts of data on network transactions for the purpose of identifying appropriate structural assumptions for particular networks and estimating the parameters in the library network model. A central and practical question concerns how much data should be collected to enable statistical confidence in the predictions of the library network model.

The predictions of interest are P, W, and C. The model predicts expected values, denoted by E(P), E(W), and E(C). Our confidence in these predictions depends on the variances as well as the expected values,
denoted by σ²P, σ²W, and σ²C. The statistic of interest is the coefficient of variation, σX/E(X). A typical goal is to assure that the coefficient of variation is less than 0.05. The question, therefore, is how much data is needed to estimate the model parameters with sufficient accuracy to assure that the coefficient of variation is less than 0.05, or whatever level of confidence is desired.

To put this question in perspective, it is important to know the number of model parameters to be estimated. This number is given by the following equation:
No. Parameters = (I + 1) J K + I M (2 + J L M)

where

I = No. of resource nodes
J = No. of request classes
K = No. of requesting nodes
L = No. of request referrals (length of routes)
M = No. of processing nodes at resource nodes
N = No. of transactions for which data are collected

For the types of applications we have pursued, it is not unusual to have 10-20,000 parameters in the overall model. Thus, data collection is an important issue. Put simply, given I, J, K, L, and M, the question is what value of N is needed to meet the coefficient of variation criterion?

Derivation of the relationship between N and the coefficients of variation of P, W, and C involved many pages and 40 equations, including consideration of how uncertainty propagates through matrix inverses and other mathematical operations (Rouse & Rouse, 1978; Rouse & Rouse, 1980, Chapter 7). Indeed, there were many intervening mathematical transformations that were too tedious to include in the referenced publications.

To illustrate the use of the resulting estimation method, consider the ILLINET case study discussed below. Data for 11,580 requests (16,609 transactions) were collected. These data were classified using the 38 ranges of the Dewey classification system. Multiplying by 2 for the categories of periodical and non-periodical resulted in J = 76. However, to assure that the three coefficients of variation were less than 0.05, we could only employ J = 23.
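Plugging illustrative sizes into this equation shows how quickly the parameter count grows, and why the number of request classes J had to be limited. The values below are assumptions for illustration, not the study's exact configuration.

```python
def n_parameters(I, J, K, L, M):
    """Parameter count from the equation above: (I + 1)JK + IM(2 + JLM)."""
    return (I + 1) * J * K + I * M * (2 + J * L * M)

# Hypothetical network sizes; only J is varied, echoing the J = 76 vs. J = 23 tradeoff.
for J in (23, 76):
    print(f"J = {J}: {n_parameters(I=8, J=J, K=2, L=4, M=3):,} parameters")
```

With these assumed sizes, tripling J roughly triples the parameter count -- and every parameter must be estimated from a finite sample of transactions, which is what drives the data collection requirements discussed next.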
Given this result, we considered collecting more data. To allow J = 76 would have required 8 more weeks of data collection and $10,000 for data collection alone. Consequently, ILLINET managers decided to help us find the most important 23 request categories, and we proceeded in that manner.

The approach outlined here enabled decision makers to be confident in the model predictions, in the sense that the sample size was sufficient to assure that statistical variations of the predictions were modest. This is very important in situations where models cannot be empirically validated in the usual scientific sense. In particular, we could not operate a 1,100-library network in different ways, measure the results, and assure that predicted performance actually happened. Large-scale systems seldom allow such an approach.
Case Studies

Working with William DeJohn of the Illinois State Library, we undertook several case studies (Rouse, Rouse & Slate, 1978; Rouse & Rouse, 1980, Chapters 6, 8 & 9). One focused on the Rolling Prairie Library System (RPL) in western Illinois, another on the Suburban Library System (SLS) outside of Chicago, and the third on the overall ILLINET system. All three involved extensive interviews and data collection to fit the queueing network model to the particular situation.

Rolling Prairie was the first application of the library network model. RPL includes 86 libraries. Data were collected for 3,366 requests over a 12-week period. These data reflected 6,777 transactions due to referrals. We used these data to estimate the parameters of the model. After making structural assumptions and parameter estimates, the model was used to predict P, W, and C for RPL.

We found that the model predictions for W were low -- 13 days predicted vs. 19 days actual. Exploration of this difference led us to discover a policy that had not been revealed earlier. RPL was not automatically routing requests throughout ILLINET. After each failure, the request returned to RPL for reprocessing, rather than being automatically sent to the next library in the route. This routing policy was inflating W by 48%.

This is a good example of one of the benefits of modeling. The misfit between model and data enabled identification of a policy deficiency not previously recognized. Succinctly, the modeling process required structural assumptions that turned out to be erroneous due to this routing
policy. The model also provided a projection of the benefit of remediating this deficiency.

The Suburban Library System includes 102 libraries. Data were collected for 6,567 requests (7,805 transactions) over a 4-week period. Note that SLS had roughly 10 times the processing load of RPL in terms of requests and/or transactions per week. RPL and SLS illustrate the range of demand across the regional networks within ILLINET.

Based on the RPL experience, we paid very careful attention to the nuances of request routing policies. We discovered that SLS would hold requests, either by placing a reserve on the desired item once it returned from use by another patron, or by issuing a purchase order for the item and waiting for its delivery. Evaluation of a routing policy that did not include these elements reduced W by 29% and processing load by 8%, while increasing the load on the broader network by 16%. As well intentioned as the reserve policy might have seemed, it was undermining overall performance.

The case study of ILLINET addressed the whole state of Illinois. ILLINET includes 18 systems like RPL and SLS. Data were collected for 11,580 requests (16,609 transactions) over a 4-week period. This processing load may seem low given that ILLINET includes 18 subnetworks, each with substantial processing loads. However, this simply reflects the fact that a large portion of the overall load is satisfied within the subnetworks and never makes it to the state-level network.

Once the model was fit to ILLINET via specification of structural relationships and estimation of model parameters, it was used to explore new routing policies. We identified a new policy that satisfied 11% more requests without significantly affecting W and C. Succinctly, this policy employed routes that were tailored to the performance of the major resource libraries within ILLINET. Resource libraries that detracted from P, W, and/or C were moved to the end of routes and used as last resorts.

I can recall presenting these results to Kay Gesterfield, who had succeeded Alphonse Trezza as State Librarian. I was pleased that a straightforward policy change could have such a positive impact. However, this change would impact the reimbursement monies received by the resource libraries. In particular, the University of Illinois at Urbana-Champaign would lose significant funds because of being moved to the end of routes.

Rather than impose this impact on the university, Ms. Gesterfield used this result to motivate the library director, Hugh Atkinson, to reconsider his position on other issues of importance to the State. He agreed, and the
potential policy change was shelved. Ms. Gesterfield told me that she was impressed with how the model had helped her. This was an important lesson for me. The decision maker had not followed the advice yielded by the model, but still valued the model highly. I learned from this that analytical results can be an important element of decision making, but they are often not the only element. Models provide information. They answer questions. Models do not make decisions.

Earlier, I discussed the use of the library network model to consider the impacts of information technology. OCLC was, at the time of this study, an innovator in providing access to their online catalog of the holdings of libraries across the U.S. This service could be used to find the location of items and, in some cases, availability information. We used the library network model to determine the value of this information to ILLINET. Our analysis led us to recommend that location and/or availability information be purchased if its cost was less than $1.98 per request.

These case studies provided a rich set of experiences in immersing ourselves in understanding service operations, identifying processes and policies, and estimating the characteristics of the elements of these processes. We used predictions of current network performance as the starting point for exploration of new routing policies. As indicated in the discussion of each case study, we refined structural assumptions and parameter estimates until we could accurately predict current performance. Then, we explored alternative routing policies to improve P, W, and/or C.

Libraries as Networks
The research discussed in this chapter evolved from studies of operations within libraries to networks across libraries. In the late 1970s, working with architect Jim Smith, we looked at networks within libraries (Smith & Rouse, 1979). To some extent, this research represented an extension of the earlier work on resource allocation (Rouse, 1975) where the resource of primary interest was floor space. We applied our library network modeling method to representing floor plans of libraries where the flow of people was the focus rather than the flow of requests. Using data from the recently opened Champaign Public Library in Champaign, Illinois, we developed alternative floor plans with varying levels of centralization, including variations with one or two
floors. We found that alternative floor plans differentially affected congestion in the library. The most efficient plans reduced W by ten minutes. Reducing W results in the library having greater capacity, as well as easing ingress and egress, the latter being important in emergencies. On the other hand, many people who visit libraries are not trying to minimize their time there. Consequently, for many library patrons, the reduced W may primarily result in the facility feeling less crowded. As appealing as this may be, our models were not really intended to address such social issues.
Summary

The research discussed in this section reflects a concerted effort to support management decision making in the operation of a pervasive class of public sector service systems. Our approach to model-based policy analysis had significant impacts on the operations of numerous libraries and several networks (Rouse & Rouse, 1979c, 1979d; Rouse & Rouse, 1980). At the same time, we learned the full extent of what is required for success in terms of collaboration among stakeholders, data collection needs, and ease of use of tools. We also learned how decision makers employ the results of model-based analyses, sometimes implementing these results and other times using the results to accomplish other ends.
FORECASTING DEMANDS

The value of investing in improving service systems such as libraries and library networks strongly depends on the demands for services from these enterprises, especially future demands, as these will be the means of gaining returns on the investments. We first considered this issue of forecasting demands for the Engineering Library at the University of Illinois at Urbana-Champaign.

Building again on Philip Morse's work (1968), we differentiated circulation from demand. Circulation is satisfied demand, but not all demand is satisfied. Assuming a simple queueing process, average demand per item can be estimated from average circulation and average time until return per circulation.
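One way to formalize this estimate -- an assumption in the spirit of Morse's treatment, not necessarily his exact formulation -- is to treat each item as a single-server loss system: demand arriving while the item is checked out goes unsatisfied.

```python
def demand_rate(circulation, loan_time):
    """Estimate the latent demand rate lam from observed circulation R,
    assuming each item is a single-server loss system: while checked out
    (mean loan_time), arriving demand is unsatisfied. Then
    R = lam / (1 + lam * loan_time), and inverting gives
    lam = R / (1 - R * loan_time). Units must match
    (e.g., circulations/year and years per loan)."""
    assert circulation * loan_time < 1, "item would be busy more than 100% of the time"
    return circulation / (1 - circulation * loan_time)

# A book circulating 5 times/year with 4-week (4/52-year) loans:
print(f"{demand_rate(5, 4/52):.1f} requests/year")   # ~8.1, so ~38% of demand unsatisfied
```

The gap between demand and circulation widens sharply as an item's utilization grows, which is why circulation alone understates the pressure on popular materials.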
Demand was assumed to increase linearly with user population and decrease exponentially with item age, at a rate that could vary with resource class. We assumed one circulation policy per resource class, although we allowed for a different policy for new acquisitions to the collection. With these assumptions, we could derive a model for predicting the future circulation of a library (Rouse, 1974b).

The application of this model at the University of Illinois first addressed the question of user population. It might seem that the number of students enrolled in engineering courses would be the population. However, library use is far from uniform across years of undergraduate and graduate study. Therefore, we used weighted instructional units as our population measure. The university's standard formula was used, where Ph.D. units are given more weight than M.S. units, which are given more weight than upper-division undergraduates, which are given more weight than lower-division undergraduates.

We used available data for nine years for weighted instructional units, library circulation, and library collection size. The rates of exponential decrease were estimated by fitting the model to the data. The model was tested by providing it only the starting circulation and the histories of instructional units and acquisitions. The model was able to predict the next eight years of circulation within 2%.

The model was then used to evaluate alternative acquisition policies:

Increment the collection by a constant number of books per year

Increment by a constant number of dollars (which results in decreasing numbers of books acquired due to inflation)

Stop incrementing the collection

As might be expected, circulation plummets quickly with no additions to the collection. Somewhat more subtly, even a modest inflation rate undermines circulation rather significantly in just a few years. The bottom line is that the dynamics of library circulation require significant ongoing investment.

We also studied the nature of demands within ILLINET, a very different environment than an engineering library (Rouse & Rouse, 1979b). We compiled 10,485 requests for monographs at two levels of the network -- the regional library system level (e.g., RPL and SLS) and the
overall ILLINET level. As expected, we found that demand decreased exponentially with age. We found that the half-life for regional demand (10 years) was significantly lower than the half-life for statewide demand (16 years). It should be noted that the raw data had to be adjusted for the size of the published literature in each year. One of the reasons that younger monographs are in higher demand (as a class, not individually) is that there are more younger monographs than older monographs. This adjustment tends to increase the half-life statistics for these data by roughly 50%.

Another effort focused on forecasting demands for ILLINET services (Kang & Rouse, 1980). Jong Kang compared Box-Jenkins time series models, adaptive filtering, and linear regression. Fading-memory linear regression (with memory parameter = 0.85) provided the best fit, especially for forecasting 1-2 years into the future. This approach was adopted by ILLINET for monthly forecasts and handled the seasonality of demand quite well.

As we monitored use of this forecasting method by the Illinois State Library, we were alarmed to find that the model often produced negative forecasts for one or more regional library systems. Obviously, negative demand does not make sense, and we asked whether such projections caused users to distrust the model and its predictions. The response was surprising. We were told that spurious forecasts had useful diagnostic value. Whenever a forecast was such that it could not possibly be true -- for instance, projected negative demand -- they would look at the input data more carefully. They often found data entry errors, for example, 10 entered as 10,000. In some cases, the problem could not be understood without contacting the regional library system involved. In many of these cases, they found changes of procedures, definitions, and so on that had not been communicated. Thus, the forecasting method was providing the means for model-based monitoring of network operations.

The work with the Illinois State Library resulted in several models and associated software tools. Our last effort in this overall initiative addressed how we could bring all of the resulting concepts, principles, methods, and tools together for ongoing support of ILLINET (Rouse & Rouse, 1979a). This led to the conceptual design and demonstration of a Management Information System for collecting, processing, and utilizing library network data, both to maintain the currency of the library network model and to support overall network operations.
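A minimal sketch of fading-memory regression is shown below, using the 0.85 memory parameter cited in the text. The trend-only form and the data are assumptions -- the ILLINET version also handled monthly seasonality -- but the weighting scheme is the essential idea: observation age k receives weight 0.85^k, so recent months dominate the fit.

```python
import numpy as np

def fading_memory_forecast(y, theta=0.85, horizon=12):
    """Fit a linear trend by weighted least squares, weighting an observation
    of age k by theta**k (theta = 0.85 per the text), and extrapolate it
    for `horizon` future periods."""
    n = len(y)
    t = np.arange(n)
    w = theta ** (n - 1 - t)                 # newest observation gets weight 1
    X = np.column_stack([np.ones(n), t])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    t_future = np.arange(n, n + horizon)
    return beta[0] + beta[1] * t_future

# Hypothetical monthly request counts with a mild upward trend plus noise.
rng = np.random.default_rng(0)
history = 100 + 0.8 * np.arange(36) + rng.normal(0, 5, 36)
print(fading_memory_forecast(history, horizon=3).round(1))
```

Because the short memory lets a recent downturn dominate, a fitted trend can extrapolate below zero -- exactly the impossible-forecast signal that, as described above, turned out to have diagnostic value.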
CONCLUSIONS

This chapter has focused on service systems in the context of library services. In the many years since this research, the nature of library services has changed dramatically. Online access to information and powerful search engines have changed how we think about, access, and utilize information. We can expect continued substantial change in the pervasiveness and economics of information (Battelle, 2005). We return to this discussion in Chapter 7.

How does the operation of libraries and networks of libraries relate to human-centered design, the overarching theme of this book? This chapter is the most likely, of all 12 chapters in this book, to be perceived as an outlier -- more about business processes and less about people and organizations. On the surface, this perception is reasonable.

However, below the surface, the experiences related in this chapter were instrumental in the eventual emergence of the concept of human-centered design presented in Chapter 1. Libraries exist to serve customers. People staff libraries to provide either customer-facing or back office service. Universities, cities, and states, among others, invest in libraries. These multiple stakeholders have differing interests among P, W, and C. Customers want maximum P, minimum W, and minimum C. Staff members want acceptable P and W, and C sufficiently high to provide reasonable incomes. Investors want the best P and W possible for a given C. Investors may also be willing to invest capital to gain significant improvements of P, W, and/or C.

We initially approached this research from a customer service perspective. We soon discovered, however, that our work in university libraries (MIT, Tufts, and Illinois) and public libraries across the State of Illinois would have considerably more impact if we included consideration of other stakeholders' interests. Beyond pleasing library patrons, we needed to gain the support of library personnel and those providing library budgets. We needed to assure that the concerns, values, and perceptions of all stakeholders in these efforts were considered and balanced.

To this end, this research was pursued in the finest tradition of operations research, pioneered by Philip Morse and others. Thus, we needed to immerse ourselves in the service operations of interest. This enabled formulating realistic assumptions for the many mathematical models discussed in this chapter. Beyond "solving" the models -- which would have been sufficient for publishing the journal articles cited -- we
needed to think deeply about sources of data, amounts of data, and how these data should be processed. The key was not to limit ourselves to just enough data to validate our models. We also wanted to support the managers and investors who endorsed and supported our studies. We wanted them to use our results as inputs to decisions that were important to them. Consequently, data collection received much attention and considerable effort.

We also realized that without user-friendly methods and tools, our work would end when we left. Therefore, we needed to "package" the knowledge we gained into usable forms for managers of libraries and networks. Overall, our research had to be viable and acceptable, as well as academically valid. Thus, we had to solve management problems in ways that were technically correct, fit into preferred ways of doing things, and were worth it from a cost-benefit perspective.

This chapter, therefore, presents a case study of human-centered design, focused on stakeholders, their interests, and finding the "sweet spot" that delights primary stakeholders and earns the support of secondary stakeholders. An overarching conclusion is that service systems, in essence, concern people and organizations. Their success depends on addressing these concerns well.

Another primary theme of this book is the role of serendipity. As discussed in Chapter 1, the research summarized in this chapter was the primary source of the rich stream of research summarized in Chapter 5. Our studies of processes and networks involved sketching many diagrams such as those shown in Figures 5 and 6. One day when studying such diagrams, I wondered how one would know whether a node was functioning correctly or not. Almost immediately, this led to thinking about network diagnosis. We had encountered serendipity.
REFERENCES

Battelle, J. (2005). Search: How Google and its rivals rewrote the rules of business and transformed our culture. London, UK: Penguin Portfolio.

Burke, C. (1996). A rough road to the information highway. Project INTREX: A view from the CLR Archives. Information Processing and Management, 32(1), 19-32.

Bush, V. (1945). As we may think. Atlantic, 176(1), 101-108.
Kang, J.H., & Rouse, W.B. (1980). Approaches to forecasting demands for library network services. Journal of the American Society for Information Science, 31(4), 256-263.

Keeney, R.L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value tradeoffs. Cambridge, UK: Cambridge University Press.

Licklider, J.C.R. (1965). Libraries of the future. Cambridge, MA: MIT Press.

Morse, P.M. (1968). Library effectiveness: A systems approach. Cambridge, MA: MIT Press.

Rouse, S.H., & Rouse, W.B. (1979a). Design of a model-based online management information system for interlibrary loan networks. Information Processing and Management, 15(2), 109-122.

Rouse, S.H., & Rouse, W.B. (1979b). Analysis of monograph obsolescence at two levels of an interlibrary loan network. Information Processing and Management, 15(5), 219-225.

Rouse, W.B. (1974a). Optimal selection of acquisition sources. Journal of the American Society for Information Science, 25(4), 227-231.

Rouse, W.B. (1974b). Circulation dynamics: A planning model. Journal of the American Society for Information Science, 25(6), 358-363.

Rouse, W.B. (1975). Optimal resource allocation in library systems. Journal of the American Society for Information Science, 26(3), 157-165. (Reprinted in D.W. King, Ed., 1978, Key papers in the design and evaluation of information systems. White Plains, NY: Knowledge Industry Publications Inc.)

Rouse, W.B. (1976). A library network model. Journal of the American Society for Information Science, 27(2), 88-99.

Rouse, W.B. (1977). Performance criteria for library networks: Theoretical basis and realistic perspectives. In A. Kent & T. Galvin (Eds.), Library resource sharing (Chap. 21). New York: Marcel Dekker.

Rouse, W.B. (1979). Mathematical modeling of library systems: A tutorial. Journal of the American Society for Information Science, 30(4), 181-192.
Rouse, W.B. (2006). Models, contexts, and value streams for services sciences. Atlanta, GA: Tennenbaum Institute, Georgia Institute of Technology.

Rouse, W.B., & Rouse, S.H. (1973). Use of a librarian/consultant team to study library operations. College and Research Libraries, 34(5), 242-248.

Rouse, W.B., & Rouse, S.H. (1977). Assessing the impact of computer technology on the performance of interlibrary loan networks. Journal of the American Society for Information Science, 28(2), 79-88.

Rouse, W.B., & Rouse, S.H. (1978). The effect of parameter uncertainties on the predictions of a library network model. Journal of the American Society for Information Science, 29(4), 180-186.

Rouse, W.B., Rouse, S.H., & Slate, M.P. (1978). Applications of a library network model: Two case studies within the Illinois Library and Information Network. Illinois Libraries, 60(5), 454-458.

Rouse, W.B., & Rouse, S.H. (1979c). Analysis of library networks. Collection Management, 3(2), 139-149.

Rouse, W.B., & Rouse, S.H. (1979d). A model-based approach to policy analyses in library networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9(9), 486-494.

Rouse, W.B., & Rouse, S.H. (1980). Management of library networks: Policy analysis, implementation, and control. New York: Wiley.

Smith, J.M., & Rouse, W.B. (1979). Application of queueing network models to optimization of resource allocation within libraries. Journal of the American Society for Information Science, 30(5), 250-263.
Chapter 4
TASKS, ERRORS, AND AUTOMATION
INTRODUCTION

Chapter 2 focused on how people estimate the current or future state of a system. Chapter 3 addressed the problem of allocating capacity to perform service tasks. This chapter considers how people choose which tasks to do and how computers can help them. The essential phenomenon of interest is control. Control includes deciding what to do and doing it well -- or not. Control also involves dealing with errors, including missed events, false alarms, and inadequate performance in general. In this chapter, control also involves the use of automation to support task performance.

Broadly, I have long been interested in the problem of deciding what to do, along the lines articulated by Herbert Simon (1977). Much of the long tradition of research in human behavior and performance has focused on how well people perform on tasks where their goal is to meet task requirements. The question is whether their perceptual, information processing, and neuromotor systems are capable of performing these tasks. Despite the fundamental importance of such research, I have been attracted to task situations where people have discretion. They can choose what to do and set their own priorities among tasks. The research question is how well we can predict their choices, which raises the deeper question of why and how they make such choices. Given this understanding, the other question of interest is how we can best help them perform better.
MULTI-TASK DECISION MAKING

At the University of Illinois, I had a joint appointment in the Department of Mechanical and Industrial Engineering and the Coordinated Science
Laboratory (CSL). I worked most closely with the Artificial Intelligence (AI) faculty in CSL, most of whom had joint appointments in Electrical Engineering and Computer Science. The AI folks were trying to understand problem domains, how people operated in those domains, and how to automate people's problem solving, decision making and, in some cases, execution.

My part of the puzzle included two pieces. On the surface, my research task was to determine how people might best interact with intelligent automation. Below the surface, however, I could not help but think about the longstanding question of what people should do and what machines should do. We wrestled with this question for many years, as this section will illustrate.

I quickly decided that I need not answer this question in general. Instead, I could limit consideration to complex environments where people have many high-consequence tasks and limited time to perform them. Examples include crews of aircraft, ships, process plants, and nuclear power plants. There was great potential to help such people with intelligent automation, especially in high-stress situations.

One could address the overall question of who should do what by compiling a list of all the tasks to do, determining whether human or computer would do each task best, and then allocating tasks to the best performer, with the proviso that the humans should not be overloaded. The difficulty with this approach is that the answer to the question of who does what best depends on factors beyond the task in question. I am pretty good at Sudoku, but not when I am parallel parking.
Adaptive Aiding

It struck me that it would be better if we did not have to answer this question in advance. We could answer it better if we addressed it at the moment the answer was needed - at the moment a task was demanding attention. I called this notion "adaptive aiding," in that the level of aiding would depend on the task at hand and other factors influencing human performance at that point in time.

My first presentation of this idea was at an Annual Conference on Manual Control at NASA Ames Research Center in 1975. The first presentation of a more formal model was at a NATO Conference on Supervisory Control in Berchtesgaden, Germany in 1976, chaired by Tom
Sheridan and Gunnar Johannsen. I formulated the problem of adaptive allocation of task responsibility as a multi-queue, multi-server process with a pre-emptive but non-competitive service discipline (Rouse, 1976). The two servers were human and computer. Soon after, I extended this formulation and conducted a wide range of sensitivity analyses focused on differences between human and computer task performance times and errors, including misses and false alarms (Rouse, 1977). These analyses showed clearly the conditions under which adaptive allocation is superior. However, these benefits could only be fully realized if the automation could determine what the human is doing and intending to do.
Detecting Attention Allocation

Our next study focused on how this might be done (Enstrom & Rouse, 1977). Ken Enstrom studied people performing two tasks. One task was a pursuit tracking task, with either first or second order dynamics. The other task involved monitoring for a randomly appearing mental arithmetic task (e.g., 23 × 9) in which subjects had to determine the answer and key it in. Some of the arithmetic tasks were real, denoted by X, and others were false alarms, denoted by K. The research question concerned the extent to which we could discriminate in real time between normal tracking and tracking with mental arithmetic.

We employed a fading memory linear identification algorithm and linear discriminant models to model tracking performance and detect changes in the pattern of tracking, respectively. We found that discrimination based on the identified model parameters worked quite well and easily in real time. We also found that the tracking error was the worst measure in terms of value in the discriminant model. Thus, how well subjects were doing in the tracking task was not a good predictor of their being distracted by the mental arithmetic task. Their input-output function changed systematically, but their performance did not because the tracking error depended on how demanding the tracking task was at the moment of the distraction.

These results supported the overall viability of the concept of adaptive aiding. If we can detect when people are overloaded, we can then provide aiding to reduce the overall load and maintain performance. It would be better yet if we could project when people will be overloaded and then aid
them so the overload does not occur. This possibility is discussed later in this section.
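In present-day terms, the detection scheme in Enstrom's study can be sketched as a recursive least squares identifier with a forgetting factor that tracks the subject's input-output parameters, plus a simple discriminant that flags systematic parameter shifts. The sketch below is a minimal illustration of that general idea in Python; the model order, forgetting factor, threshold, and all names are assumptions, not values from the original study.

import numpy as np

def rls_fading(u, y, order=2, lam=0.95):
    """Recursive least squares with forgetting factor lam.
    Identifies parameters of y[t] = theta . phi[t], where phi stacks
    recent inputs and outputs. Returns the parameter history."""
    theta = np.zeros(2 * order)
    P = np.eye(2 * order) * 100.0            # large initial covariance
    history = []
    for t in range(order, len(y)):
        phi = np.concatenate([u[t-order:t][::-1], y[t-order:t][::-1]])
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
        history.append(theta.copy())
    return np.array(history)

def detect_distraction(theta_hist, baseline, threshold=3.0):
    """Flag samples whose identified parameters deviate from the
    'normal tracking' baseline (per-parameter mean and std) by more
    than threshold; a crude stand-in for a linear discriminant."""
    mu, sigma = baseline
    d = np.abs((theta_hist - mu) / sigma).max(axis=1)
    return d > threshold

Note that, consistent with the findings above, the detector watches the identified input-output parameters rather than the tracking error itself.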
Queueing Model of Multi-Task Performance

First, however, we wanted to broaden the task environment. Rex Walden created a multi-task flight management environment where pilots had to fly along a map display while also performing a range of procedural tasks in response to subsystem events, for example, a warning light. This environment is shown in Figure 1. It should be noted that the details of the instruments are not shown here.

The subsystems are represented by the six indicators shown at the bottom of the screen. Indicators in these six instruments randomly moved back and forth in the top semicircle of the instrument. Upon a subsystem event, an indicator would slowly head to the bottom semicircle, still randomly moving back and forth in the process. To address an event, the pilots had to access and follow a hierarchical checklist that, once accessed, took the place of the six subsystem indicators.
Figure 1. Multi-Task Flight Management Environment
The goal was to model pilot performance in this task as a first step in developing model-based adaptive aiding. The model chosen was based on queueing theory (Walden & Rouse, 1978). We assumed Poisson arrivals of task demands, Erlang servicing of demands, a preemptive resume service policy, and a fixed number of classes of tasks. Preemptive resume enables higher priority tasks to preempt lower priority tasks, whose performance can be resumed once the higher priority tasks have been serviced.

An experimental study was conducted with varying map complexity and subsystem event arrival rates. All of the model parameters but one were determined by the experimental conditions. The one free parameter was the threshold for arrival of control tasks. This was necessary because of the continuous nature of the control task. The comparison of the model to the 36 experimental measures (2 maps x 3 arrival rates x 6 subsystems) was quite favorable.

At this point, we were ready to add a second server to the queueing model, namely, the computer aiding (Chu & Rouse, 1979). This immediately raises the question of the conditions under which the computer should help. Building on earlier queueing theory research, Yee-Yeen Chu determined that the optimal policy is to activate the second server (the computer) if

N = ∑k ck nk ≥ S

where nk = 0 indicates there is no event in process k, nk = 1 indicates there is an event, and ck is the relative priority of process k. The second server should be deactivated when N ≤ s. Simulating the flight management task described above, we determined that for the parameters of this task environment, S = 7 for low arrival rates and S = 3 for high arrival rates. For both arrival rates, we employed s = 0.

Note that we determined the parameters of the aiding policy based solely on simulation of Rex Walden's queueing model, elaborated by Yee-Yeen Chu to include an adaptive second server. We did not use human-in-the-loop experiments to determine these parameters. The ability to use models in this way is important to designing complex systems because it is impossible to experimentally evaluate all possible conditions with human-in-the-loop methods.

The experimental conditions under which the aiding policy was evaluated included three levels of control task (manual, autopilot, and
malfunctioning autopilot) and three levels of arrival rates for subsystem events (none, low, and high). Two classes of experimental results are of importance. First, not surprisingly, aiding improved overall flight management performance and subjects were quite positive about the nature of the aiding. The aiding helped manage the level of pilot workload in ways that they found valuable.

The second class of results concerns the comparison of predicted performance, via the queueing model, with actual performance. For this comparison, all the parameters of the model were either predetermined or empirically measured and no adjustments were made. Across 28 comparisons between model predictions and actual performance, no significant differences were found. We also found very high correlations, as high as 0.96, between the model's predictions of human workload and pilots' subjective assessments of workload.

Our overall conclusions were that adaptive aiding can work, both in terms of system performance and humans' reactions to the aiding. The multi-class, multi-server queueing model was found to be very useful, both for designing the adaptive aiding policy and predicting the performance and workload benefits of implementing this policy. These results provided strong evidence of the merits of adaptive aiding functionality being central to the intelligent interface concept discussed later in this chapter.
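Stated compactly, the aiding policy is a two-threshold (hysteresis) rule on the weighted event count N. The following Python sketch paraphrases the policy; the default thresholds reflect the low-arrival-rate values reported above, and all names are illustrative rather than from the original simulation code.

def update_server_state(active, events, priorities, S=7, s=0):
    """Hysteresis policy for the second server (the computer).
    events[k] is 1 if an event is in process k, else 0;
    priorities[k] is the relative priority c_k of process k.
    Activate aiding when N >= S; deactivate when N <= s."""
    N = sum(c * n for c, n in zip(priorities, events))
    if not active and N >= S:
        return True    # workload high: computer begins servicing tasks
    if active and N <= s:
        return False   # workload cleared: computer stands down
    return active      # otherwise keep the current state (hysteresis)

The gap between S and s is what prevents the computer from rapidly cycling on and off as workload fluctuates near a single threshold.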
Optimal Control Model of Multi-Task Performance

In parallel with Walden and Chu's research, we also pursued approaches drawn from optimal control theory (Govindaraj & Rouse, 1981). In particular, we sought a deeper understanding of how pilots trade off performance of continuous control tasks with intermittent discrete tasks. A flight management task environment was studied that included preview control of the aircraft trajectory on a moving map display, with data entry of waypoint information along the map.

Building on earlier research, T. Govindaraj derived an optimal preview control solution, with extensions to determine optimal control values over the full preview horizon. This was necessary to determine the best placement of discrete tasks. It might seem reasonable to simply set the control to zero during minimal activity periods and then perform the discrete tasks during those periods. However, this is insufficient because the control activities outside these zero periods have to be adjusted to compensate for these periods.
This compensation process was modeled by assuming that the weighting on control activities in the optimization criterion becomes effectively infinite within the targeted periods, reflecting the desire to not control during these periods. The optimal preview control is then recalculated and, as expected, there is no control activity during the targeted periods. In addition, control outside of these periods changes to compensate for the consequences of no activity in these periods.

A flight management experiment was conducted using three levels of arrival rates of discrete tasks (low, medium, high). The model was fit to each subject by adjusting the ratio of control weighting during discrete tasks to control weighting during other periods. Comparison of model predictions and actual performance across the three experimental conditions was favorable. Thus, we could predict when subjects would schedule discrete tasks and how they would control to compensate for these periods of minimal control.

In general, the optimal control model provided a better representation of control task performance and the queueing model provided a better representation of discrete task performance. Within each class of model, we had to heuristically handle the task for which the model was not intended. We incorporated these heuristics rigorously, rather than as ad hoc "add ons." The later work on intelligent interfaces relied much more on heuristics.
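The weighting idea can be illustrated with a much simpler stand-in for Govindaraj's preview control solution: a scalar finite-horizon LQR in which the control weight is made effectively infinite during a targeted window. The backward Riccati recursion then yields near-zero gains inside the window and inflated gains just before it, echoing the compensation effect described above. All dynamics and numbers here are illustrative assumptions, not the original formulation.

import numpy as np

def lqr_time_varying(a, b, q, r, T):
    """Finite-horizon scalar LQR with time-varying control weight r[t].
    Backward Riccati recursion; returns the feedback gains K[t]."""
    P = q                          # terminal state weight
    K = np.zeros(T)
    for t in reversed(range(T)):
        K[t] = (a * b * P) / (r[t] + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r[t] + b * b * P)
    return K

# Illustrative setup: in the discrete-task window (t = 40..59) the
# control weight is effectively infinite, so gains there drop to ~0;
# gains just before the window are inflated, compensating in advance
# for the coming period of no control.
a, b, q, T = 1.02, 0.1, 1.0, 100
r = np.ones(T)
r[40:60] = 1e9                     # "do not control here"
K = lqr_time_varying(a, b, q, r, T)
print(K[35:45].round(4))           # gains grow entering the window, then vanish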
Human-Computer Interaction in Dynamic Systems

In 1980, Tom Moran of Xerox Palo Alto Research Center asked me to contribute a paper on human-computer interaction in dynamic systems for a special issue of Computing Surveys that he was guest editing (Rouse, 1981). At that time, most studies of human-computer interaction addressed tasks such as word processing. Tasks involving dynamic systems such as aircraft and process plants are different in that the state of the system continues to evolve whether or not the human does anything.

Preparation of this survey paper provided an opportunity to bring together much of what has been discussed in this chapter thus far, as well as considerable research by others. The first part of this paper addressed the nature of systems control and human-computer interaction in control tasks, drawing on research from World War II to date. I next focused on allocation of tasks between humans and computers, again drawing on a rich history.
This review provided the background for elaboration of model-based dynamic aiding, drawing on a broad set of models of instrument scanning, manual control, multi-task coordination, failure detection, and failure diagnosis. I drew heavily from my just-published monograph on systems engineering models of human-machine interaction (Rouse, 1980). This was, and is, clearly a strong knowledge base upon which to build when designing model-based adaptive aiding.

This paper also addressed human-computer communication. The success of adaptive aiding depends on both human and computer knowing what each other is doing and, better yet, what each other intends to do. Overt communication minimizes uncertainty and errors, but creates workload in and of itself. Model-based covert communication - that is, inferring intentions from activities - imposes much less burden but inevitably is more prone to error. I discuss this issue in more depth in the section on intelligent interfaces.

This survey paper concluded with a lengthy design example. This example was less than compelling in that it was hypothetical rather than an actual case study. Nevertheless, it enabled illustrating the impressive knowledge base that one could bring to bear for designing adaptive aiding. Further, this example set the stage for the design of the intelligent interface for the Pilot's Associate in the mid 1980s.

Adaptive Aiding Revisited
The foregoing research in the late 1970s and early 1980s was mainly supported by NASA, with a bit of additional funding from the Joint Services Electronics Program and the Air Force Flight Dynamics Laboratory. In the early to mid 1980s, our attention shifted to detection and diagnosis of system failures, a serendipitous outgrowth of our network modeling research as discussed in Chapter 3. Initially, NASA supported this new thrust, but soon the Army came to be the primary sponsor.

In the mid 1980s, the Air Force Aerospace Medical Research Laboratory (AMRL) asked us to extend our adaptive aiding ideas to more elaborate aircraft piloting scenarios. Nancy Morris and I, working with Sharon Ward of AMRL, created an environment where a pilot performed a manual tracking task while also identifying targets in an aerial reconnaissance task (Morris, Rouse & Ward, 1988). The targets to be
identified were certain types of boats deemed to be important, typically representing 10% of the population of boats visible to the pilot. The computer aid could also perform the target identification task. It used simple template matching and could not take advantage of the organization of the visual scene, such as rivers and harbors. Consequently, the computer tended to be better than the pilot in wide open water areas with many targets, while the pilot tended to be better than the computer in areas with a significant portion of land and useful geographic features.

The first experiment varied terrain composition, tracking difficulty, and the availability of the boat spotting aid. As expected, we found that the pilot's spotting performance was worse over bays than channels. Interestingly, performance over channels was also poorer if pilots were just coming off a bay. More targets were spotted when the aid was available, even when it was not in use. Thus, there were some subtleties to human performance in this task environment.

The second experiment was similar to the first with the addition of three levels of aiding - none, manual use of aiding, and automated invocation of aiding. Results were similar to those for the first experiment, with the additional result that automated invocation of aiding over open water was superior to manual invocation, although most pilots preferred to invoke aiding themselves. Perhaps this preference explains why the availability of aiding, when not in use, also enhanced spotting performance.

These experiments, especially early "shake down" runs, also provided unexpected insights into leading indicators of the need for aiding. In the process of preparing a presentation of preliminary results for the 1984 Annual Conference on Manual Control at NASA Ames Research Center, we were reviewing our slides. The slides happened to be lying askew on the table when I picked a couple of them up. I held them up to the light to see which ones I had. It struck me that spotting latency seemed to degrade about 20 seconds before the emergence of misses and false alarms. This observation was due to the serendipitous misalignment of two slides.

The basis of this leading indicator was soon quite clear. As the aircraft enters a bay, there are more targets to classify and the task load increases. Due to this increased load, the response time (or latency) for any particular targets starts to increase. For this task environment, this increase begins roughly 20 seconds before the task load becomes so high as to preclude identifying all the targets correctly. The possibility of
leading indicators provides more opportunity for the human-computer communication necessary for adaptive aiding to work best.
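Such a leading indicator is easy to operationalize in principle: monitor a moving average of response latency and trigger an offer of aiding when it degrades relative to baseline. The sketch below is a minimal illustration in Python; the window, baseline, and ratio are assumptions rather than values from the study.

from collections import deque

class LatencyLeadingIndicator:
    """Flags rising response latency as a leading indicator of
    overload, before misses and false alarms appear. In the boat
    spotting task, the observed lead time was roughly 20 seconds;
    the parameters here are illustrative."""
    def __init__(self, window=5, baseline=2.0, ratio=1.5):
        self.recent = deque(maxlen=window)
        self.baseline = baseline   # normal spotting latency, seconds
        self.ratio = ratio         # degradation factor that triggers aiding

    def observe(self, latency):
        self.recent.append(latency)
        avg = sum(self.recent) / len(self.recent)
        return avg > self.ratio * self.baseline   # True => offer aiding now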
Framework for Design

As we were completing the latest set of studies of adaptive aiding, I decided to draw together everything we had learned from 1974 on (Rouse, 1988). Beyond reviewing all the research that had been done up to that point in time, I synthesized a design framework with three elements: design issues, principles of adaptation, and principles of interaction. The design issues are framed as the six design questions, with alternative answers indicated below each question:

What is Adapted To?
o Class of Users or Tasks
o Member of Class
o Member at Specific Point in Time
o Adapting to vs. Adapting of

Who Does the Adapting?
o Designer
o User
o System

When Does Adaptation Occur?
o Offline
o Online in Anticipation of Changes
o Online in Response to Changes

What Methods of Adaptation Apply?
o Transformation (making a task easier)
o Partitioning (performing part of a task)
o Allocation (performing all of a task)

How is Adaptation Done?
o Measurement
o Modeling

What is the Nature of Communication?
o Explicit
o Implicit
Two sets of principles were synthesized for deciding which answers to these six questions are appropriate. The first set includes principles of adaptation:

o The need for aiding can depend on the interaction of impending and recently completed task demands -- task allocation decisions should not be based solely on the demands of the task in question.

o The availability of aiding and who does the adapting can affect performance when the aid is not in use -- total system performance may be enhanced by keeping the user in charge of allocation decisions.

o When using measurements as a basis for adaptation, temporal patterns of user and system behavior can provide leading indicators of needs for aiding -- it may be possible to use secondary indices as proxy measures of the indices of primary concern.

o When using models as a basis for adaptation, the degree of task structure will dictate the accuracy with which inferences of activities, awareness, and intentions can be made -- tasks with substantial levels of user discretion may limit the potential of model-based adaptation.

o To the extent possible, incorporate within the aid models that allow predictions of the relative abilities of users and the aid to perform the task in particular situations -- substantial variations of relative abilities of users and aids provide the central impetus for adaptation.

The second set of principles focuses on human interaction with adaptive aiding:
o Users can perceive themselves as performing better than they actually do and may want an aid to be better than they are -- as a result, an aid may have to be much better than users in order to be accepted.

o Ensure that user-initiated adaptation is possible and appropriately supported, even if aid-initiated adaptation is the norm -- ensure that users feel they are in charge even if they have delegated authority to the aid.

o Provide means to avoid user confusion in reaction to aid-initiated adaptation and methods for the user to preempt adaptation -- make it very clear whether human or computer is supposed to perform a particular task at a specific time and provide means for changing this allocation.

o It appears that aid-initiated "off-loading" of the user and user-initiated recapturing of tasks is a viable means of avoiding "hot potato" trading of task responsibilities -- this asymmetry may help to ensure that users will feel in charge of the overall system.

o There is a trade-off between the predictive abilities (i.e., in terms of uncertainty reduction) of models of human performance and intent, and the way in which the explicit versus implicit communication issue is resolved -- the cost of explicit communication (e.g., workload and time required) should be compared with the cost of adaptation errors (i.e., misses and false alarms).

o The extent to which users can be appropriate agents of adaptation may depend on their models of the functioning of the aid and themselves -- adaptation of the user (e.g., via embedded training) is a viable approach for providing such models.

o A variety of specific human factors principles for design of complex information systems appear to apply to the design of the displays and controls associated with adaptive aiding.

This framework was further elaborated by Rob Andes and me as we applied it to designing adaptive aiding for a fighter pilot for beyond visual range combat scenarios (Andes & Rouse, 1992). Five expert subjects
addressed 42 scenario events and specified the desired nature of adaptive aiding using the above framework. We found that the different designers were quite consistent in their recommended design solutions using the adaptive aiding design framework.

Experiences with the above methodology led to the development of an overall architecture of adaptive aiding as shown in Figure 2. The task queue is filled by a combination of planned tasks, inferred tasks from operator's actions, and error remediation tasks (discussed in the next section). Aiding is selected from transformation, partitioning, and allocation (see above) based on the system model of state-dependent default choices and operator preferences. The type of aiding selected is communicated and a message and/or request for the operator are formulated. The aiding recommendation as well as the message and/or request are then sent to the operator interface manager. Note that operators' preferences - to perform a task or allocate it to automation - preempt much of this processing.
Figure 2. Architecture of Adaptive Aiding
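In code, the selection logic of Figure 2 might be sketched as follows; the predicates and names are hypothetical, intended only to show how operator preferences preempt the state-dependent defaults.

from enum import Enum, auto

class Aiding(Enum):
    TRANSFORMATION = auto()   # make the task easier
    PARTITIONING = auto()     # perform part of the task
    ALLOCATION = auto()       # perform all of the task

def select_aiding(task, preference, default_for):
    """Select aiding for a queued task. Operator preferences preempt
    the state-dependent default: 'perform' keeps the task with the
    operator, 'delegate' hands it entirely to automation; otherwise
    the system model's default choice applies."""
    if preference == "perform":
        return None                 # no aiding; operator does the task
    if preference == "delegate":
        return Aiding.ALLOCATION    # automation takes the whole task
    return default_for(task)        # state-dependent default (Figure 2)

# Example usage with a trivial default rule.
print(select_aiding("route replan", None, lambda t: Aiding.PARTITIONING))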
In 1994, I was asked to provide a keynote talk at a conference focused solely on adaptive aiding. I titled my talk, "Twenty Years of Adaptive Aiding: Origins of the Concept and Lessons Learned." This paper was subsequently published as a book chapter (Rouse, 1994). In this talk, I summarized much of the history and findings presented above. I also discussed the application of adaptive aiding in the intelligent interface efforts considered later in this chapter.

One aspect of this presentation turned out to be quite controversial. I was motivated by Isaac Asimov's "Laws of Robotics," first appearing in short stories in the 1940s and 1950s and later as a collection (Asimov, 1968). His three laws are:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

With this motivation, I proposed the First Law of Adaptive Aiding: Computers can take tasks, but they cannot give them.

This "law" avoids the practical problem of computers taking tasks because the human is overloaded, then sensing that the human is no longer overloaded, and consequently handing the tasks back, only to repeat the cycle again and again. This law also reflects the design philosophy of the purpose of computers being to help people rather than vice versa.

My assertion of this law was controversial because the many scientists in the room felt that, while our many experiments supported this idea, I had by no means proven this law. I needed more than just an apple falling from a tree. I needed, they felt, a definitive set of experiments that would support or refute this law. I did not disagree but, after 20 years of adaptive aiding, these experiments needed to be addressed by other researchers. I was in a different place at that time, as the later chapters of this book illustrate.
HUMAN ERROR

Many studies of human behavior and performance consider human abilities to comply with demands of tasks and environments. To the extent that humans can comply, you often learn more about the task and environmental demands than you learn about people's abilities. However, when humans cannot comply, then you learn about human limitations and their abilities to adapt to demands.

Human error is an interesting case in point. Humans are said to make errors when their behaviors do not comply with expectations and these behaviors cause or might cause undesirable consequences. If the consequences are desirable, we call people creative or inventive. Undesirable consequences, in contrast, imply undesirable behaviors.

Not all errors are equal. There are slips when errors occur executing things you were intending to do, which were the right things to do. In contrast, mistakes involve choosing the wrong thing to do, and then perhaps executing flawlessly. There are many variations on this simple dichotomy that are elaborated in this section.

Beyond describing the demographics of human error, this section also considers how to deal with errors. One possibility is to use interlocks and other mechanisms to preclude humans from doing other than exactly what is wanted. Unfortunately, this also eliminates human creativity and inventiveness, often a primary reason we include people in complex systems. Alternatively, we can embrace human errors as the price of including human information processors in the system, and instead focus on dealing with the consequences of errors. Indeed, most of us make errors quite frequently, most of which we catch and reverse, or perhaps compensate for the consequences. This approach is termed error tolerance. The concept of error tolerant systems is elaborated and illustrated later in this chapter.
Studies of Human Error

Our first formal study of human error focused on crews of professional engineering officers being trained in a high-fidelity simulator of a supertanker engine control room (van Eekhout & Rouse, 1981). The facility used for this study was at the TNO (Dutch Organization for Applied Scientific Research) Institute for Mechanical Construction at Delft, The Netherlands. This research was performed in 1979-80 while I
was a visiting professor at the Delft University of Technology working in Henk Stassen's laboratory.

Joost van Eekhout studied seven crews of engineers or operators. The operators' task was to diagnose the cause of a range of simulated failures. Across 40 failures, 86 errors were made. Thus, the human error rate was approximately two per failure. However, this figure is deceptive since almost all the errors were reversible once the operator realized his mistake. Thus, only a few of the errors would have truly led to costly consequences. Nevertheless, almost all the errors cost the operators in terms of wasted time. Two-thirds of the errors fit into three broad categories:
o 23 related to incomplete execution of procedures, which includes omission of procedural steps and, to a slight extent, out-of-sequence steps.

o 22 related to inappropriate identification of the failure, which includes both false acceptances and false rejections.

o 11 related to incomplete observation of the state of the system prior to forming hypotheses regarding the cause of the observed symptoms.
We found that a lack of knowledge of the basic system and controller functions was significantly correlated (r = 0.77) with occurrence of errors in identifying the fault. Further, the presence of simulator design and fidelity inadequacies was highly correlated with the frequency of errors in executing procedures (r = 0.94).

One of my favorite failures was the failure of the boiler level sensor. When this failure occurred, the feedwater valve would be automatically opened to fill the already full boiler. The operator viewing this would see that the boiler was full, yet the feedwater valve was wide open. This does not make sense, unless one understands sensors and controllers. Operators confused by this failure made several errors.

Our second study of human error addressed maintainers of aircraft powerplants (Johnson & Rouse, 1982). Bill Johnson conducted two experiments involving 56 aircraft mechanics. The first experiment compared three methods of training - a context free troubleshooting simulator, a context specific troubleshooting simulator, and traditional video instruction. (These troubleshooting simulators, as well as their full training impacts, are explained in Chapter 5.) Upon completion of
training, mechanics performed troubleshooting of live aircraft powerplants. Their performance was measured and then subjected to error analysis. The results of the first experiment indicated that the context-specific simulator resulted in the worst performance -- more than twice as many errors as the other two training methods. Analysis of these errors showed that trainees knew what tests to make but not how to make them. In fact, the context of the simulator probably misled them. Consequently, this simulator was extended for the second experiment to include test information, including how to use test equipment. The results of the second experiment indicated that the excess errors disappeared.

Our third study focused on aircraft pilots and their use of a system that provided electronic display of aircraft procedures (Rouse, Rouse, & Hammer, 1982). Sandra Rouse and John Hammer studied four two-person crews in a high-fidelity simulator flying full mission scenarios. Error analyses focused on procedure selection and execution. There were 15 errors of omission using hardcopy procedures and 2 using the electronic procedures. There were no differences for errors of commission.

Reflecting on these three studies of human error (Rouse, 1983), it became clearer how to methodologically address analysis and classification of error. This led to a model-based methodology involving six classes of errors:

o Observation of system state
o Choice of hypothesis
o Testing of hypothesis
o Choice of goal
o Choice of procedure
o Execution of procedure
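As an illustration, this classification scheme lends itself to a simple encoding for tagging observed error events; the subcategory and factor labels below are placeholders, since the full set of 31 subcategories is not reproduced in this chapter, and the example event is hypothetical.

from dataclasses import dataclass, field
from enum import Enum, auto

class ErrorClass(Enum):            # the six top-level classes above
    OBSERVATION_OF_SYSTEM_STATE = auto()
    CHOICE_OF_HYPOTHESIS = auto()
    TESTING_OF_HYPOTHESIS = auto()
    CHOICE_OF_GOAL = auto()
    CHOICE_OF_PROCEDURE = auto()
    EXECUTION_OF_PROCEDURE = auto()

@dataclass
class ErrorEvent:                  # one observed error, tagged for analysis
    description: str
    error_class: ErrorClass
    subcategory: str               # one of the 31 finer-grained labels
    factors: list = field(default_factory=list)  # causes and contributing factors

event = ErrorEvent(
    description="skipped step 4 of shutdown checklist",
    error_class=ErrorClass.EXECUTION_OF_PROCEDURE,
    subcategory="omission of step",
    factors=["distraction"],
)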
Within these six categories, there were 31 subcategories of errors. There were also four categories of causes and contributing factors:

o Inherent human limitations
o Inherent system limitations
o Contributing conditions
o Contributing events

Examples of such causes and factors include confusion, distraction, communications lacking, and communications misleading.

To illustrate the use of the methodology, the data from the study just summarized were reanalyzed (Rouse & Rouse, 1983). The hardcopy and electronic procedures were compared in terms of incorrect actions and unnecessary actions. Hardcopy had 20 incorrect and 9 unnecessary actions, while electronic had 9 incorrect and 7 unnecessary actions. Confusion was associated with 8 errors, distraction with 13, lacking communications with 14, and misleading communications with 7.

Summarizing, this methodology for analysis and classification of human error was applied to three studies of ship operations, aircraft maintenance, and aircraft operations. The first study identified deficiencies in the design of the control panel and inadequacies in the operators' knowledge of system functions that led to modification of the training program. The second study identified a training deficiency and subsequently evaluated an improved training program. The third study identified the benefits of a computer-based display system in terms of a substantial decrease in the frequencies of certain types of errors. Taken as a whole, these three studies provide clear evidence of the benefits of systematically isolating and ameliorating the causes and factors contributing to human error.

A few years after this series of experiments, Nancy Morris and I pursued a series of studies of human error in process control (Morris & Rouse, 1993). Two experiments involved 10 trainees from a technical training school who served as operators. Their task was to control a simulated process plant in which a variety of malfunctions could occur. Simple malfunctions included valve and pump failures. Complex failures included failures of input or output valves, tank ruptures, display failures, and safety system failures. Operators' instructions were to maximize production while dealing with the various malfunctions to keep the process up and running.

These experiments yielded a wealth of data, including time series of process performance, logs of all operator actions, and verbal protocols of operators' perceptions. The book chapter cited provides a comprehensive set of analyses of these data. Two overall conclusions gleaned from these analyses are of particular interest.
First, people encountering a situation in which errors appear likely respond to that situation by trying to reduce the likelihood of error. They do this by either controlling the situation or controlling themselves (e.g., they are more careful or they change strategies). Errors result if people are placed in situations to which they cannot adapt for some reason. One possibility, which underlies much of the analyses of these data, is that people do not perceive the need to adapt until errors have already occurred.

Second, when experiencing an increase in perceived effort above some "acceptable" threshold, people attempt to reduce the level of perceived effort. The options of controlling the situation or controlling themselves once again apply. This conclusion is consistent with common reports of humans "economizing" in decision making in high-stress operational situations.

The implications of these conclusions are interesting. For example, the problems often associated with attempting to identify human error rates are underscored. In light of humans' adaptive tendencies, the concept of human error rate seems rather ephemeral. Thus, rather than questioning the likelihood of error in a statistical sense, it is more important to identify the factors that limit humans' ability to adapt a situation to themselves or vice versa.
Error Tolerant Systems

Considering the ubiquity of human error, there is clearly a need to reduce the frequencies of consequential errors and/or develop systems that are error tolerant in the sense that the undesirable consequences of errors do not propagate. Reduction and/or tolerance can be accomplished with a variety of mechanisms including selection, training, equipment design, job design, and aiding. Since no single mechanism is sufficient, a mix of mechanisms is needed. An important question concerns how one should allocate resources among these mechanisms to achieve acceptable frequencies of consequential errors.

We approached this problem by developing a mathematical model of the effects of resources on error reduction/tolerance mechanisms (Rouse, 1985). Extensive sensitivity analyses were performed, due in part to the lack of definitive data on many aspects of these mechanisms. One result was particularly noteworthy. Across 80 sets of parameter variations, aiding received from one sixth to one half of the total resources allocated.
This result is not really surprising. The other mechanisms focus on reducing the likelihood of all the errors that might possibly occur. Aiding, for the most part, focuses on errors that have occurred, with support that helps recovery and avoidance of consequences. These results provide clear and strong evidence for the benefits of error tolerance.

The remainder of this section discusses a conceptual approach to error tolerant systems (Rouse & Morris, 1987; Rouse, 1990). Figure 3 illustrates the components and functional relationships of a system for intelligent monitoring, identification, classification, and remediation of human errors. The boxes labeled operator interface and operator interface manager at the left and right of this diagram, respectively, denote the controls and displays whereby humans interact with the system of interest, as well as the intelligence that manages what is displayed and what is requested of the humans. This functionality is elaborated in the discussion of intelligent interfaces.

There are three models in Figure 3 - world model, system model, and operator model. The purpose of these models is to estimate the current and predicted "state" of the world, system, and human. Thus, they can be used to ask "what is" questions (i.e., estimated current state) and "what if" questions (i.e., predicted state). (Recall that Chapter 2 addresses the general nature of estimation tasks.)
Figure 3. Architecture of Error Monitoring
While the specific definition of "state" differs for world, system, and operator, the general concept of state is that a set of variables can be defined such that knowledge of the current values of these variables, as well as knowledge of any external inputs, can be used in conjunction with an appropriate model to predict future states. The state of the world may include information on operational demands, weather, etc. The state of the system includes dynamic state, the modes and failure status of subsystems, and information on current and upcoming operational phases, applicable procedures, and so on. The state of the operator includes his or her activities, awareness, resources, and intentions.

Error Identification. Identifying errors involves correlating histories of a human's behavior and the system's response with operational procedures, scripts, etc. to detect anomalies between expected and observed behavior. For highly structured tasks, a goal-plan hierarchy can be used to make sense of observed actions. Fortunately, less-structured tasks also tend to have much looser time constraints. This allows time for error identification to be more of an interactive process between human and system. For example, the human may be able to tell the computer of his or her intentions, rather than having the computer infer them.

Error Classification. Once an anomaly has been detected and identified as an error, it is necessary to classify it if other than a global ERROR message is to be provided. The slips vs. mistakes dichotomy is an important element of explaining an error and can strongly influence how an error is remediated. Knowledge of likely consequences can be essential to effective remediation. For errors of omission, the projection of consequences can be straightforward if design models and knowledge bases are available. Errors of commission require more sophisticated models and levels of reasoning because they can involve actions that designers never anticipated.

Error Remediation. Once an error is identified and classified, there are three general types of remediation. One possibility is to continue monitoring, perhaps looking for particular events or consequences that might trigger more active remediation. This type of remediation reflects a more active level of reasoning than that associated with passive monitoring of action sequences. More specifically, remediation at this level involves active exploration of alternative explanations and courses of action.
The next level of remediation is feedback, which involves providing messages regarding the identification and classification of an error, and perhaps advice on appropriate compensatory actions. At a minimum, this type of feedback includes traditional alerting and warning systems. However, when appropriate and possible, feedback is much more intelligent involving, for example, synthesis of multiple warnings into an integrated explanation and recommendation.

The highest level of remediation is control, whereby the propagation of consequences (in terms of evolving undesirable states) is actively prevented and, in some situations, compensatory actions automatically initiated. Obviously, at this level, it is crucial that humans not be inappropriately preempted from acting. This possibility can be avoided by using error "advisories" that inform humans of the system's evolving conclusions and, thereby, allow the humans to preempt higher levels of remediation.
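These three levels suggest an escalation logic that might be sketched as follows; the predicates and rules are hypothetical illustrations, not drawn from a fielded design.

from enum import IntEnum

class Remediation(IntEnum):   # the three levels, in increasing activity
    MONITORING = 1            # watch for triggering events or consequences
    FEEDBACK = 2              # messages, alerts, integrated explanations
    CONTROL = 3               # actively prevent consequence propagation

def remediate(classified, consequences_imminent, operator_preempted):
    """Choose a remediation level for an identified error. An operator
    preemption always stands the system down, echoing the error
    'advisories' idea above; the other predicates are assumptions."""
    if operator_preempted:
        return Remediation.MONITORING
    if classified and consequences_imminent:
        return Remediation.CONTROL
    if classified:
        return Remediation.FEEDBACK
    return Remediation.MONITORING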
INTELLIGENT INTERFACES

Thus far, the discussions of adaptive aiding and error monitoring have been isolated from the broader context of systems operations: in the first case, people are overloaded; in the second, they have done something judged to be erroneous. In this section, the pieces are brought together and integrated into an overall concept for supporting operators of complex systems.
Electronic Checklists

Our first efforts in this direction focused on electronic display of procedures. The first experimental study of this concept (Rouse & Rouse, 1980) led to the conclusion that simply putting procedures on an electronic display was inferior to the typical hardcopy procedures. However, it was found that online procedures that provided considerable user assistance were superior to hardcopy procedures.

These results motivated development of a prototype onboard computer-based information system. As indicated earlier, this system was evaluated using full-mission scenarios in a two-pilot general aviation flight simulator (Rouse, Rouse & Hammer, 1982). The centerpiece of this
system was electronic checklists that, for the most part, were sufficiently intelligent to sense the completion of checklist steps and indicate so on the electronic display.

The experimental evaluation involved four two-person crews. The pilots in these crews had an average of 500 flying hours. Normal, emergency, and double emergency scenarios were used. The emergency scenario involved an engine failure requiring a single engine landing. The double emergency involved the engine failure plus a gear light failure that complicated execution of the single engine landing procedure.

Overall results indicated that procedure execution time was faster with hardcopy than computer aided for the normal and emergency scenarios. With regard to execution errors, hardcopy resulted in 15 errors of omission, while computer aided resulted in 2 errors. There was no difference between hardcopy and computer aided for errors of commission. Overall, the electronic checklists dramatically decreased the occurrence of errors, but increased procedure execution time.

Two other aspects of this research deserve mention. The double emergency resembled the flight of an Eastern Airlines L-1011 in Florida in the early 1970s. The Eastern flight also had a gear light failure. The crew tried to determine whether the bulb had failed and inadvertently dropped the bulb. In the process of searching for the bulb, they accidentally bumped the control wheel, which disengaged the autopilot. Totally absorbed in the gear light failure, none of the four crew members in the cockpit noticed as the plane spiraled into the Everglades.

In our experiment, three of the four crews also became engaged with the bulb failure to the extent they were not safely flying the aircraft. This distraction could have resulted in similar consequences to the Eastern flight had the experiment allowed crews to continue unchecked by external observers. Clearly, this problem-solving scenario is compelling and provides strong support for systems that monitor and support operators of complex systems.

The other aspect of this research deserving mention relates to the Boeing 777 aircraft. I was a member of an advisory committee chosen to provide review and feedback on the cockpit design. Other members included the Chief Scientist of the Federal Aviation Administration and several other researchers. When we reviewed the 777 electronic checklists, one of the designers mentioned to me that our papers on computer-based display of procedures had significantly influenced Boeing's design. Researchers do not often get this kind of feedback.
Pilot's Associate

In 1981, our research group moved to Georgia Tech from the University of Illinois at Urbana-Champaign. There were ten of us - faculty, research staff, and PhD students. We also moved the company Russ Hunt and I had founded in 1980 - Search Technology. At Tech, we continued the types of research discussed in Chapters 4, 5, and 7 of this book. Search Technology focused on developing training simulators and decision support systems for marine, aviation, utility, and defense customers. Over five years, the company grew to roughly 35 employees.

In 1984, we were contacted by the business unit of Lockheed, headquartered in Marietta, Georgia, a suburb of Atlanta. They had read several of the articles discussed in this chapter and had contacted the University of Illinois looking for the authors. Much to their surprise, the people they sought were now located nearby in Atlanta. This certainly is a form of serendipity.

Soon after, Lockheed retained Search Technology as a member of the team bidding to develop the Pilot's Associate for DARPA (Defense Advanced Research Projects Agency). We were not used to being paid a handsome sum to help write a proposal, but quickly acclimated. Our role was to design the intelligent pilot vehicle interface -- a software module that literally could understand a pilot's state and adapt its support of the pilot accordingly.

The overall architecture of the intelligent interface is shown in Figure 4. The modules labeled Adaptive Aiding and Error Monitor were described in Figures 2 and 3, respectively. The remainder of this section describes the other modules and our experience in developing and deploying this concept.

Notice that the intelligent interface includes a module denoted as Interface Manager. This module was necessitated by a recognition that an intelligent Pilot's Associate would likely overburden pilots with messages and requests (M/R). Specifically, the designers of modules called Mission Planner, Tactics Planner, and Situation Assessment considered their module to have first call on display space and auditory channels. The possibility of channel contention was highly likely. The System Manager was devised to manage this M/R flow.

The Interface Manager decided what went on the displays and the auditory channels as shown in Figure 5. The role of the Interface Manager was to decide what M/R made it to the pilot, how it was scheduled, and its
modality and format. Thus, for example, a threat that was quite distant was displayed as a small symbol. In contrast, the same threat when nearby was displayed in considerable detail (i.e., with range, bearing, etc.).

The level of sophistication of the functionality in Figures 2-5 depends totally on how the pilot's state - activities, awareness, intentions, resources, and performance - can be modeled. This information resides and is updated in the Operator Model shown in Figure 6. This module is central to understanding what the pilot is doing and intends to do, as well as his or her abilities to perform well given the current and emerging situation. The Resource and Performance Models represent compact versions of the types of models discussed in this chapter and others throughout this book. For the purpose of the intelligent interface, these models tended to involve relatively simple mathematics, integrated via a variety of heuristics. The focus was on operational utility rather than psychological validity, although the latter was sometimes important.
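A toy rendering of the Interface Manager's flow - prioritize queued M/R, select modality and format, and release only what the pilot can absorb - might look like the following; every field, threshold, and scoring rule is an illustrative assumption, not the original design.

def manage_messages(queue, pilot_state, capacity=3):
    """Toy sketch of the Figure 5 flow: prioritize queued
    messages/requests (M/R), choose modality and format, and
    release only what the pilot can absorb right now."""
    ranked = sorted(queue, key=lambda m: -m["priority"])
    released = []
    for msg in ranked[:capacity]:
        # Nearby threats get full detail; distant ones a compact symbol.
        msg["format"] = "detailed" if msg.get("range_nm", 999) < 10 else "symbol"
        # Route around an overloaded visual channel when possible.
        msg["modality"] = "audio" if pilot_state["visual_load"] > 0.8 else "display"
        released.append(msg)
    return released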
Figure 4. Overall Architecture of Intelligent Interface
Figure 5. Architecture of Interface Manager
Figure 6. Architecture of Operator Model
The Intent Model was key. How can you determine what a pilot intends to do? Norm Geddes, in his Ph.D. dissertation at Tech, provided the central idea for answering this question. He conceptualized all of the things pilots do - move control wheels, access display pages, flip switches, and push buttons - as words within the language of piloting. This enabled him to employ natural language processing methods to "read" pilots' actions and infer their intentions. He developed Plan-Goal Graphs (PGGs) for piloting with controls and information requirements attached to plan elements. He used the PGGs to parse pilots' actions and determine the goals they were pursuing. The set of goals that "fire" in this way represented the pilots' active intentions.

As might be expected, both context and the degree of structure in that context have an enormous impact on being able to infer human intent in this way. Thus, operations of aircraft, ships, power plants, operating rooms, and so on are good environments for applying these ideas. Designers, researchers, and managers, on the other hand, operate in domains that do not have this degree of structure, as discussed in Chapter 7.

The conceptual design embodied by Figures 2-6 was a key element of the Lockheed proposal to DARPA, and its executive agent, the Air Force. We won! This meant several million dollars to Search Technology, and many more millions in later phases. This contract and another contract discussed in Chapter 7 led to rapid growth of the company.

Shortly after the award was announced, Ren Curry and I were meeting with Jack Barnett and Gene Elmore, the Lockheed senior managers who had recruited us. Gene asked, "Well, we proposed a very ambitious system and won the contract. Can you guys really deliver on this intelligent interface?" I looked at Ren and then back to Gene and said, "How the hell should we know? We're just in marketing!" Gene and Jack froze and I quickly followed with "Just kidding!"

I think they asked us this because our proposal was quite different from what they usually submitted. We backed all the claims in Figures 2-6 with citations of journal articles. We knew all the pieces worked - we had shown it experimentally. What we had not done was put all the pieces together.

Evaluation of artificially intelligent (AI) systems is difficult because they do not necessarily do the same thing in the same situation. Instead, they are sensitive to subtleties of situations and, in some cases, these systems learn from past actions and outcomes. This non-repeatability is both a strength and a weakness, at least from a risk point of view.
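Geddes' parser used natural language processing machinery over plan-goal graphs; a drastically simplified sketch of the underlying idea described above - credit goals whose plan elements match observed actions and "fire" those past a threshold - is shown below. The graph, actions, and threshold are all hypothetical.

# Hypothetical plan-goal graph: each goal lists the pilot actions
# that constitute its plan.
PGG = {
    "land_aircraft": {"lower_gear", "extend_flaps", "tune_ils"},
    "evade_threat":  {"deploy_chaff", "hard_turn", "descend"},
}

def infer_intentions(actions, threshold=0.5):
    """Score each goal by the fraction of its plan elements observed
    in the action stream; goals past the threshold 'fire' and are
    inferred as active intentions."""
    scores = {goal: len(plan & actions) / len(plan)
              for goal, plan in PGG.items()}
    return [g for g, s in scores.items() if s >= threshold]

print(infer_intentions({"lower_gear", "extend_flaps"}))  # ['land_aircraft']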
My first experience demonstrating an AI system was at the University of Illinois. A group of senior Air Force officers was reviewing our systems for managing failures during aircraft operations. The DEC-10-hosted intelligent software first encountered an engine fire and, quite intelligently, rerouted fuel from one tank to another to avoid the risk of greater fire. Unfortunately, the portion of the system that knew about engine fires did not know about vent valves - and vice versa for the portion of the system that knew about vent valves. The bottom line was that soon there were no fires and no fuel and, hence, two non-functional engines. The AI grad students had anticipated this. Once the onboard situation had become hopeless, the cockpit displays started to flash and the flight management displays were replaced by a depiction of a computer parachuting from the aircraft. The Air Force colonel flying the airplane was not amused. We performed a variety of evaluations of the intelligent interface. Early on, we interviewed 10 fighter pilots regarding their perception of the functionality represented in Figures 2-6. One question concerned the conditions under which pilots would accept the computer intervening in their actions. We asked this as follows: "You are unconscious and flying straight into the ground at 500 miles per hour. Would you want the computer to pull you out?" Nine out of ten pilots said, "Yes." One pilot said, "No." One quip among team members was that 9 out of 10 was fine because, "Darwin will take care of the other one." In general, pilots accepted the notion of computers intervening if the pilot could set the conditions, for each flight, under which such intervention was acceptable. Dan Sewell led a more formal evaluation of our intelligent interface concept (Sewell, Geddes & Rouse, 1987). Using a storyboard form of presentation, pilots' perceptions of the functionality of Interface Manager, Error Monitor, and Adaptive Aiding were assessed. In general, all the functionality scored well in terms of validity (technical correctness) and viability (cost/benefits). However, Adaptive Aiding did not score well in terms of acceptability (fitting into how people like to work). Pilots were concerned about "Who's in charge?" This finding had significant impact on how the intelligent interface was subsequently presented to operational audiences. The intelligent interface became fully functional in the late 1980s (Rouse, Geddes & Curry, 1988; Rouse, Geddes & Hammer, 1990). In the last stages of this project DARPA and the Air Force pushed the Lockheed team to get the whole system to run in real time, that is, update rates of one
second per second or better. The Symbolics workstations quickly disappeared, replaced by Silicon Graphics and Sun workstations. This helped but was not enough. The discussion soon shifted to what functionality could be eliminated. Successive functions were progressively stripped out. Eventually we met the contractually mandated requirement. I asked the team, "Well, what does it still do?" Their response was, "Not much, but it's really fast!" Of course, 15 subsequent years of Moore's Law have made this a moot point. We hoped our next step would be developing an intelligent interface for a real aircraft. In the process of pursuing this goal, we had two problems. First, Lockheed won the F-22 contract and all the C++ programmers disappeared to this new program. Second, and more fundamental, we learned that the Federal Aviation Administration would not certify an avionics system that included software that could learn and, hence, would not operate in a repeatable manner. Serendipity eventually intervened and we had the opportunity to create an intelligent interface for a point-of-sale terminal. We quickly developed a full-blown and successful system for a major player in the retail industry. Evaluations showed that the system provided the projected benefits. However, it proved too expensive for this very price-sensitive market. I have not been involved in research on intelligent interfaces for operational environments for roughly 15 years. However, Norm Geddes, John Hammer, and others have continued to advance the concept and its application. As the cost of computational power has continued to decline substantially, many of the limitations of the concept have disappeared.
CONCLUSIONS

This chapter has provided a strong case study of human-centered design. The focus was totally on understanding human abilities and limitations, as well as the factors that affect human acceptance of systems. The ideas of adaptive aiding, error monitoring, and interface management embody the notion of enhancing abilities, overcoming limitations, and fostering acceptance. The concept of an intelligent interface integrates these functions into a unified intelligent support system. Serendipity was laced throughout this story. The relationships, sponsors, and colleagues involved in this 15+ year saga could not have been predicted, much less designed. Several of these relationships became central to the research thrusts discussed in later chapters. In many ways,
the people you meet and relationships you form are central to the process of serendipity.
REFERENCES

Andes, R.C., Jr., & Rouse, W.B. (1992). Specification of adaptive aiding systems. Information and Decision Technologies, 18, 195-207.

Asimov, I. (1968). I, Robot. London: Grafton Books.

Chu, Y., & Rouse, W.B. (1979). Adaptive allocation of decision making responsibility between human and computer in multi-task situations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9(12), 769-778.

Enstrom, K.D., & Rouse, W.B. (1977). Real time determination of how a human has allocated his attention between control and monitoring tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(3), 153-161.

Govindaraj, T., & Rouse, W.B. (1981). Modeling the human controller in environments that include continuous and discrete tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(6), 410-417.

Johnson, W.B., & Rouse, W.B. (1982). Analysis and classification of human errors in troubleshooting live aircraft powerplants. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(3), 389-393.

Morris, N.M., Rouse, W.B., & Ward, S.L. (1988). Studies of dynamic task allocation in an aerial search environment. IEEE Transactions on Systems, Man, and Cybernetics, SMC-18(3), 376-389.

Morris, N.M., & Rouse, W.B. (1993). Human operator response to error-likely situations in complex engineering systems. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 6, pp. 59-104). Greenwich, CT: JAI Press.

Rouse, S.H., Rouse, W.B., & Hammer, J.M. (1982). Design and evaluation of an onboard computer-based information system for aircraft. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(4), 451-463.
Rouse, W.B. (1976). Adaptive allocation of decision making responsibility between supervisor and computer. In T.B. Sheridan & G. Johannsen (Eds.), Monitoring behavior and supervisory control (pp. 295-306). New York: Plenum Press.

Rouse, W.B. (1977). Human-computer interaction in multi-task situations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(5), 384-392.

Rouse, W.B. (1980). Systems engineering models of human-machine interaction. New York: Elsevier.

Rouse, W.B. (1981). Human-computer interaction in the control of dynamic systems. Computing Surveys, Special Issue on Psychology of Human-Computer Interaction, 12(1), 71-99.

Rouse, W.B. (1983). Elements of human error. Preprints of NATO Conference on Human Error, Bellagio, Italy, September.

Rouse, W.B. (1985). Optimal allocation of system development resources to reduce and/or tolerate human error. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(5), 620-630.

Rouse, W.B. (1988). Adaptive aiding for human/computer control. Human Factors, 30(4), 431-443.

Rouse, W.B. (1990). Designing for human error: Concepts for error tolerant systems. In H.R. Booher (Ed.), MANPRINT: An approach to systems integration (Chap. 8). New York: Van Nostrand Reinhold.

Rouse, W.B. (1994). Twenty years of adaptive aiding: Origins of the concept and lessons learned. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 28-32). Hillsdale, NJ: Erlbaum.

Rouse, W.B., Geddes, N.D., & Curry, R.E. (1988). An architecture for intelligent interfaces: Outline of an approach to supporting operators of complex systems. Human-Computer Interaction, 3(2), 87-122.

Rouse, W.B., Geddes, N.D., & Hammer, J.M. (1990). Computer-aided fighter pilots. IEEE Spectrum, 27(3), 38-41.

Rouse, W.B., & Morris, N.M. (1987). Conceptual design of a human error tolerant interface for complex engineering systems. Automatica, 23(2), 231-235.
Rouse, S.H., & Rouse, W.B. (1980). Computer-based manuals for procedural information. IEEE Transactions on Systems, Man, and Cybernetics, SMC-10(8), 506-510.

Rouse, W.B., & Rouse, S.H. (1983). Analysis and classification of human error. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(4), 539-549.

Sewell, D.R., Geddes, N.D., & Rouse, W.B. (1987). Initial evaluation of an intelligent interface for operators of complex systems. In G. Salvendy (Ed.), Cognitive engineering in the design of human-computer interaction and expert systems (pp. 551-558). Amsterdam: Elsevier.

Simon, H.A. (1977). On how to decide what to do. Bell Journal of Economics, 9(2), 494-507.

van Eekhout, J.M., & Rouse, W.B. (1981). Human errors in detecting, diagnosing, and compensating for failures in the engine control room of a supertanker. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(12), 813-816.

Walden, R.S., & Rouse, W.B. (1978). A queueing model of pilot decision making in a multi-task flight management situation. IEEE Transactions on Systems, Man, and Cybernetics, SMC-8(12), 867-875.
Chapter 5
FAILURES, DETECTION, AND DIAGNOSIS
INTRODUCTION

As indicated in Chapter 3, this stream of research emerged serendipitously from my wondering about how one could determine whether a function within a library node - a subnode - in a broader network of library nodes was functioning properly. All you know is the output performance (i.e., P, W, and C) for the network as a whole or perhaps for individual libraries. What information would you seek to answer this question? I quickly realized that detection and diagnosis of failures within complex systems is a pervasive problem. As indicated in Chapter 1, I soon began to feel that the ability to figure out what has gone wrong and why it has gone wrong is central to success in life. It became the essential phenomenon driving the research group in this area of study. This motivation led this group of faculty, staff, and graduate students to accomplish the research reviewed in this chapter. In the last chapter, we focused on tasks and errors, as well as how automation can share the task load and foster error tolerant systems. For the most part, this involves understanding and overcoming human limitations. As system designers, we accept such limitations because the humans with these limitations also have abilities that are necessary to the successful operation of complex systems. If everything goes as planned, if everything that happens has been anticipated - and the bird is on the wing - then many complex systems could be fully automated. All cars, for instance, would start, stop, flow, and merge smoothly. While speeds would vary with traffic load, there would be no traffic jams and no accidents. Of course, that is not what happens. Living in Atlanta, I get to experience frequent traffic jams and see many accidents, often several per day. When plans cannot be comprehensive and things can happen that were not anticipated, we almost always include humans in the processes of
monitoring and controlling the system. Their jobs, at least in part, are to determine whether the system is operating acceptably and, if not, to determine the causes of the problems, how to remediate these problems, and then to implement appropriate courses of action. Table 1 summarizes four types of failure situations. Familiar and frequent situations are typically such that we have encountered them many times before and we know exactly what to do. Everyday examples include a burned-out light bulb, dead batteries in a radio, and flat tires. Familiar and infrequent situations are such that we recognize them immediately but, due to lack of exposure, may not be skilled in dealing with them. Unfamiliar and frequent failure situations are a bit unusual. Perhaps a good example is maintaining wireless connectivity across multiple computers and networks. The warnings of loss of connectivity and/or incompatible security settings can be experienced frequently without really understanding what is failing. My personal experience with these types of problems is that familiarity eventually emerges. I have long been fascinated with unfamiliar and infrequent failure situations. You think something may be wrong, but you have not previously seen this combination of symptoms. You can imagine many possible causes of these symptoms. However, you have limited direct knowledge of how these possible causes relate to the set of symptoms you have observed. What should you do? This chapter explores how people address this question, what they tend to choose to do, and how they use the results of these choices to either diagnose the failure or serve as a basis for subsequent choices and later diagnoses. We consider the impact of a wide range of factors on human performance in such tasks, mathematical and computational models of performance, and approaches to training and aiding people to perform these tasks.
                 Frequent                   Infrequent
  Familiar       familiar and frequent      familiar and infrequent
  Unfamiliar     unfamiliar and frequent    unfamiliar and infrequent

Table 1. Four Types of Failure Situations
DETECTION AND DIAGNOSIS PERFORMANCE

This research began with a focus on understanding human abilities and limitations in detecting and diagnosing failures. We expected that human pattern recognition abilities would be central to performance of such tasks. We also expected, however, that these abilities would not suffice for very complex failures. In fact, one of our goals was to determine what makes detection and diagnosis tasks complex.
Context-Free Simulations

The initial context of our investigation was a computer-generated troubleshooting task called TASK - Troubleshooting via Application of Structural Knowledge - shown in Figure 1. This task later became TASK1 as TASK2 emerged, followed by FAULT, PLANT, MABEL, and KARL. The meaning of these acronyms will be explained in due course. Thinking back, it is interesting that we created this stream of acronyms from the late 1970s to late 1980s. It was as if naming your experimental environment or computational model was a "coming of age" point for many of the graduate students. Alternative acronyms were debated with great fervor and many laughs. Once chosen, the name of the environment or model quickly became part of the research group's lexicon, and the environment or model somehow joined the intellectual family.
Figure 1. TASK1 - Troubleshooting via Application of Structural Knowledge
TASK1 is quite straightforward (Rouse, 1978a). Networks are N x N, where N is the number of rows and columns of nodes. A random number of arcs enter each node. Similarly, a random number of arcs emanate from each node. Nodes are devices that produce either a 1 or a 0. Arcs emanating from a node carry the value produced by that node. A node will produce a 1 if: 1) all the arcs entering the node carry values of 1 and, 2) the node has not failed. If either of these conditions is not satisfied, the node will produce a 0. Thus, nodes are AND gates. Failures propagate in the following manner. If a node fails, it will produce values of 0 on all the arcs emanating from it. Any nodes that are reached by these arcs will, in turn, produce values of 0. This process continues and the effects of a failure are thereby propagated throughout the network. The outputs of the rightmost column of nodes are then displayed as the symptoms of the failure, as shown in Figure 1. The human's task is to "test" arcs until the failed node is found. Figure 1 depicts the manner in which arcs are tested. People enter commands of the form n1, n2 and are then shown the value of the arc (see the upper left of Figure 1). If they enter a simple return rather than indicating an arc, they are asked to designate the failure. They are then given feedback about the correctness of their choice. And then, the next problem is displayed. I hasten to note that TASK1 is, by no means, supposed to be a "real" task in the sense that humans actually perform it in real life. However, it is meant to capture some important aspects of the types of reasoning problems that humans face in systems such as aircraft, ships, and industrial plants. Indeed, networks are used to represent human-system interaction in a wide variety of problem domains (Rouse, 1995). Thus, TASK1, as well as its siblings TASK2, FAULT, et al., provides a basis for considerable generality. Further, as later discussion elaborates, network representations are amenable to rigorous analysis, which provides a basis for developing alternative models of human detection and diagnosis.
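A few lines of code convey the structure of the task. This is a sketch of a TASK1-like network, assuming strictly column-to-column, left-to-right wiring; the arc-generation details of the actual TASK1 software surely differed.

```python
import random

def make_network(n, max_arcs=2, seed=1):
    """n x n grid of nodes; each node in column c > 0 receives a random
    number of arcs from randomly chosen nodes in column c - 1."""
    rng = random.Random(seed)
    arcs = {}  # destination node -> list of source nodes
    for col in range(1, n):
        for row in range(n):
            k = rng.randint(1, max_arcs)
            arcs[(col, row)] = [(col - 1, rng.randrange(n)) for _ in range(k)]
    return arcs

def symptoms(arcs, n, failed):
    """Nodes are AND gates: a node emits 1 only if all incoming arcs carry
    1 and the node itself has not failed, so a failure cascades rightward."""
    value = {(0, row): 0 if (0, row) == failed else 1 for row in range(n)}
    for col in range(1, n):
        for row in range(n):
            ok = all(value[src] for src in arcs[(col, row)])
            value[(col, row)] = 1 if ok and (col, row) != failed else 0
    return [value[(n - 1, row)] for row in range(n)]  # rightmost column

net = make_network(5)
print(symptoms(net, 5, failed=(2, 3)))  # the displayed failure symptoms
```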
Network Size, Pacing, Aiding, and Training

Our first experiments considered the effects of network size, forced-pacing, computer aiding, and training (Rouse, 1978a). Some problems are inherently more difficult than others. For example, a network with all zero outputs is more difficult to diagnose than a network with a single zero output. In the former case, you have to find nodes common to all the zero outputs, while in the latter you can just trace back from the single zero.
To adjust for problem difficulty, we adopted two models of fault diagnosis - a brute force model and an optimal model. The brute force model randomly picks one of the zero outputs and traces back. The optimal solution first forms the set of all nodes that can reach all known zero outputs, and then chooses to test the node that will split this set of feasible failures in half. This is often called the half-split method. Experimental results showed that subjects were close to optimal for smaller networks (i.e., 3 x 3) in terms of number of tests until correct solution, but departed from optimal with larger network sizes of 5 x 5 and 7 x 7. This departure from optimality was not as great as that found for the brute force solution. Thus, humans have difficulty performing as well as the optimal solution, but their test choices are not as simplistic as the brute force solution. We also investigated the effects of imposing a time constraint on finding failed nodes. Subjects were allowed up to 30, 60, or 90 seconds to find the failure. We found that such time constraints caused subjects to adopt strategies that were more like the brute force solutions than those exhibited in the self-paced conditions. This was true even for the 90 second condition, despite the fact that subjects tended to finish problems much faster than 90 seconds. Thus, the impact of pacing is not simply that subjects run out of time. Observations and discussion with subjects suggested that departures from optimality related to their abilities to identify the feasible set of failures given a particular set of symptoms, that is, zero outputs. Specifically, people seemed to undervalue tests with positive results, that is, those returning a 1 rather than a 0. Positive results could quickly prune the feasible set. However, subjects seemed to have difficulty doing this; after all, they were looking for zeroes. This insight led us to develop display aiding. The computer would cross off any nodes that could not cause all of the observed symptoms. As more zeroes were found, more nodes were crossed off. Not surprisingly, display aiding had a profound impact on human performance, with subjects' average number of tests much closer to the optimal solution. Of course, the conclusion of this might merely be that subjects do better when you make the problems simpler. In search of a richer conclusion, we assessed the impact of training subjects with display aiding and then testing their performance without the aiding. One might expect that the loss of aiding would severely undermine subjects' performance because they had not been performing a key element of the task. On the other hand, if subjects had watched what
the computer was doing, they may have learned how to better take advantage of all test results. We found that self-paced conditions led to subjects with aided training performing better without aiding than subjects whose training was unaided. However, this transfer effect disappeared with the forced-paced conditions. Thus, people were able to learn by observing the optimal identification of the feasible set when they had time to think in terms of the feasible set, but they did not learn when time pressure caused them to adopt more brute force approaches that do not construct feasible sets. The first two experiments just discussed employed two different sets of engineering students as subjects. We next conducted similar experiments using aircraft maintenance trainees as subjects (Rouse, 1979a, b). These experiments yielded similar results and conclusions. Specifically, it was found that performance degraded as network size increased and improved with the use of computer aiding. Skills developed with computer aiding transferred to the unaided situation for fourth semester trainees (Rouse, 1979a), but not for first semester trainees (Rouse, 1979b). This result, when combined with the results for engineering students, suggests that some level of intellectual maturity may be needed to gain training benefit from the aiding.
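Both reference models from these experiments are easy to state in code. In this sketch, anc[t] is an assumed precomputed reachability map - the set of nodes that can reach node t, including t itself; the feasible set and the half-split choice then fall out directly.

```python
def feasible_set(nodes, anc, zero_outputs):
    """Only nodes that can reach every zero output could, by themselves,
    explain all the symptoms under AND-gate propagation."""
    return {n for n in nodes if all(n in anc[o] for o in zero_outputs)}

def brute_force(zero_outputs, anc):
    """Brute force model: trace back from one zero output, ignoring
    what the 1s elsewhere could have ruled out."""
    return sorted(anc[min(zero_outputs)])

def half_split(feasible, anc):
    """Optimal model: test the node whose outcome best bisects the
    feasible set. A 0 at node t confines the failure to anc[t];
    a 1 exonerates anc[t] entirely."""
    def imbalance(t):
        return abs(len(feasible & anc[t]) - len(feasible) / 2)
    return min(sorted(feasible), key=imbalance)

# Toy network: a -> c, b -> c, c -> d, c -> e (anc includes the node itself).
anc = {"a": {"a"}, "b": {"b"}, "c": {"a", "b", "c"},
       "d": {"a", "b", "c", "d"}, "e": {"a", "b", "c", "e"}}
feasible = feasible_set(set(anc), anc, zero_outputs={"d", "e"})
print(sorted(feasible))           # ['a', 'b', 'c']
print(half_split(feasible, anc))  # 'a' (ties with 'b'; a 0 at 'c' prunes nothing)
```

The finding about positive results is visible here: a test returning 1 at node t prunes all of anc[t] from the feasible set, which is precisely the information subjects tended to undervalue.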
Feedback and Redundancy

Based on this rich set of experimental results, we elaborated TASK1 to create TASK2, shown in Figure 2 (Rouse, 1979c). In this task, the rectangular components are AND components and the hexagonal components are OR components. This task also includes feedback loops. For the experiments with TASK2, we varied the ratio of the number of feedforward to feedback connections, denoted by PFF/PFB, and the ratio of the number of OR to AND components, denoted by NO/NA. In general, subjects' performance was better for low PFF/PFB and high NO/NA. Subjects tended to avoid probing feedback loops and avoid testing OR components. For the condition with high PFF/PFB and low NO/NA, this strategy resulted in up to 25% erroneous diagnoses, despite their instructions to avoid all misdiagnoses. This was due in part to subjects differing in how they considered feedback loops - carefully avoiding them vs. ignoring them. These results are revisited in later discussions of models of human fault diagnosis.
Figure 2. TASK2 - Extensions of TASK1 with Feedback and Redundancy
TASK2 was also employed in the aforementioned experiments with aircraft maintenance trainees (Rouse, 1979b). The effects of NO/NA were mixed, but the effects of PFF/PFB were quite clear and consistent with the earlier experiment in that low PFF/PFB resulted in better performance. Subjects appeared to realize that failures could not occur in feedback loops - otherwise, the problems might not be solvable. Thus, they used feedback loops to eliminate portions of the networks from consideration. From this perspective, the more feedback loops the better. Overall, the effects of feedback and redundancy were rather ambiguous. We had expected that these factors would complicate fault diagnosis. Feedback had the opposite effect, perhaps mainly due to our imposing the constraint that problems had to be solvable. Redundancy sometimes enhanced and sometimes degraded performance. It is likely that the specific placement of the OR components has a significant effect on their impact.
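With OR components and feedback loops, propagation can no longer be computed in a single left-to-right pass. The sketch below treats the network as a fixed point: start every output at 1 and iterate downward until nothing changes. The four-gate layout and the start-from-1 semantics are illustrative assumptions, not the TASK2 code.

```python
def propagate(gates, inputs_of, failed):
    """Steady-state outputs for AND/OR gates that may include feedback.
    Values start at 1 and can only drop to 0, so iteration converges."""
    value = {g: 1 for g in gates}
    changed = True
    while changed:
        changed = False
        for g, kind in gates.items():
            ins = [value[s] for s in inputs_of.get(g, [])]
            if g == failed:
                v = 0
            elif kind == "AND":
                v = int(all(ins)) if ins else 1   # source gates default to 1
            else:  # "OR": any good input suffices (redundancy)
                v = int(any(ins)) if ins else 1
            if v != value[g]:
                value[g], changed = v, True
    return value

gates = {"a": "AND", "b": "AND", "c": "OR", "d": "AND"}
inputs_of = {"b": ["a", "d"], "c": ["a", "b"], "d": ["c"]}  # d -> b closes a loop
print(propagate(gates, inputs_of, failed="b"))
# {'a': 1, 'b': 0, 'c': 1, 'd': 1}
```

Note how the OR component masks the failure of b from all downstream symptoms - one plausible reason redundancy sometimes degraded diagnosis performance in these experiments.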
The experiment with maintenance trainees also assessed the impact of TASK1 training on TASK2 performance. Upon transfer to TASK2 from aided training on TASK1, subjects' performance was initially worse than that with unaided training, but then became substantially better (Rouse, 1979b). Initial negative but eventual positive transfer of training was a phenomenon that we subsequently looked for as this research progressed.
Complexity

Considering the experimental results for TASK1 and TASK2, it is interesting to consider the extent to which these fault diagnosis tasks are complex (Rouse, 1978a; Rouse & Rouse, 1979). Complexity has received considerable attention from a wide range of scholars and commentators. It has been argued that complexity is related to the computational resources required to perform a task (Gell-Mann, 1995), the robustness and fragility of an optimized system (Carlson & Doyle, 2002; Doyle, et al., 2005), and the extent to which subsystems adapt to suboptimize their own objectives (Rouse, 2000). In general, there are multiple strong and competing views of complexity, often related to the discipline and purpose of those arguing any particular view (Rouse, 2003, 2007). From a human performance perspective, complexity can be factored into perceptual complexity and problem solving complexity. There is a rich tradition of studies of humans seeking targets immersed in patterns, with the time to find the target as the dependent variable and various characteristics of the patterns as independent variables. Perceptual complexity can certainly affect failure diagnosis - as evidence, consider how difficult it is to find things under the hood of a contemporary automobile. However, such difficulties were not part of our studies. There have been limited studies of quantitative measures of problem solving complexity. Nevertheless, based on these limited studies, it seems reasonable to claim that problem solving complexity is related to humans' understanding of the structure of a problem, or the relationships among the elements within a problem. This observation led us to posit several measures of the complexity of diagnosis problems for TASK1 (Figure 1) and TASK2 (Figure 2). One measure is simply the number of components or nodes in the network. Of course, this does not take into account the specific symptoms of any particular problem. A measure that does relate to particular problems is the optimal solution, that is, the minimum number of tests
needed to correctly diagnose a failure. Another problem-dependent measure is the number of relevant relationships that must be addressed to find the failure; in other words, how much of the problem structure a human must deal with in order to choose a test based on the observed symptoms. Finally, a fourth measure is an information theoretic metric involving the sum of p_i log(1/p_i) over all nodes that can reach at least one observed symptom. These four measures of complexity were evaluated in terms of their ability to predict the time a subject took to find the failed component. Of course, the time required is related to the choices made. Good test choices will result in faster solutions. Poor test choices can result in much slower solutions. To handle this phenomenon, the overall complexity of a problem was defined as the sum of the measures of complexity addressed before each test choice. Such summation was only relevant to the third and fourth measures - number of relevant relationships and the information theoretic measure. The results of correlating problem solving time of aircraft mechanics (Rouse, 1979b) with alternative measures of complexity were as follows:

• Number of Components (TASK1, r = 0.261)
• Optimal Solution (TASK1, r = 0.550)
• Number of Relevant Relationships (TASK1, r = 0.839; TASK2, r = 0.770)
• Information Theoretic Metric (TASK1, r = 0.837; TASK2, r = 0.840)
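As a sketch, the fourth measure is a few lines of code; treating each candidate node as equally likely is an assumption made here for illustration, and the overall problem complexity is then the sum over successive test choices.

```python
from math import log2

def info_metric(candidates, prior=None):
    """Sum of p_i * log2(1/p_i) over the nodes that can reach at least
    one observed symptom; defaults to a uniform prior over candidates."""
    if prior is None:
        prior = {n: 1 / len(candidates) for n in candidates}
    return sum(prior[n] * log2(1 / prior[n]) for n in candidates)

# Candidate sets faced before each of three successive test choices:
steps = [{1, 2, 3, 4, 5, 6, 7, 8}, {2, 3, 5, 8}, {3, 5}]
print(sum(info_metric(s) for s in steps))  # 3.0 + 2.0 + 1.0 = 6.0 bits
```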
Note that network size was not varied and the optimal solution was not available for TASK2. These results support the hypothesis that the amount of problem-specific structure that must be considered to solve a problem relates to the complexity of the problem, at least in terms of time to solve the problem. Of course, we need to keep in mind that these measures are only good predictors relative to the intent to find the failed component. When a system is operating normally, it may not seem particularly complex. Your intention at that point is simply to use the system. However, once your intention changes to figuring out what is wrong with the system,
complexity can soar because of the need to address underlying system relationships that one usually ignores. Thus, complexity is related to the intentions of the observer. Using systems typically involves much less complexity than fixing systems. Observing the beautiful flowers as I walk across campus involves much less complexity than the observations of a botanist trying to understand how these flower-bearing plants mutate over time. From this perspective, complexity is associated with the relationship between the system of interest and the observer's intent. Complexity is also related to the expertise of the observer. Having solved hundreds of Su Doku problems, my abilities to recognize familiar patterns are much better than when I first started doing these puzzles. Such abilities have long been recognized as key to the expertise of chess masters, for example. Beyond recognizing structural patterns, expertise can also be context specific. It is much less difficult to take apart your laptop, figure out what has failed, and repair it when you have done this many times before, even when the specific failure has not been previously experienced.
Context-Specific Simulation

Almost all failure diagnosis occurs in the context of automobiles, airplanes, computers, etc. Problem solving in these contexts involves using both knowledge of the structure - or topography - of the system and knowledge of patterns of symptoms as they relate to possible causes. To better understand the interplay of these two types of knowledge, Russ Hunt developed a troubleshooting simulator called FAULT (Framework for Aiding the Logical Understanding of Troubleshooting). FAULT was designed to simulate the problem solving aspects of troubleshooting and was applied to simulating systems such as mechanical powerplants, autopilots, helicopters, ship systems, and the space shuttle (Hunt & Rouse, 1981). FAULT is a direct extension of TASK1 and TASK2. The main difference is that the networks in FAULT represent actual systems such as aircraft and automotive powerplants. An example network for a turboprop aircraft engine is shown in Figure 3. FAULT randomly selects one of the components to fail and then computes the effects on the other components according to the structure of the network. The problem is then presented to the subject on a computer
Figure 3. FAULT Representation of Turboprop Aircraft Engine
display shown in Figure 4. Subjects use a hardcopy of the network representation as shown in Figure 3 and the information in Figure 4 to diagnose the failure. Given the symptoms (shown at the top of Figure 4), they proceed to read gauges, observe the state of connections, request information about components, compare components in terms of failure rates and costs, bench test components, and replace components. The results of three observations followed by a bench test are recorded with their respective costs in the lower left of Figure 4. Upon completion of a problem, subjects are given feedback on the actual failure, the action they took, and the total costs of these actions. Our first experiment with FAULT involved training subjects with TASK1 in parallel with training them to use FAULT for troubleshooting automotive engines, and then assessing their performance troubleshooting unfamiliar aircraft engines. Both first-semester (N=60) and fourth-semester (N=26) aircraft maintenance trainees were studied. Half of each population was trained with the computer-aided version of TASK1, while the other half used TASK1 without computer aiding. All subjects were instructed to minimize the overall cost of solution.
Figure 4. FAULT Troubleshooting Display (symptoms as gauge readings - torque, turbine inlet temperature, fuel flow, tachometer, oil pressure, oil temperature, fuel quantity, and ammeter; six command choices - observation, information, replace a part, gauge reading, bench test, and comparison; and a running log of actions, costs, and parts replaced)
The results of this experiment indicated that training with the computer-aided version of TASK1 enhanced the context-specific performance of trainees, especially for the unfamiliar aircraft engines. Fourth semester subjects trained with computer aiding spent less money and gained more information per action. First semester subjects trained with computer aiding spent less money but gained less information per action. Thus, transfer of aided training was clearly positive for fourth semester trainees, but mixed for first semester trainees. The mixed results for first-semester trainees are similar to the results of the earlier experiments with TASK1 and TASK2. A consistent interpretation of this set of experiments leads one to conclude that context-free training may be more consistently valuable for trainees who are farther along in their training program. Another possibility is that context-free tasks can be used to select trainees who are more likely to succeed later in the training program.
Cognitive Style

Additional assessments were performed using the subjects in Russ Hunt's experiment just discussed and Bill Johnson's experiment discussed below. The Matching Familiar Figures Test (MFFT) and Embedded Figures Test (EFT) were administered following subjects' training with TASK and FAULT. Both of these tests require scanning a visual field and seeking a solution with a high degree of uncertainty (Messer, 1976). Thus, it appeared that the cognitive styles measured by these tests might be relevant to solving failure diagnosis problems. Measures of interest were response time and errors for the MFFT and EFT. MFFT uses time until first response, while EFT uses time until correct response. Correlating these measures with failure diagnosis performance, we found significant correlations at the 0.4-0.5 level, which is quite good for these types of assessments (Rouse & Rouse, 1982). Specifically, we found that more errors on MFFT were related to more time, cost, and errors for TASK and FAULT. Longer time on MFFT was correlated with less cost for TASK and fewer errors for FAULT. Thus, in cognitive style terms, it appeared that "reflectives" achieved better failure diagnosis performance. To test this hypothesis, subjects were classified as either reflective or impulsive and a one-way analysis of variance was done for the measures of performance where significant correlations were found. It was found that
reflectives made significantly fewer redundant and unnecessary actions. Thus, it can reasonably be concluded that reflectives make fewer errors in the process of diagnosing failures in the types of tasks and contexts studied.
Measures of Performance

The experiments discussed thus far employed a variety of measures of performance related to the time and accuracy of diagnosis. In an effort to determine whether we were employing the best measures, Dick Henneman considered the full range of measures reported in the large literature addressing human performance in failure diagnosis tasks (Henneman & Rouse, 1984a). Reviewing this set of past studies, he identified 30 measures of ability, aptitude, cognitive style, and task performance. This included 3 measures of the product (results) of diagnosis, 15 measures of the process of diagnosis, 5 measures of ability, 3 measures of aptitude, and 4 measures of cognitive style. He used correlation, regression, and factor analyses of these measures assessed for aircraft mechanics diagnosing faults in both simulated and live equipment. He found that the 18 measures of the product and process of diagnosis reduced to three - errors, inefficiency, and time. Thus, training needs to help people avoid incorrect actions, choose actions that are more diagnostic, and perform more quickly. This may require more than one training method. For example, training with TASK improves errors and time in diagnosing failures in FAULT, but not inefficiency of diagnoses. For the 12 predictive measures of ability, aptitude, and cognitive style, it was found that trainees' ACT (American College Testing) scores, when combined with the measures of cognitive style discussed earlier, yielded multiple regression correlations of 0.6 to 0.8 for predicting diagnostic performance in TASK and FAULT. These results have implications for selecting people to be trained for failure diagnosis, as well as the possibility of tailoring training programs to individual ability and style.
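The flavor of these analyses can be sketched with synthetic data. Everything below - the arrays, the coefficients, and the PCA-style stand-in for the factor analysis - is an illustrative assumption, not the study's data or exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 40 mechanics x 18 product/process measures of diagnosis.
X = rng.normal(size=(40, 18))
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
print(((s**2) / (s**2).sum())[:3].sum())
# Variance captured by three components; on real, correlated measures
# this is where the errors/inefficiency/time structure would show up.

# Multiple regression of a diagnostic performance score on predictive
# measures (e.g., ACT score plus cognitive style), scored by multiple R.
P = rng.normal(size=(40, 4))
y = P @ np.array([0.5, 0.3, -0.2, 0.1]) + rng.normal(scale=0.5, size=40)
A = np.column_stack([np.ones(40), P])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.corrcoef(y, A @ beta)[0, 1])  # comparable to the 0.6-0.8 reported
```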
Summary of Experiments

Overall, ten experiments were performed with TASK and FAULT with a goal of understanding human problem solving performance, developing
computational models of human behavior and performance, and developing and deploying technologies for training and aiding people responsible for failure diagnosis in complex systems. The computational models are discussed in the next main section of this chapter, and the training and aiding systems are discussed in a subsequent main section. At this point, however, it is useful to summarize the results of the ten experiments in terms of the nature of human problem solving abilities in failure diagnosis tasks (Rouse & Hunt, 1984). Stated quite broadly, we found that humans are not optimal problem solvers, although they are rational and usually systematic. In general, their deviation from optimality is related to how well they understand the problem, rather than being related solely to properties of the problem. More specifically, suboptimality appears to be due to a lack of awareness (or inability when forced-paced) of the full implications of available information. For example, humans have a great deal of difficulty utilizing information about what has not failed in order to reduce the size of the feasible set. Human problem solving tends to be context-dominated, with familiar, or even marginally familiar, patterns of contextual cues prevailing in most problem solving. Humans can, however, successfully deal with unfamiliar problem solving situations, which is a clear indication that human problem solving skills cannot be totally context-specific. Their degree of success with unfamiliar problems depends on their abilities to transition from state-oriented to structure-oriented problem solving. Humans' abilities in the latter mode are highly related to their rank-ordering of problem solving rules rather than simply the number of rules available - more on this later. Thus, humans' cognitive abilities for problem solving are definitely limited. However, humans are exquisite pattern recognizers and can cope reasonably well with ill-defined and ambiguous problem solving situations. These abilities are very important in many real-life failure diagnosis tasks. What are needed, then, are methods for overcoming humans' cognitive limitations in order to be able to take advantage of humans' cognitive abilities. This conclusion sets the stage for human-centered design of training and aiding, which are discussed later in this chapter. Serendipity emerged as these experimental studies progressed. Jens Rasmussen and I were at the NATO Conference on Mental Workload in Mati, Greece in 1977. (We had first met at the NATO Conference on Human Supervisory Control the previous year.) We were sitting by the pool overlooking the Aegean Sea and drinking Greek wine. I had recently read his 1974 paper on troubleshooting published in Ergonomics. We talked about the first couple of experiments discussed above as they related
to his work. Before we left the poolside, he and I agreed to organize a NATO Conference on Human Detection and Diagnosis of System Failures, which was held in 1980, with the subsequent book published the next year (Rasmussen & Rouse, 1981). The relationship with Jens had a strong impact on the research discussed in this chapter. We had frequent and numerous disagreements on research philosophy, experimental methods, and how humans could and should be supported. Nevertheless, the discussions and debates always influenced the thinking of our research group, especially during Jens' yearlong visit at Georgia Tech. The serendipity emerged in terms of the creativity that resulted from these discussions and debates despite a clear lack of agreement.
Dynamic Process Plants

Thus far, our attention has been limited to maintenance personnel diagnosing failures of equipment systems. Such personnel seldom have to be concerned with operating the system (e.g., flying the airplane) at the same time that they are trying to identify and remediate failures. In Chapter 4, we discussed intelligent interfaces for operators of complex systems. In this section, we consider how people both operate complex systems and, in parallel, determine what has gone wrong with the system and what to do about it. We initially considered this topic in the context of aircraft operations, but subsequently focused for several years on process control. In the aircraft study, conducted in Germany, Gunnar Johannsen and I addressed abnormal and emergency flight operations where the two-person crew had to consider alternative courses of action to address the abnormality or emergency. The goal was to gain an understanding of how the crew planned these alternative courses of action (Johannsen & Rouse, 1983). The specific question addressed was how depth of planning for a flight phase or task within a phase is affected by criticality of the phase or task, the probability of increased difficulty addressing the phase or task, the workload being experienced, and flight performance. Using both sensed flight data and a range of rating scales, extensive data were collected in full mission experiments using a high-fidelity flight simulator for a small twin-engine jet aircraft and professional pilots whose job assignment was to fly this type of aircraft.
The data for depth of planning, as related to the other variables measured, indicated two types of planning in the operational scenarios studied. Time-driven planning appears to involve the monitoring of execution of a "script," for example, for a holding pattern executed in reaction to an external situation - in this case, the runway being closed. In contrast, event-driven planning seems to involve updating a script or creating a plan to address an unexpected event, for instance, an engine failure or hydraulic system failure. The impact of the availability of an aircraft autopilot was of particular interest. The autopilot decreased planning during the abnormal situation but increased planning during the emergency scenario. It appeared that during the emergency, the autopilot freed the pilots to devote more time to planning. In contrast, during the abnormalities, the autopilot assumed a significant portion of the task and appeared to lessen the need for planning. This result serves to emphasize the potentially subtle effects of automation. Further subtleties are discussed later in this chapter. The airplane operations environment was very complicated and many factors could not be manipulated. Consequently, data were expensive and limited in reflecting the numerous variations of interest. This limited our abilities to conduct systematic studies as we had done with TASK and FAULT. Nancy Morris, Janet Fath, and I decided to shift our focus to process control and develop an experimental environment to enable the types of studies we wanted. The result was PLANT (Production Levels and Network Troubleshooting), whose top level display is shown in Figure 5. There is no real-world counterpart of PLANT, although the dynamic behavior of PLANT is based on physical principles. Rather than physical fidelity to any particular domain, the goal with PLANT was psychological fidelity, in that this environment created problem solving opportunities similar to those that could be expected in supervisory control of complex dynamic processes (Morris, Rouse & Fath, 1985). The PLANT operator's task is to supervise the flow of fluid through a series of tanks interconnected by valves so as to produce an unspecified product. The operator's goal is to maximize production given the physical limitations of the system, such as tank or valve capacity and reliability of system components. Flow through the system is generally left to right, although sloshing may cause flows to move right to left. The level of the fluid in each tank is indicated by the number below the tank. The level in Tank C in Figure 5 has exceeded the acceptable maximum of 99 and the fluid has turned red to indicate this condition.
Lines between tanks indicate valves, of which there may be up to three. Valves "trip" if the flow through them exceeds 100. Operators can reopen valves. However, if the imbalance that created the excess flow is not remediated, valves tend to trip again quite soon. Input into the left column of tanks is controlled by specifying the number of units of fluid per tank to be pumped in per unit of time. System output via the right-hand tanks is specified in a similar manner. The operator needs to balance input and output to maximize production. PLANT production is self-paced; the state of the system is not updated until the operator presses "enter." The PLANT safety system includes the aforementioned valve trips due to excessive flow. Tanks with excessive levels can also lead to all input valves being tripped until the excessive level is remediated. Operators can trip the whole system if they feel imbalances are out of control. They can then open valves and set input and output levels to bring the system back up. Obviously, trips compromise the goal of maximizing production.
Figure 5. PLANT - Production Levels and Network Troubleshooting
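A toy update step conveys PLANT's self-paced flavor. Only the trip threshold of 100 and the level limit of 99 come from the description above; the two-tank topology, tank names, and push-everything flow rule are invented for illustration.

```python
def step(levels, valves, inflow, outflow, capacity=99, trip_flow=100):
    """One self-paced update: fluid moves through open valves, a flow
    above trip_flow trips the valve, and a level above capacity alarms.
    'A' is the assumed input tank and 'F' the assumed output tank."""
    for (src, dst), state in valves.items():
        if state != "open":
            continue
        flow = levels[src]                  # naive rule: push the whole tank
        if flow > trip_flow:
            valves[(src, dst)] = "tripped"  # excess flow trips the valve
        else:
            levels[src] -= flow
            levels[dst] += flow
    levels["A"] += inflow                     # operator-set input rate
    levels["F"] -= min(outflow, levels["F"])  # operator-set output rate
    alarms = [tank for tank, v in levels.items() if v > capacity]
    return levels, valves, alarms

levels, valves = {"A": 40, "F": 20}, {("A", "F"): "open"}
print(step(levels, valves, inflow=30, outflow=25))
```

The design tension the experiments probed is already present here: pushing more fluid raises production but also raises the chance of trips, and a tripped valve immediately re-trips unless the operator fixes the underlying imbalance.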
As in real process plants, there are a range of possible malfunctions in PLANT. Valves may fail, causing flow through the associated pipe to cease, although the graphic in Figure 5 will continue to show the valve as open. Pump failures cause the flow through all valves emanating from a tank to cease. The graphic would, however, still show the valves as open. A third type of failure is a tank rupture. These three types of failure, once diagnosed, can be repaired by dispatching the repair crew, of which only one is available. Valves and pumps require 5 and 15 time units, respectively, to be repaired. Their functionality is unavailable during repair. The fourth type of malfunction is a failure of the safety system. Such failures can be manifested by valve and pump trips not happening when they should, or trips happening when they should not. The repair crew requires 20 iterations to repair the safety system. Operators can choose to continue operation without the safety system, although irreparable damage can happen to valves and pumps if they exceed 110% of capacity. Finally, PLANT has a set of general operating procedures. One is for PLANT startup. The others address imbalance problems, both between and within columns. These five procedures provide proven heuristics for safely and productively recovering from imbalanced tank levels that cause trips and undermine production. Several initial experiments were performed using PLANT to determine interesting levels of the numerous parameters in the simulation. The focus of our first large-scale (N=32) experiment was on the nature of information required to control PLANT effectively during normal situations, and deal with unusual circumstances should they arise (Morris & Rouse, 1985a). The primary comparison was between two types of instruction: 1) the aforementioned operating procedures, and 2) a description of dynamic principles and functional relationships in PLANT. It is important to note that the distinction between these two types of training was fairly controversial at the time. The accident at the Three Mile Island nuclear power plant in 1979 had prompted intense focus on failures of complex systems (e.g., Perrow, 1984). Numerous pundits argued at the time that operators of complex process systems needed engineering education. Some suggested that an engineering degree should be required for such operators. Four combinations of PLANT training were studied: 1) procedures, 2) principles, 3) neither procedures nor principles, and 4) both procedures and principles. Subjects' experience with PLANT included controlling it in both familiar and unfamiliar circumstances, distinguished by the types of
failures that occurred. Pump and valve failures were familiar in that they occurred two to five times per hour, and the manner in which they should be dealt with was discussed in the procedures. Tank ruptures and failure of the PLANT safety system were unfamiliar in that, even though all subjects knew that such failures were possible, each occurred only once in a subject's experience (near the end of the experiment), and the procedures did not describe how they should be handled. Despite the fact that differences in instruction provided subjects with differing PLANT-related knowledge, as indicated by scores on a paper-and-pencil test administered at the end of the experiment, instruction had no effect on subjects' achievement of the primary goal of production. Subjects receiving procedures controlled PLANT in a more stable manner, and were more consistent with each other than those not receiving procedures. Subjects trained with procedures had fewer automatic valve trips, a higher average number of valves open at any time, and lower variance of fluid levels (i.e., tank heights) within the system. This did not, however, result in greater production. Surprisingly, instruction had no effect upon subjects' diagnosis of the unfamiliar failures. All subjects but one correctly diagnosed and repaired the tank rupture. Roughly half of the subjects in each instruction group correctly diagnosed and repaired the safety system failure. Thus, knowledge of principles did not help subjects diagnose the unfamiliar failures. In light of the above results, the following conclusions were drawn:

• Provision of procedures can help operators to control systems more effectively.
• Knowledge of theoretical principles does not guarantee that such knowledge will be used when necessary.
• Attention should be devoted to methods for helping people to use the knowledge they have when circumstances require it.

PLANT was also employed for an award-winning (from NASA) series of studies of human operator responses to error-likely situations in complex systems (Morris & Rouse, 1993). This research was conducted with two objectives in mind: 1) investigating the causes of human error, and 2) investigating the relationships between error and mental workload. To pursue these objectives, PLANT was extended to include more types of
failures and automatic updates - forced pacing. We also studied the impact of incentives and rewards, namely, production bonuses. The overall results of these experiments provide a demonstration of the adaptability of humans. Operators of PLANT adapted their strategies and choices of actions to the likelihood of error, perceived effort, and reward contingencies. Not surprisingly, production bonuses caused operators to shift priorities from maintaining PLANT stability to maximizing production. Analysis of the other forms of adaptation led to identification of two potential principles that were mentioned in Chapter 4. First, people encountering a situation in which errors appear likely respond to that situation by trying to reduce the likelihood of error. They do this by either controlling the situation or controlling themselves (e.g., they are more careful or they change strategies). Errors result if people are placed in situations to which they cannot adapt for some reason. It is possible that people do not perceive the need to adapt until errors have already occurred. However, this was not observed in these experiments. Second, when experiencing an increase in perceived effort above some acceptable threshold, people attempt to reduce the level of perceived effort. The options of controlling the situation or controlling themselves again apply. This interpretation of the dynamics of perceived effort observed in these experiments is consistent with common reports of humans "economizing" in decision making. There are interesting implications of these principles. For example, the problems associated with attempting to identify human "error rates" become clearer. In light of humans' adaptive tendencies, the concept of human error rate may make little sense. Rather than questioning the likelihood of error in a statistical sense, a more important issue is the identification of factors that limit humans' ability to adapt a situation to themselves or vice versa. These studies also raise the issue of the generalizability of results obtained in constrained experimental situations rather than situations in which more human discretion is possible. In order to understand human behavior in less constrained environments, one must be very careful in placing constraints on the environment, despite the attractiveness of such constraints in terms of experimental control. Human adaptation is the norm rather than the exception, and effort should be devoted to identifying the precipitating conditions and ways in which humans are likely to adapt. As indicated in earlier chapters, most traditional studies of human behavior have focused on abilities to conform to the task requirements and environmental constraints. As such, these studies have focused more on
human limitations than human abilities. They expose what humans cannot do, or do well, rather than what potential they have to excel. I noted earlier that this book, in some sense, follows a path from rather constrained operators and maintainers to much less constrained researchers, designers, and managers. As we follow this path, it will be very clear that the nature of the approach to research changes substantially in the process.
Large-Scale Dynamic Networks

TASK, FAULT, and PLANT are similar in that you can see the whole system at once. Many important systems are not like this. Large-scale systems like communications networks, transportation systems, and power grids are structured as hierarchical networks consisting of a very large number of nodes and arcs. While many functions associated with such systems are automated, there are inevitably instances, such as during failure situations, that the automation will not be programmed to handle. In these instances, control will revert to humans. Our next step in this line of research was to assess human abilities to succeed in such situations. Dick Henneman and I chose the context of communications networks. A major feature of such systems is their high degree of automation. Messages are sent through the systems via direct or alternative paths that have been predetermined. These systems operate under normal conditions without any human intervention. The switching stations, serving as repositories of network intelligence, automatically perform such tasks as: 1) determining source, destination, and path through the network, 2) testing lines for busy conditions before establishing a path, and 3) continually checking circuit conditions. Using such features as a general model, Dick developed a computerized simulation of a generic large-scale system. This simulation is referred to as MABEL because of the obvious connotation, but also because it requires human operators to Monitor, Access, Browse, and Evaluate Limits in the process of controlling the system (Henneman & Rouse, 1984b). The MABEL display is shown in Figure 6. The MABEL screen is divided into several sections. The upper right portion of the screen displays a cluster of nodes. The numbers to the left of each node identify the node, while the numbers inside each node represent the current queue size - the number of customers waiting to be served. This portion of the screen is updated approximately every two seconds. A different cluster of nodes is viewed by entering an appropriate command.
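The structure can be sketched as nested dictionaries. A two-level hierarchy with invented cluster names, sizes, and summary statistics is assumed here; MABEL itself went deeper and larger.

```python
import random

rng = random.Random(2)
# cluster id -> {node id -> queue size (customers waiting to be served)}
network = {
    f"A{c}": {f"A{c}.{n}": rng.randint(0, 30) for n in range(1, 17)}
    for c in range(1, 5)
}

def monitor(cluster):
    """A 'monitor'-style command: summarize a lower-level cluster from
    above without descending into it."""
    queues = list(network[cluster].values())
    return {"max": max(queues), "mean": sum(queues) / len(queues)}

def access(cluster):
    """An 'access'-style command: descend and display the nodes themselves."""
    return network[cluster]

print(monitor("A2"))  # stay high in the hierarchy and scan for trouble
print(access("A2"))   # drill down when a cluster looks suspect
```

The two commands correspond to the two strategies operators adopted in the experiments described below: monitoring from above when failures were rare, and descending to test nodes directly when failures were frequent.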
Figure 6. MABEL - Monitor, Access, Browse, and Evaluate Limits
The lower right portion of the screen is an aid to help the user identify the currently displayed cluster. Each letter (A, B, C) represents a level in the hierarchy. Each number (1, 2, 3, ...) represents a node or a cluster. Bright and dim characters are used to indicate the operator's current position in the hierarchy. A row of characters that is completely bright represents the cluster that is currently displayed on the screen. One bright character in a row of dim characters indicates the node above the currently displayed cluster.

The upper left portion of the screen displays the time, while the lower left is where operators input their actions. The middle left portion is used to display a variety of operator-requested information. Figure 6 shows current System Statistics. The operator can also choose to display Cluster Statistics. In general, the operator can access cluster displays, monitor critical system variables, test for failures, and control the system load, as well as request maintenance on failed nodes.

Under normal circumstances, MABEL operates automatically without any intervention from the human monitor. When a node failure occurs, however, the operator must act to diagnose and repair it. Node failures may occur due to malfunctioning equipment or loads that exceed node capacities. Failures result in upstream customers being held, which can lead to further capacity failures. Without intervention, cascading failures can cause most, and perhaps all, of the system to fail. Operators locate failures by monitoring critical system states and testing suspect nodes or clusters of nodes. If a failure is found, operators dispatch a crew to repair the node. If the system becomes too crowded with customers, the operator can reduce the number of customers admitted to the system.

Our first experiment with MABEL studied the effects of cluster size (4, 9, and 16) and number of levels (2 or 3) in the hierarchy, as well as low vs. high node failure rates. Measures of performance included product measures (number of customers served and average time for successful service) and process measures (diagnosis errors, average time to diagnose failures, and allocation of time among types of activities). Product measures were normalized by subtracting the optimal level of each measure from operators' performance results.

Overall, this experiment produced three results of particular interest. First, performance improved with cluster size. At first this seemed counterintuitive, but it reflects the fact that node failures have much more far-reaching effects in small networks. Second, the effects of the number of levels in a hierarchical display system can be very strong, producing up to five-fold degradations for a modest change from two to three levels. Third, strategies can be adapted to compensate for increased failure rates. Operators in low failure rate conditions tended to stay at higher levels and use monitor commands to assess the states at lower levels. In contrast, operators in high failure rate conditions tended to actually access the lower levels and perform tests to diagnose failures. As a result, operator performance was not affected by failure rates.

Dick's next step in this research involved extending MABEL to CAIN (Contextually Augmented Integrated Network), shown in Figure 7 (Henneman & Rouse, 1986). The physical structure of MABEL was retained. However, geographic information was added to enable accessing clusters and nodes by name rather than only by moving up and down the hierarchy. Further, failure rates and non-uniform loading were geography dependent; for example, load depended on the time of day in a geographical region.

The experimental study with CAIN involved two independent variables: 1) number of levels in the hierarchy (2 or 3), as was studied with MABEL, and 2) level of redundancy or connectivity (6 or 13 connections per node). Cluster size was held constant at 16. Dependent measures included those used with MABEL as well as measures of "structural" and "strategic" complexity. Structural complexity relates to the physical characteristics of the network. Strategic complexity, in contrast, relates to how human operators address failures of the network. The impact of complexity is that more complex systems result in longer failure diagnosis times and lower percentages of failures correctly diagnosed. This study focused on correlating the impact of complexity with various measures of structural and strategic complexity over time.

Our overall finding was that complexity is a dynamic property of a human-machine system. Complexity is not due solely to the structure of the system, although a system may certainly be complex due to its structure. Complexity also arises when humans try to solve problems within the system's environment, do not understand the structure and, as a result, issue inappropriate commands, misinterpret information, etc. In short, systems are also complex due to humans' understanding of the system as reflected by their strategies.
Figure 7. CAIN - Contextually Augmented Integrated Network
This finding has important implications. As the system becomes more complex due to increased levels and redundancy, it becomes more resistant to the effects of system failures and normal operations are enhanced. The effects of any one failure are minimized due to the number of alternative paths through the system. On the other hand, as the system becomes more complex, the task of finding system failures becomes more difficult. Although the system's design characteristics can help to avoid the short-term effects of failures, they can have the dual effect of making the humans' task of finding failures much more difficult.

This suggests a need for another class of automation, or aiding, which helps humans deal with failures of the "normal" automation, whether those failures affect them directly or indirectly by masking failure symptoms. This echoes the finding discussed in Chapter 4 that supertanker engineers had considerable difficulty diagnosing failures of the automatic control systems associated with the ship's propulsion system. Clearly, this is a central issue in our increasingly automated society.
Human Abilities, Limitations, and Inclinations

Our series of empirical studies of human detection and diagnosis of failures covered roughly 10 years. Nancy Morris undertook an effort to bring together all that was learned, not only by our research group but also by a wide range of other researchers (Morris & Rouse, 1985b). Thus, the following conclusions relate to a much larger body of research than solely that presented above.

Human failure diagnosis performance has been found to degrade as systems increase in size or complexity, and if time constraints are imposed. If the system involved is large or complex, and/or if time is limited, then one or more of the following approaches should be considered:

• Require a relatively simple strategy

• Provide more instruction and practice in applying the instruction

• Select humans with higher ability

• Provide some kind of performance aid such as "bookkeeping" or procedures
There are obviously tradeoffs involved in adopting any of these approaches, and the choice should depend upon such contingencies as the population from which personnel may be drawn, the resources that may be allocated to each of these, and the extent to which each approach may be expected to lead to acceptable performance.

Evidence suggests that as instruction is made more explicit, human diagnostic behavior will probably be affected in the manner desired. Generally, if humans are left to discover the best strategy for a given situation, this discovery will not occur. This interpretation is consistent with virtually all the research reviewed, although perhaps the best illustration is the overall ineffectiveness of theoretical instruction in enabling good failure diagnosis performance.

The most effective means of assuring that humans will successfully employ an appropriate strategy is to proceduralize the task. Positive effects were found in each study in which procedures were employed, even when humans had limited experience or moderate ability. Of course, there are limits to proceduralization in that not all contingencies can be foreseen. Thus, explicit instruction in diagnostic algorithms or heuristics can be a useful addition. We return to a discussion of training methods in a later section of this chapter.

It is interesting to note that one of the motivations for this long series of studies was a keen interest in the extent to which people could be "general" problem solvers. Could we train and aid people to effectively take into account what has not failed, half-split the feasible set, and so on, regardless of context? The overall answer is, "No." However, we repeatedly observed the following phenomenon. As we used TASK and FAULT to train people in different contexts - automobiles, airplanes, ships, etc. - we found that many people, after a considerable number of failures diagnosed across these contexts, would volunteer general principles such as those noted above. They seemed to need a range of context-specific failure diagnosis experiences in order to be able to generalize. This serendipitous finding has fundamental implications for training that are discussed in a later section.
MODELS OF DETECTION AND DIAGNOSIS

In parallel with the series of empirical studies discussed above, we developed a series of computational models of human behavior and performance. As indicated in earlier chapters, our research philosophy has been that deep understanding of a phenomenon requires explicating the underlying mechanisms sufficiently to be able to replicate the phenomenon in a model. In particular, the independent variables of interest should impact the model in the same ways that humans are impacted (Rouse, 1981; Rasmussen & Rouse, 1981).
Initial Fuzzy Set Model

The first model developed focused on human performance in TASK1, shown in Figure 1 (Rouse, 1978b). In this model, failure diagnosis was represented as an iterative two-phase process, whereby one first separates the set of nodes into those that could possibly be the source of unacceptable outputs (i.e., the feasible set) and those that could not possibly be the source of unacceptable outputs (i.e., the infeasible set). One then selects a member of the feasible set, tests it, and integrates the results of the test into a new partitioning of nodes into feasible and infeasible sets. This process continues until the failure is isolated or, perhaps, until one is unwilling to continue.

We expected people to have difficulty crisply delineating feasible and infeasible sets, especially for larger networks. Consequently, we adopted fuzzy set theory (Zadeh, et al., 1975; Rouse, 1983b) for formulation of this model. The fuzzy feasible set was defined as the set of nodes that could reach all symptoms, that is, known 0 outputs. The fuzzy infeasible set was defined as the set of all nodes that could reach any known 1 outputs. Nodes that had high membership in the feasible set and low membership in the infeasible set were good candidates for testing.

Membership was defined in terms of the "psychological distance" between the node of interest and the 0 or 1 output in question. Psychological distance was defined to be a combination of functional and geographical distance. Functional distance was defined by the "reachability" matrix for each network, which could be derived from the "connectivity" matrix for the network. Geographical distance was defined by the spatial distance on the displayed network as shown in Figure 1. Nodes that were functionally related to outputs of interest (i.e., could "reach" them) were rank ordered by geographical distance. The resulting rank order was defined as the psychological distance.

The membership functions for the fuzzy sets each had two free parameters that defined how quickly membership declined with increasing psychological distance. It was found that only one of these parameters substantially affected model performance. This parameter related to the model's abilities to utilize the 1 outputs. Adjusting only this one parameter, the model was able to yield the same number of average tests until correct solution as humans, across different network sizes and several hundred failure diagnosis problems.

This model enabled a fairly concise explanation of human performance. In determining the cause of unacceptable outputs, humans are fairly good at using the topology of the network to form an initial set of feasible solutions from which to choose a test. However, they are ineffective at using the acceptable outputs and the network topology to update this feasible set by eliminating infeasible solutions. The benefit of computer aiding is that it minimizes the effect of humans' inadequate strategy.

This model was next extended to TASK2, shown in Figure 2 (Rouse, 1979c). The presence of feedback and redundancy required modification of the definitions of membership in the feasible and infeasible sets. A coefficient was added that was based on the inverse of the number of unknown inputs to OR components in the path from the node of interest to the 0 or 1 output in question. Otherwise, the model was the same as used for TASK1.

The model compared favorably with humans' performance in terms of average number of tests until correct solution. However, the comparison was much less favorable in conditions where humans had difficulty correctly diagnosing failures. The model did not provide a good description of humans' performance when they were not solving the diagnostic problems or, in other words, were making many mistakes. This requires a much richer model than discussed thus far.
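To make the selection mechanism concrete, the following sketch renders the feasible/infeasible membership logic in Python. It is an illustration of the idea rather than the original implementation: the reciprocal membership function, the parameter names, and the node representation are all assumptions made for this example.

```python
# Sketch of fuzzy test selection for TASK-style networks (illustrative only).
# dist_to_0[n] / dist_to_1[n]: psychological distances (rank orders) from
# node n to each known 0 output / known 1 output. Every node is assumed to
# have at least one distance to a 0 output.

def membership(distance, decay):
    # Membership declines with psychological distance; 'decay' is the free
    # parameter controlling how quickly (reciprocal form assumed here).
    return 1.0 / (1.0 + decay * distance)

def choose_test(nodes, dist_to_0, dist_to_1, feasible_decay, infeasible_decay):
    """Prefer nodes with high membership in the fuzzy feasible set (can
    reach all 0 outputs) and low membership in the fuzzy infeasible set
    (can reach some known 1 output)."""
    def score(n):
        feasible = min(membership(d, feasible_decay) for d in dist_to_0[n])
        infeasible = (max(membership(d, infeasible_decay) for d in dist_to_1[n])
                      if dist_to_1[n] else 0.0)
        return feasible * (1.0 - infeasible)
    return max(nodes, key=score)
```

In this rendering, the infeasible_decay parameter plays the role of the single parameter found to matter, namely the model's ability to utilize the 1 outputs.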
Initial Rule-Based Model

In search of a deeper explanation, we worked with Susan Pellegrino to model the specific sequence of tests chosen, not just the average number of tests required to solve a problem (Rouse, Rouse & Pellegrino, 1980). Conceptually, the model adopted was quite straightforward. We assumed that humans have rules they employ when addressing a problem. The first rule is a stopping rule, i.e., has the problem been solved? The subsequent rules look for patterns that satisfy the prerequisites for particular types of tests.
Given that the rules are rank ordered, humans are assumed to employ the highest ranked rule that applies. Thus, since the stopping rule is ranked first, they will always stop if that rule's prerequisites are satisfied. With this structure for the model, the question then becomes one of identifying the rules and their rank ordering. This question was addressed by having experts view replays of humans' failure diagnosis sessions and then propose candidate rules. We then employed a rank-ordering algorithm to find the best-fit set of rules and ordering.

The "fit" of the model was defined in terms of two measures. One measure was the percent identical tests, that is, how often the model and human behaved identically. The second measure was percent similar tests, that is, how often the model and human chose tests that were equivalent. For the second measure, tests of any one of the inputs to a suspect node were often equivalent, as the order in which the inputs were tested has no impact on diagnostic performance.

The model was compared to the performance of 118 subjects for TASK1 and 36 subjects for TASK2. The model chose tests similar to those chosen by subjects 94% and 88% of the time, respectively, for these two tasks. It chose identical tests 52% of the time for TASK1. This metric was not determined for TASK2 due to the computationally intense nature of the fitting process.

This modeling research led to the notion of rule-based aiding. After each test, the computer inferred the rule employed and then provided feedback on the efficacy of the rule, ranging from Excellent (E) to Good (G) to Fair (F) to Poor (P). Ratings of U and N were also provided when the test was unnecessary and when no further testing was necessary in order to designate the failed node. An experimental evaluation of this idea resulted in substantially degraded performance for both TASK1 and TASK2, in terms of both number of tests until correct solution and percent failures correctly diagnosed. It seemed that subjects' priorities shifted to collecting E ratings rather than solving the problems. Fitting the rule-based model to their performance with rule-based aiding supported this hypothesis.

This approach to rule-based aiding reflected another attempt to enhance the general problem solving skills of human diagnosticians. The above summary of Nancy Morris' broad review indicated that such support seldom results in the performance improvements imagined when the idea is initially developed. As is discussed later, it appears that general skills are best engendered by providing humans with a range of context-specific experiences from which they can glean general observations that are then supported with more formal training and aiding.
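Computationally, this rank-ordered rule model reduces to a simple dispatch loop, sketched below under assumed data structures; the actual rules and their ordering were, as described above, elicited from expert review and fit algorithmically.

```python
# Sketch of rank-ordered rule selection (illustrative only). Each rule is
# a (prerequisites, action) pair; rules are tried highest rank first, and
# the stopping rule occupies rank one.

def diagnose(state, ranked_rules, max_tests=100):
    """Apply the highest-ranked rule whose prerequisites are satisfied.
    A rule's action returns the next test to make, or None when the
    stopping rule fires (i.e., the problem has been solved)."""
    tests = []
    for _ in range(max_tests):
        for prerequisites, action in ranked_rules:
            if prerequisites(state):
                test = action(state)
                if test is None:          # stopping rule: diagnosis complete
                    return tests
                tests.append(test)
                state = state.with_result(test)  # assumed state-update method
                break
        else:
            break  # no rule applies; give up
    return tests
```

Fitting such a model then amounts to choosing the rule set and ordering that maximize the percent identical and percent similar tests against recorded human test sequences.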
Fuzzy Rule-Based Model

Russ Hunt focused on modeling failure diagnosis behavior and performance in FAULT (Figures 3 and 4). The rules in this model were grouped into two broad sets:

• S-Rules: Direct matches of symptoms to actions, for example, IF the engine will not start and the starter motor is turning and the battery is strong, THEN check the gas gauge.

• T-Rules: General rules based on problem structure or topography, for instance, IF the output of X is bad and X depends on Y and Z, and Y is known to be working, THEN check Z.

Based on our research, as well as that of others such as Jens Rasmussen and Gary Klein, the overall model assumed that people would employ S-Rules if at all possible, but revert to T-Rules if there were no applicable S-Rules (Rouse, 1983a). Within each set of rules, this model assumes that rules are candidates for selection if:

• The rule is recalled

• The rule is applicable to the current situation

• The rule is expected to be useful

• The rule is simple

Fuzzy set membership functions were defined for the sets of recalled, applicable, useful, and simple rules. The set of choosable rules was then defined as the intersection of these four sets. In terms of fuzzy set operations, membership in the choosable set equaled the minimum of membership in the other four sets (Hunt & Rouse, 1984).

To evaluate this fuzzy rule-based model, 34 airframe and powerplant maintenance trainees solved 85 problems involving troubleshooting functional network diagrams of six different automotive and aircraft systems. Analyses of a portion of trainees' problem solving transcripts enabled identifying initial sets of S-Rules and T-Rules. As an aside, it was difficult to identify rules that serve to update knowledge rather than resulting in an observable action. From listening to trainees, it was clear that such rules existed. This difficulty relates to the discussion of fundamental limits of modeling in Chapter 2.

As with the earlier evaluations of failure diagnosis models, these assessments were performed in two ways. One way compared model and trainee choices and then implemented the trainee's choice. As noted earlier, this was necessary to assure that both model and trainees were operating on the same information. The result of this comparison was that the model matched trainees' actions 50 percent of the time, and matched inferred rule choice 70 percent of the time. This result was achieved with no adjustment of model parameters to compensate for individual differences.

The second way the model was assessed was open loop. The model invoked rules and took actions without regard to trainees' choices. In this mode, the model spent nearly the same amount as the trainees, on average, for tests, repairs, and spare parts. It was not possible to compare time until correct diagnosis, as the time required to choose and invoke rules was not part of the model.

Another interesting comparison involved removing the S-Rules from the model. As expected, the model's performance was substantially degraded on the familiar automotive problems. However, on the unfamiliar problems such as the autopilot, a model with only T-Rules compared more favorably to trainees. This supported our evolving notion that general problem solving rules are more likely to be evidenced with unfamiliar problems.
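The choosability computation at the heart of this model is compact enough to sketch. The membership values and threshold below are placeholders; in the actual model, memberships were derived from the definitions in Hunt and Rouse (1984).

```python
# Sketch of fuzzy rule selection (illustrative only). A rule's membership
# in the "choosable" set is the minimum of its memberships in the
# recalled, applicable, useful, and simple sets. S-Rules are preferred;
# T-Rules are the fallback.

def choosability(rule):
    return min(rule["recalled"], rule["applicable"],
               rule["useful"], rule["simple"])

def select_rule(s_rules, t_rules, threshold=0.5):
    # The threshold value is an assumption made for this sketch.
    for rule_set in (s_rules, t_rules):
        if rule_set:
            best = max(rule_set, key=choosability)
            if choosability(best) >= threshold:
                return best
    return None  # no choosable rule; behavior outside the model's scope
```

With the S-Rule list emptied, this same selection function reproduces the T-Rules-only comparison described above.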
Rule-Based Model for Dynamic Processes

Annette Knaeuper addressed problem solving in PLANT (Knaeuper & Rouse, 1985). In keeping with the tradition in this line of research, she named her model. KARL (Knowledgeable Application of Rule-Based Logic) was her choice, in part because it is her father's name. Interestingly, this tradition of naming simulations and models did not persist in subsequent streams of research, as later chapters will attest. My sense is that it was part of the research zeitgeist during the decade or so when this research was pursued.
KARL consists of a set of production rules that comprise the knowledge base and a control structure that accesses the knowledge base. The 180 rules in KARL include 140 rules specific to PLANT and 40 less context-specific rules. The rules are embedded in a framework associated with the four classes of tasks in PLANT:

• Failure rules for detection, diagnosis, and correction of failures

• Transition rules for operating during process transitions

• Tuning rules for normal operations and failure compensation

• Procedural rules for standard sequences of rules

To evaluate this model, KARL's behavior and performance were compared to those of 32 subjects from Nancy Morris' initial PLANT experiments. Evaluative measures considered were total production, process stability, and action-by-action comparisons. Total production with KARL compared favorably to that of the 32 human operators. KARL maintained a somewhat more stable PLANT than the human operators.

The action-by-action comparison indicated that KARL and operators agreed 18% of the time in terms of using the same command with the same arguments (e.g., valve designation), 35% in using the same command but not necessarily the same arguments, 53% in using the same type of command, and 60% in using the same type of command within two iterations. These are favorable comparisons when one considers that operators were simultaneously trying to maximize production and detect, diagnose, and correct failures.

Perhaps the greatest insight from this comparison involved the operators trained with procedures (two groups of eight operators). Morris found that these operators maintained a more stable PLANT. As we expected, the comparison of KARL with these operators was more favorable than that with the operators who had not received procedural training. However, detailed analyses of operators' choices indicated that, despite the fact that they knew when and how to use the procedures, as evidenced by a written test, they often did not follow the procedures - at least not as closely as KARL.

This observation led to a novel question. Could KARL improve subjects' performance by providing online advice? To an extent, this serendipitous possibility is tantamount to having a model of yourself helping you. This begs the question of whether two of you are better than one! The results of exploring this intriguing notion are discussed in a later section of this chapter.
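Returning to the action-by-action comparison, the four graded agreement measures can be sketched as follows. The (type, name, arguments) command representation is an assumption made for this illustration.

```python
# Sketch of graded model-operator agreement (illustrative only). Commands
# are assumed to be (type, name, args) tuples, e.g.,
# ("flow", "open_valve", ("V3",)). Measures: same command and arguments;
# same command; same type of command; same type within two iterations.

def agreement(model_seq, operator_seq, window=2):
    n = min(len(model_seq), len(operator_seq))
    counts = {"command+args": 0, "command": 0, "type": 0, "type_in_window": 0}
    for i in range(n):
        m, o = model_seq[i], operator_seq[i]
        counts["command+args"] += (m == o)
        counts["command"] += (m[:2] == o[:2])
        counts["type"] += (m[0] == o[0])
        nearby = operator_seq[max(0, i - window):i + window + 1]
        counts["type_in_window"] += any(m[0] == c[0] for c in nearby)
    return {measure: count / n for measure, count in counts.items()}
```

By construction, the four measures are nested, so they increase monotonically from the strictest (same command and arguments) to the loosest (same type within the window), as in the 18%, 35%, 53%, and 60% figures above.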
Modeling Failure Detection

The symptoms of failures are very clear in TASK and FAULT. In PLANT and MABEL, the effects of failures are not immediately evident. The impacts of failures require time to propagate and create observable symptoms. For many types of complex system failures, failure detection is as difficult as diagnosis.

Joel Greenstein pursued the problem of failure detection and developed a model of how humans monitor complex processes to both detect "events" and then allocate their attention once events are felt to have occurred (Greenstein & Rouse, 1982). The scenario of interest was simultaneous monitoring of multiple processes, each with an associated mean time between failures, mean time to repair, and cost of delaying repair of failures. The operators' task involved estimating the likelihood of events in each of nine processes, allocating attention to servicing those processes with sufficiently high likelihoods of events, and then returning to monitoring. Upon returning to monitoring, time would have advanced an amount equal to the sum of the mean repair times for each process inspected plus a constant for transition times. (Note that this task is similar to MABEL except that the process monitoring involved plots of process output - for instance, a chemical process - rather than queues of communications messages.)

The model developed includes a linear statistical pattern recognition model for estimating the likelihood of events and a queueing theory model for determining the appropriate allocation of attention. The model has four free parameters:

• The relative weighting assigned to feature values calculated over recent and older process measurements

• The number of process measurements over which features are extracted

• Two probability thresholds used to exclude or include actions in the action sequence on the basis of event probability only

This model was compared to operators' performance in two experiments. For the first experiment, which only involved event detection, the model detected 96% of the events detected by humans within two iterations of the humans' detection decisions. For the second experiment, the first two of the above parameters were estimated using the data from the first experiment, leaving only two free parameters for fitting the model to the data from the second experiment. The result was that the model responded to 90% of the process failures detected by humans and responded on the same iteration 78% of the time. The model and humans chose the same sequence of processes to service 93% of the time.

This model is notable for its simplicity. KARL included 180 rules; in contrast, this model includes a few relatively straightforward linear equations and a few decision heuristics drawn from queueing theory. In the context of FAULT and the fuzzy rule-based model, this failure detection model could be characterized as having one pattern recognition S-Rule and one sequencing T-Rule. Of course, this simplicity is in part due to having independent processes, one type of failure, and one type of repair action. Both FAULT and PLANT are much richer environments.

The fact remains, however, that pattern recognition is undoubtedly the way in which humans detect variations from normality in most tasks. While systems may include warnings and alarms, there are inevitably situations where humans' pattern recognition abilities are the final defense for detecting abnormalities. As noted in earlier chapters, these abilities are often a key reason for including human operators in complex systems.

Summary of Models

The models discussed above are summarized in detail elsewhere (Rouse & Hunt, 1984; Rouse, 1985). Of more interest perhaps is placing these models in the context of the wealth of models addressing human detection, diagnosis, and compensation for system failures (Rouse, 1983a). In this review article, I considered six different models of failure detection, eleven models of failure diagnosis, and several alternative approaches to compensation.

Contrasting the models of detection, those based on filter theory (e.g., Kalman filtering) are the best available if the requisite information (e.g., state transition models) to use them can be obtained. Otherwise, pattern recognition based models are most appropriate. In either case, these models are certainly not available "off the shelf." For example, considerable thought would be needed before these models could be applied to resolving the issues associated with humans' relative abilities to detect failures in automated systems.
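To give the flavor of the pattern recognition approach, here is a minimal detector in the spirit of the Greenstein model: a feature blends recent and older process measurements, its deviation from nominal is mapped to an event likelihood, and a threshold determines inclusion in the service sequence. The feature construction, the logistic mapping, and all parameter values are assumptions made for this sketch.

```python
# Minimal pattern-recognition event detector (illustrative only). Each
# process is assumed to be a list of at least 'window' measurements.

import math

def event_likelihood(measurements, nominal=0.0, window=10,
                     recent_weight=0.7, gain=4.0):
    mean = lambda xs: sum(xs) / len(xs)
    recent = measurements[-window // 2:]        # newer half of the window
    older = measurements[-window:-window // 2]  # older half of the window
    # Feature blends recent and older measurements; its deviation from the
    # nominal output is taken as evidence of an event.
    feature = recent_weight * mean(recent) + (1.0 - recent_weight) * mean(older)
    deviation = abs(feature - nominal)
    return 1.0 / (1.0 + math.exp(-gain * (deviation - 1.0)))  # squash to (0, 1)

def service_sequence(processes, include_threshold=0.8):
    """Indices of processes whose event likelihood exceeds the inclusion
    threshold, most likely first."""
    scored = [(event_likelihood(m), i) for i, m in enumerate(processes)]
    return [i for p, i in sorted(scored, reverse=True) if p > include_threshold]
```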
Comparing the variety of models of diagnostic behavior, it is clear that the unconstrained prescriptive models do not provide good descriptions of human behavior and performance. Information theoretic models may be adequate for estimating mean time to repair for particular systems, across all types of failures and personnel. For more fine-grained predictions of behavior (i.e., action sequences), models that involve use of both symptomatic and topographic information are likely to be needed.

There are limited models of failure compensation - operating the system while you are trying to find and remediate failures. There are numerous models of operator control behavior but few that are sensitive to whether or not the system is operating in a normal mode.

The models discussed in this chapter, in combination with those discussed in Chapter 4, provide a basis for representing how humans coordinate detection, diagnosis, and compensation. Figure 8 suggests how the various pieces discussed above may fit together. Humans have a clear preference for proceeding on the basis of state information by mapping perceived patterns to recalled actions. The use of structural information is definitely a less preferred approach. However, when the state information does not yield known patterns, humans will (eventually) address structural information. The extent to which this tends to happen relates to Table 1, discussed very early in this chapter.
Figure 8. Overall Model of Human Problem Solving
TRAINING AND AIDING

The modeling of human behavior and performance is interesting in its own right as a means to discovering and understanding human abilities and limitations. However, the research discussed thus far in this chapter was conducted in the spirit of human-centered design. Our goal was to design, evaluate, and deploy means for enhancing human abilities, overcoming human limitations, and fostering human acceptance. Put simply, we were seeking ways to train and aid humans to improve failure detection and diagnosis.
Training

Many of the experiments discussed earlier considered transfer of training, typically from context-free to context-specific failure diagnosis tasks (Rouse & Hunt, 1984). The transfer from aided to unaided fault diagnosis was also studied. The central issue, of course, is the transfer of training to the real job.

Bill Johnson pursued this issue. Bill was an instructor of airframe and powerplant maintenance at the Institute of Aviation at the University of Illinois at Urbana-Champaign. His Ph.D. dissertation focused on evaluating the efficacy of TASK, FAULT, and video-based instruction in terms of trainees' performance diagnosing failures of live equipment systems (Johnson & Rouse, 1982). The first experiment compared the three training methods, while the second experiment compared a combination of the two simulations to more traditional instruction.

The first experiment included 12 trainees in each of the three training groups. Following training, each trainee diagnosed five failures in live four- and six-cylinder engines found in general aviation aircraft. The five problems covered the electrical, ignition, lubrication, and fuel subsystems of the engines. Live performance measures included the quality of the sequence of actions taken, the time required (adjusted to the engine manufacturer's labor time schedule), and an overall evaluator's rating. The results of this experiment showed that traditional instruction was superior.

As noted above, TASK and FAULT were then combined into one training program. In addition, information was added to FAULT to indicate how to make tests, as it was discovered in the first experiment that FAULT-trained people often knew what to do but not how to do it. FAULT was also extended to provide feedback on unnecessary, redundant, and premature tests. (The error analysis that provided these insights is discussed in Chapter 4.)

In the second experiment, 11 trainees were in each of the two training groups. The results of the second experiment indicated that performance with TASK and FAULT was equivalent to performance with traditional instruction, even for those problems where traditional instruction explicitly demonstrated the solution sequence for the problems seen in the transfer of training conditions. These results suggest that trainees do not need to be provided with explicit procedures for dealing with each problem that they might encounter.

The results of Bill Johnson's investigations, as well as the many studies discussed earlier, led to the notion of mixed-fidelity training that is summarized in Figure 9 (Rouse, 1982-83). In our work, TASK provided the low-fidelity simulator, FAULT provided the moderate-fidelity simulator, and either real equipment or a full-scope simulator provided the high-fidelity simulator. As shown in Figure 9, these simulators were dovetailed with a curriculum that included more traditional lectures and demonstrations.
Figure 9. Mixed-Fidelity Approach to Training
Our first full implementation of mixed-fidelity training was at Marine Safety International. Doug Arnott was a graduate student in our research group at the University of Illinois. Although his research was not part of the failure diagnosis studies, he observed these efforts and, upon returning to MSI, told them about our simulators and mixed-fidelity training. His boss, Gene Guest, called me and asked if we would like to pursue this "for real." Soon after, Russ Hunt and I had formed Search Technology and we were under contract to MSI to develop and deploy a mixed-fidelity training program for supertanker engineering officers.

While MSI was intrigued with our notions of general vs. specific knowledge and skills, they also had a more practical problem they wanted solved. Their multi-million dollar engine control room simulator involved one trainee at a time, as there was typically just one engineering officer on duty at any particular time during supertanker operations. For a training class of eight, this meant that seven trainees had to be kept productively occupied while the one trainee was using the full-scope simulator. MSI felt that our moderate-fidelity simulator could fill this void.

We developed FAULT simulations of a range of ship propulsion systems and subsystems. The rollout of these simulations involved review by senior engineering executives of the oil company from which the trainees would come. These executives immediately griped about the FAULT simulations. They were too simple, often misleading, and sometimes erroneous. It looked like Search Technology was in trouble.

At the break, I asked Gene whether I could try a new approach. When the executives came back and sat down at their Apple IIs, they saw simulations of automobile engines, aircraft engines, autopilots, and so on, but no ship systems. They quickly got into the "fun" of diagnosing failures in these systems. While they did not use these words, they figured out how T-Rules would work if you did not have any S-Rules. Later in the day, we switched back to ship systems and now they got it. The day was saved!

This serendipitous experience led us to realize that we could teach some things better in unfamiliar contexts. Further, the effort invested in creating simulations in one domain could provide training value in other domains. Soon after, Bill Johnson joined Search Technology to head the training systems portion of the business. The experiences at Marine Safety International provided the foundation for building successful training systems for the military services, the utility industry, and a wide variety of customers.

It is interesting to note that the training simulator developed and evaluated for Aegis cruiser combat crews, discussed in Chapter 2, also exploited the mixed-fidelity concept to overcome the bottlenecks experienced with their full-scope simulator, which was so complicated that only two exercises could be run per day. The work with MSI, ten years earlier, did not address the nature of teamwork as was necessary for the Aegis environment. Nevertheless, the lower-fidelity team trainer provided the same relief of the high-fidelity resource, while also resulting in better trained teams.
Aiding

Training is a process of creating in people the potential to perform. In contrast, aiding is a process of directly augmenting people's performance. The studies reviewed in this chapter involved evaluation of several aiding mechanisms. Perhaps the most compelling was aided bookkeeping of the implications of failure symptoms and test results. Another form of aiding was feedback on irrelevant, unnecessary, and redundant tests. Rule-based aiding seemed to mislead people.

Annette Knaeuper and Nancy Morris (1984) devised a rather novel approach to aiding. Their idea was to use KARL - the rule-based model of operators controlling PLANT - as an assistant to operators. KARL provided a "second opinion" of sorts, although given that this advice was based on a model of the operator's behavior and performance in controlling PLANT, it is unlikely that this opinion was actually an independent assessment. KARL provided advice by displaying its assessment of the current situation in PLANT, as well as more specific advice on how to proceed. Experimental subjects tended to appreciate the advice provided by KARL, often anthropomorphized this aid, and in many cases were disappointed in the final condition of the assessment, where KARL was taken away. In general, using KARL as an aid was a good idea, although this idea needs much more evaluation.

There was one result in particular that was rather counterintuitive. KARL did not "know" about the possibility of safety system failures. As the earlier discussion of PLANT indicated, operators had considerable difficulty with this failure, either not understanding the somewhat confusing symptoms of this failure, or not finding the sources of these symptoms. Being rule-based, KARL was oblivious to this failure and provided reassuring assessments that everything was fine.
Eventually, all of the subjects in this initial experiment concluded that KARL was wrong and, subsequently, successfully diagnosed this failure. Thus KARL, by providing incorrect advice, prompted successful performance. This suggests that aiding can have quite subtle effects. In this case, the result was positive. However, it is also possible to imagine aiding leading humans "down the garden path," prompting needless attention to anomalous symptoms when there is no underlying problem.

This leads to a discussion that will be pursued in Chapter 7, namely, the need to provide aiding in the use of aiding. In the context of information search, we found that people would sometimes follow advice long past its usefulness. To remediate this problem, we needed to provide them with indicators of the productivity of the path they were following. This enabled them to decide when the advice no longer merited attention.

Of course, we need look no farther than Microsoft Office to find many instances where we need aiding (online help) to be able to successfully use the aiding (e.g., Excel capabilities). The design of such "meta aiding" can be a useful means for uncovering how the primary aiding might confuse and mislead people. This, of course, can lead to the redesign of the primary aiding.
Summary

The research discussed in this section on training and aiding is reviewed in more depth elsewhere (Rouse, 1985, 1991). The relationship of these findings to those of the broader research community is reviewed in Morris & Rouse (1985b). In the next chapter, we consider the relationships between training and aiding, and the possibility of dynamically moving between these two forms of support for humans in complex systems.
CONCLUSIONS

The detection and diagnosis of failures is a fascinating topic. Our complex economy and society are laced with opportunities for high-consequence system failures. We expect human operators and maintainers to deal with such failures and keep us from harm's way. Human-centered design can help to achieve this objective by:

• Enhancing Human Abilities: People have excellent pattern recognition abilities and can muddle through in extraordinarily ambiguous situations. We can help them by displaying patterns and highlighting ambiguities.

• Overcoming Human Limitations: People have difficulty bookkeeping the implications of everything they see and/or know. We can help them by showing them the implications of this information and dynamically updating these implications as they evolve.

• Fostering Human Acceptance: Failure situations can be quite stressful and people can have difficulty responding under pressure. Support that is designed with this phenomenon in mind is more likely to be embraced by operators and maintainers.

Human-centered design is crucial for enabling humans to fulfill their responsibilities in failure situations and fill the gaps that the design of complex systems inevitably creates, and sometimes aggravates.
REFERENCES

Carlson, J.N., & Doyle, J. (2002). Complexity and robustness. Proceedings of the National Academy of Sciences, 99(1), 2538-2545.

Doyle, J.C., Anderson, D.L., Li, L., Low, S., Roughan, M., Shalunov, S., Tanaka, R., & Willinger, W. (2005). The "robust yet fragile" nature of the Internet. Proceedings of the National Academy of Sciences, 102(41), 14497-14502.

Gell-Mann, M. (1995). What is complexity? Complexity, 1(1).

Greenstein, J.S., & Rouse, W.B. (1982). A model of human decision making in multiple process monitoring situations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(2), 182-193.

Henneman, R.L., & Rouse, W.B. (1984a). Measures of human problem solving performance in fault diagnosis tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(1), 99-112. (Reprinted in A.P. Sage, Ed., 1987, System Design for Human Interaction. New York: IEEE Press.)

Henneman, R.L., & Rouse, W.B. (1984b). Human performance in monitoring and control of hierarchical large-scale systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(2), 184-191.

Henneman, R.L., & Rouse, W.B. (1986). On measuring the complexity of monitoring and controlling large scale systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(2), 193-207.

Hunt, R.M., & Rouse, W.B. (1981). Problem solving skills of maintenance trainees in diagnosing faults in simulated powerplants. Human Factors, 23(3), 317-328.

Hunt, R.M., & Rouse, W.B. (1984). A fuzzy rule-based model of human problem solving. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(1), 112-120. (Reprinted in A.P. Sage, Ed., 1987, System Design for Human Interaction. New York: IEEE Press.)

Johannsen, G., & Rouse, W.B. (1983). Studies of planning behavior of aircraft pilots in normal, abnormal, and emergency situations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), 267-278.

Johnson, W.B., & Rouse, W.B. (1982). Training maintenance technicians for troubleshooting: Two experiments with computer simulations. Human Factors, 24(3), 271-276. (Reprinted in R.W. Swezey & D.H. Andrews, Eds., 2000, Readings in Training and Simulation: A 30-Year Perspective. Santa Monica, CA: Human Factors and Ergonomics Society.)

Knaeuper, A.E., & Morris, N.M. (1984). A model-based approach for online aiding and training in a process control task. Proceedings of IEEE International Conference on Cybernetics and Society, Nova Scotia.

Knaeuper, A.E., & Rouse, W.B. (1985). A rule-based model of human problem solving behavior in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(6), 708-719.

Messer, S.B. (1976). Reflection-impulsivity: A review. Psychological Bulletin, 83(6), 1026-1052.

Morris, N.M., & Rouse, W.B. (1985a). The effects of type of knowledge upon human problem solving in a process control task. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(6), 698-707. (Reprinted in A.P. Sage, Ed., 1987, System Design for Human Interaction. New York: IEEE Press.)

Morris, N.M., & Rouse, W.B. (1985b). Review and evaluation of empirical research in troubleshooting. Human Factors, 27(5), 503-530.

Morris, N.M., & Rouse, W.B. (1993). Human operators' response to error-likely situations in complex engineering systems. In W.B. Rouse, Ed., Human-Technology Interaction in Complex Systems (Vol. 6). Greenwich, CT: JAI Press.

Morris, N.M., Rouse, W.B., & Fath, J.L. (1985). PLANT: A process control simulation for study of human problem solving. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(6), 792-798.

Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic Books.

Rasmussen, J., & Rouse, W.B. (Eds.). (1981). Human detection and diagnosis of system failures. New York: Plenum Press.

Rouse, S.H., & Rouse, W.B. (1982). Cognitive style as a correlate of human problem solving performance in fault diagnosis tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(5), 649-652.

Rouse, W.B. (1978a). Human problem solving performance in a fault diagnosis task. IEEE Transactions on Systems, Man, and Cybernetics, SMC-8(4), 258-271.

Rouse, W.B. (1978b). A model of human decision making in a fault diagnosis task. IEEE Transactions on Systems, Man, and Cybernetics, SMC-8(5), 357-361.

Rouse, W.B. (1979a). Problem solving performance of maintenance trainees in a fault diagnosis task. Human Factors, 21(2), 195-203.

Rouse, W.B. (1979b). Problem solving performance of first semester maintenance trainees in two fault diagnosis tasks. Human Factors, 21(5), 611-618.

Rouse, W.B. (1979c). A model of human decision making in fault diagnosis tasks that include feedback and redundancy. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9(4), 237-241.

Rouse, W.B. (1981). Experimental studies and mathematical models of human problem solving performance in fault diagnosis tasks. In J. Rasmussen & W.B. Rouse (Eds.), Human detection and diagnosis of system failures (pp. 199-216). New York: Plenum Press.

Rouse, W.B. (1982-1983). A mixed-fidelity approach to technical training. Journal of Educational Technology Systems, 11(2), 103-115.

Rouse, W.B. (1983a). Models of human problem solving: Detection, diagnosis, and compensation for system failures. Automatica, 19(6), 613-625. (Reprinted in A.P. Sage, Ed., 1987, System Design for Human Interaction. New York: IEEE Press.)

Rouse, W.B. (1983b). Fuzzy models of human problem solving. In P.P. Wang (Ed.), Advances in fuzzy set theory and applications (pp. 377-386). New York: Plenum Press.

Rouse, W.B. (1985). Models of natural intelligence in fault diagnosis tasks: Implications for training and aiding maintenance personnel. In J.J. Richardson (Ed.), Artificial intelligence in maintenance (pp. 202-221). Park Ridge, NJ: Noyes Publications.

Rouse, W.B. (1991). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.

Rouse, W.B. (1995). Network models of human-machine interaction (pp. 109-126). In D. Batten, J. Casti, & R. Thord (Eds.), Networks in action. Berlin: Springer-Verlag.

Rouse, W.B. (2000). Managing complexity: Disease control as a complex adaptive system. Information Knowledge Systems Management, 2(2), 143-165.

Rouse, W.B. (2003). Engineering complex systems: Implications for research in systems engineering. IEEE Transactions on Systems, Man, and Cybernetics - Part C, 33(2), 154-156.

Rouse, W.B. (2007). Complex engineered, organizational, and natural systems: Issues underlying the complexity of systems and fundamental research needed to address these issues. Atlanta, GA: Tennenbaum Institute, Georgia Institute of Technology.

Rouse, W.B., & Hunt, R.M. (1984). Human problem solving in fault diagnosis tasks. In W.B. Rouse (Ed.), Advances in man-machine systems research (Vol. 1, pp. 195-222). Greenwich, CT: JAI Press.

Rouse, W.B., & Rouse, S.H. (1979). Measures of complexity of fault diagnosis tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9(11), 720-727. (Reprinted in B. Curtis, Ed., 1981, Human Factors in Software Development. Silver Spring, MD: IEEE Computer Society Press.)

Rouse, W.B., Rouse, S.H., & Pellegrino, S.J. (1980). A rule-based model of human problem solving performance in fault diagnosis tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-10(7), 366-376. (Reprinted in B. Curtis, Ed., 1981, Human Factors in Software Development. Silver Spring, MD: IEEE Computer Society Press.)

Zadeh, L.A., Fu, K.S., Tanaka, K., & Shimura, M. (Eds.). (1975). Fuzzy sets and their application to cognitive and decision processes. New York: Academic Press.
Chapter 6
DISPLAYS, CONTROLS, AIDING, AND TRAINING
INTRODUCTION

In recent decades, engineering research has evolved to focus more on engineering science rather than engineering design. A goal of mathematical rigor has driven how engineering problems are formulated and addressed. Not infrequently, problems are scaled down to fit the methods, rather than scaling up the methods to address the real problems (Rouse, 1985). The result has been the creation and deployment of powerful mathematical and computational methods and tools. Such methods and tools have been central to success in developing technological innovations ranging from microchips to advanced aircraft.

We have not been as successful with the human aspects of complex systems - our mathematical and computational methods and tools are not as advanced as they are in some purely technical domains (Mavor & Pew, 1997). The material in Chapters 2-5, in the original journal article forms, involved a wealth of mathematical and computational models. However, these methods and tools served as means for exploring and understanding human abilities and limitations, as well as approaches to enhance abilities and overcome limitations.

These methods and tools were not intended to become off-the-shelf elements of systems engineers' toolkits. This possibility is limited by the fact that humans in complex systems seldom play roles that are comparable to transistors or control actuators. In fact, were these roles so limited, humans would likely be replaced by automation. Consequently, understanding and supporting humans in systems cannot be fully addressed with mathematical and computational methods and tools. We seldom can deduce jobs, tasks, and activities from mathematical first principles. Instead, we have to design human roles and the means for
supporting these roles. As this chapter illustrates, we can draw upon a wealth of guidelines and frameworks for approaching this design effort. We also have to address evaluation since we seldom can mathematically prove the correctness of our designs.

The essential phenomenon in this chapter is design. This phenomenon involves both synthesis and analysis, the latter dominating engineering education for the past few decades. Synthesis includes the formulation of the design problem, representation of the phenomena associated with the problem, and creation of the means to control and otherwise support these phenomena to achieve the design objectives emerging from the formulation of the problem. Analysis, in contrast, involves manipulation of mathematical or computational instantiations of these representations and creations.

Chapters 7 and 8 address design and designers in a broad sense, encompassing a wide range of design domains and stakeholders. In this chapter, attention is limited to the operators and maintainers addressed in Chapters 2-5. The goal is to translate the research findings of these earlier chapters into guidelines and frameworks for designing displays, controls, aiding, and training.
SYSTEMS ENGINEERING

My first experiences related to human-machine interaction happened the summer after my sophomore year in college. I worked at Raytheon's Submarine Signal Division in the Reliability & Maintainability Department. This department included a Human Factors Group that focused on the human-machine issues associated with sonar operators. I did not learn much about human factors from this experience, except for the knowledge that they were part of the team that addressed the overall system.

Over my two years at Raytheon, I was fortunate to work in several of the groups and addressed issues of mechanical and electrical design, reliability and maintainability analysis, and even proposal preparation. This experience, in combination with my coursework in dynamic systems and control, started me in the "systems" direction that I have pursued ever since.

Systems engineering is a broad area, crossing many disciplines in engineering and elsewhere. When Andy Sage and I edited the first edition of the Handbook of Systems Engineering and Management (Sage & Rouse, 1999), we developed a knowledge map based on a content analysis of the 30 chapters in the handbook. This map includes five clusters of chapters:

• Five chapters focus on systems engineering and management perspectives.

• Ten chapters address systems engineering processes and systems management for definition, development, and deployment.

• Seven chapters emphasize systems engineering and management methods, tools and technologies.

• Six chapters attend to humans and organizations.

• One chapter deals primarily with economics - this chapter has the most linkages with all the other chapters.
Human and Organizational Aspects

This chapter addresses the human aspects of systems engineering. Organizational aspects are addressed in later chapters. Obviously, much of systems engineering is not addressed by this book at all. By the way, a new edition of the Handbook of Systems Engineering and Management is in the offing, which includes updates such as found in Sage and Rouse (2004a, b) as well as much new material.

Some readers may wonder why I do not refer to "human aspects" as human factors and/or ergonomics. The reason is simple. In my work with many executives and senior managers in companies and government agencies, I have found that they have much more appreciation of systems engineering and management than human factors and ergonomics. They see the former as broadly applicable to their issues and problems, and the latter as narrow and focused on human characteristics in and of themselves. I have found it much easier to sell projects by saying that my general area of expertise is systems engineering and management, and my specialty is the human and organizational aspects of systems.

In this chapter, I discuss the genesis and evolution of approaches to system design with emphasis on the human aspects of the system. This material is addressed in depth in Design for Success: A Human-Centered Approach to Designing Successful Products and Systems (Rouse, 1991a), as well as the eight-volume series, Human/Technology Interaction in Complex Systems (Rouse, 1984-1996). At various points throughout this chapter, I indicate specific parts of these books for more elaboration of the ideas discussed here.

It is also important to indicate that this chapter does not address the truly broad view of humans in systems such as covered by my colleagues and me elsewhere (Moray, et al., 1990; Booher & Rouse, 1990; Rouse, 1990b; Martin, et al., 1991). This includes topics such as the appropriateness of automation, social dimensions of technology, sociotechnical systems, and cultural impacts of technology. These topics receive more attention in later chapters of this book, although even those chapters do not fully reflect the philosophy outlined in these other treatments.
Cost/Benefit Analysis

The past decade has been a period of very serious scrutiny of the activities of most enterprises. Business processes have been reengineered and enterprises have been downsized or, more popularly, rightsized. Every aspect of an enterprise now must provide value to customers, earn revenues based on this value, and pay its share of costs. Aspects of an enterprise that do not satisfy these criteria are targeted for elimination.

This philosophy seems quite reasonable and straightforward. However, implementation of this philosophy becomes rather difficult when the "value" provided is such that anticipated benefits are not readily measurable in monetary units and only indirectly affect things amenable to monetary measurement. It can be very difficult to assess the worth of investments in such benefits.

There is a wealth of examples of such situations. With any reasonable annual discount rate, the tangible discounted cash flow of benefits from investments in libraries and education, for example, would be so small as to make it very difficult to justify societal investments in these institutions and activities. Of course, we feel quite justified arguing for such investments. Thus, there obviously must be more involved in such an analysis than just discounted cash flow.

This section addresses investments in human effectiveness that enhance abilities, overcome limitations, and foster acceptance. This includes selection, training, system design, job design, organizational development, health and safety, and, in general, the wide range of things done to assure and enhance the effectiveness of people in organizations ranging from businesses to military units. Investments focused on
increasing human potential, rather than direct job performance outputs, are much more difficult to justify than those with near-term financial returns (Rouse, Kober, & Mavor, 1997).

A central issue relates to the preponderance of intangible outcomes for these types of investments. For example, investments in training may enhance leadership skills of managers or commanders. Investments in organizational development can improve the cohesiveness of "mental models" of management teams or command teams, and enhance the shared nature of these models. However, it is difficult to capture fully such impacts in terms of tangible, "bottom line" metrics.

Another issue concerns cost/benefit analyses across multiple stakeholders. Most companies' stakeholders include customers, shareholders, employees, suppliers, communities, etc. Government agencies often have quite diverse socio-political constituencies who benefit - or stand to lose benefits - in a myriad of ways depending on investment decisions. For example, government-sponsored market research may be part of a regional economic development plan or may be part of a broader political agenda focused on creating jobs. In general, diverse constituencies are quite likely to attempt to influence decisions in a variety of ways. These situations raise many basic questions relative to the importance of benefits and costs for the different stakeholders.

There is a variety of frameworks for scrutinizing and justifying investments, including:

• Cost/benefit analysis: Methods for estimating and evaluating time sequences of costs and benefits associated with alternative courses of action.

• Cost/effectiveness analysis: Methods for estimating and evaluating time sequences of costs and multi-attribute benefits to assure that the greatest benefits accrue for given costs.

• Life-cycle costing: Methods for estimating and evaluating costs of acquisition, operation, and retirement of alternative solutions over their total cycles of life.

• Affordability analysis: Methods for estimating and evaluating life-cycle costs compared to expected acquisition, operations, and maintenance budgets over the total life cycle of alternative investments.
• Return on investment analysis: Methods for projecting the ratio, expressed as a percentage, of anticipated free cash flow to planned resource investments.

Cost/benefit analyses are very straightforward when one considers fixed monetary investments made now to earn a known future stream of monetary returns over some time period. Things get much more complicated, however, when investments occur over time, some of which may be discretionary, and when returns are uncertain. Further complications arise when one must consider multiple stakeholders' preferences regarding risks and rewards. Additional complexity is added when returns are indirect and intangible rather than purely monetary.

Traditional Economic Analysis. The time value of money is the central concept in this traditional approach. Resources invested now are worth more than the same amounts gained later. This is due to the costs of the investment capital that must be paid, or foregone, while waiting for subsequent returns on the investment. The time value of money is represented by discounting the cash flows produced by the investment to determine a net present value that reflects the interest that would, in effect at least, have to be paid on the capital borrowed to finance the investment.

Option Pricing Theory. Many investment decisions are not made all at once. Instead, initial investments are made to create the potential for possible future and usually larger investments involving much greater benefits than likely for the initial investments. For example, investments in R&D are often made to create the intellectual property and capabilities that will support or provide the opportunity to subsequently decide whether or not to invest in launching new products or services. These launch decisions are contingent on R&D reducing uncertainties and risks, as well as further market information being gained in the interim between the R&D investment decision and possible launch decision. In this way, R&D investments amount to purchasing options to make future investments and earn subsequent returns. These options, of course, may or may not be exercised. Option pricing models are discussed in more detail in Chapter 9.

Knowledge Capital Approach. Tangible assets and financial assets usually yield returns that are important elements of a company's overall earnings. It is often the case, however, that earnings far exceed what might be
expected from these "hard" assets. For example, companies in the software, biotechnology, and pharmaceutical industries typically have much higher earnings than companies with similar hard assets in the aerospace, appliance, and automobile industries, to name just a few. It can be argued that these higher earnings are due to greater knowledge capital among software companies, etc. However, since knowledge capital does not appear on financial statements, it is very difficult to identify and, better yet, project knowledge earnings.

Multi-Attribute Utility Models. Cost/benefit calculations become more complicated when benefits are not readily transformable to economic terms. Benefits such as safety, quality of life, and aesthetic value are very difficult to translate into strictly monetary values. Multi-attribute utility models provide a means for dealing with situations involving mixtures of economic and non-economic attributes. Multi-stakeholder, multi-attribute models are elaborated in Chapter 8.

Comparison of Frameworks. A comparison of the four frameworks is presented in our fuller treatments of cost/benefit analysis (Rouse & Boff, 1997, 2003, 2005). Traditional economic analyses are clearly the most narrow. However, in situations where they apply, these analyses are powerful and useful. Option pricing theory seems to be a natural extension of traditional methods that enables handling these limitations. The knowledge capital approach provides another, less mathematical, way of capturing the impacts of investments in human effectiveness. The difficulty of this approach is that it does not address the potential impacts of alternative investments. Instead, it serves to report the overall enterprise score after the game. Multi-attribute utility models can -- in principle -- address the full range of complications and complexity associated with investments in human effectiveness. This is supported by the fact that multi-attribute models can incorporate metrics such as discounted cash flow, option value, and knowledge capital as attributes within the overall model - indeed, the special case of one stakeholder, linear utility functions, and net present value as the sole attribute is equivalent to traditional financial analysis. Different stakeholders' preferences for these metrics can then be assessed and appropriate weightings determined. Thus, use of multi-attribute models does not preclude also taking advantage of the other approaches; the four approaches, therefore, can be viewed as complementary rather than competing.
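Since the traditional approach anchors everything that follows, a minimal sketch of discounted cash flow may be helpful. The cash flows and discount rate below are hypothetical; the point is simply that intangible benefits never enter such a calculation.

```python
# Minimal sketch of traditional discounted cash flow analysis.
# The cash flows and discount rate below are hypothetical examples.

def net_present_value(cash_flows, discount_rate):
    """Discount a stream of cash flows (cash_flows[0] occurs now)."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# A $100K investment now, returning $30K per year for five years.
flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
print(f"NPV at 10%: {net_present_value(flows, 0.10):,.0f}")  # ~13,724
```

In a multi-attribute model, a value such as this would simply become one attribute among several, weighted against non-monetary benefits.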
Cost/Benefit Methodology. Cost/benefit analysis should always be pursued in the context of particular decisions to be addressed. A valuable construct for facilitating an understanding of the context of an analysis is the value chain from investments to returns. More specifically, it is quite helpful to consider the value chain from investments (or costs), to products, to benefits, to stakeholders, to utility of benefits, to willingness to pay, and finally to returns on investments. This value chain can be depicted as:

investments (costs) → resulting products over time
products over time → benefits of products over time
benefits over time → range of stakeholders in benefits
range of stakeholders → utility of benefits to each stakeholder
utility to stakeholders → willingness to pay for utility gained
willingness to pay → returns to investors
The process starts with investments that result -- or will result -- in particular products over time. Products need not be end products - they might be knowledge, skills, or technologies. These products yield benefits, also over time. A variety of people -- or stakeholders -- have a stake in these benefits. These benefits provide some level of utility to each stakeholder. The utility perceived -- or anticipated -- by each stakeholder affects their willingness to pay for these benefits. Their willingness to pay affects their "purchase" behaviors that result in returns for investors.

Over the past decade, Ken Boff and I have applied and refined the seven-step methodology summarized in Table 1. This methodology reflects the above value chain.

Step 1: Identify Stakeholders. The first step involves identifying the stakeholders who are of concern relative to the investments being entertained. Usually this includes all of the people in the value chain summarized earlier. This might include, for example, those who will provide the resources that will enable a solution, those who will create the solution, those who will implement the solution, and those who will benefit from the solution.
Step 1: Identify stakeholders in alternative investments
Step 2: Define benefits and costs of alternatives in terms of attributes
Step 3: Determine utility functions for attributes (benefits and costs)
Step 4: Decide how utility functions should be combined across stakeholders
Step 5: Assess parameters within utility models
Step 6: Forecast levels of attributes (benefits and costs)
Step 7: Calculate expected utility of alternative investments

Table 1. Cost/Benefit Analysis Methodology
Step 2: Define Benefit and Cost Attributes. The next step involves defining the benefits and costs involved from the perspective of each stakeholder. These benefits and costs define the attributes of interest to the stakeholders. Usually, a hierarchy of benefits and costs emerges, with more abstract concepts at the top, for example, viability, acceptability, and validity (Rouse, 1991a), and concrete measurable attributes at the bottom.

Step 3: Determine Stakeholders' Utility Functions. The value that stakeholders attach to these attributes is defined by stakeholders' utility functions. The utility functions enable mapping disparate benefits and costs to a common scale. A variety of techniques available for assessing utility functions are discussed in Chapter 8.

Step 4: Determine Utility Functions Across Stakeholders. Next, one determines how utility functions should be combined across stakeholders. At the very least, this involves assigning relative weights to different stakeholders' utilities. Other considerations such as desires for parity can make the ways in which utilities are combined more complicated. For
example, the multi-stakeholder, multi-attribute utility function may require interaction terms to assure all stakeholders gain some utility.

Step 5: Assess Parameters of Utility Functions. The next step focuses on assessing parameters within the utility models. For example, utility functions that include diminishing or accelerating increments of utility for each increment of benefit or cost involve rate parameters that must be estimated. As another instance, the weights for multi-stakeholder utility functions have to be estimated. Fortunately, there is a variety of standard methods for making such estimates.

Step 6: Forecast Levels of Attributes. With the cost/benefit model fully defined, one next must forecast levels of attributes or, in other words, benefits and costs. Thus, for each alternative investment, one must forecast the stream of benefits and costs that will result if this investment is made. Quite often, these forecasts involve probability density functions rather than point forecasts. Utility theory models can easily incorporate the impact of such uncertainties on stakeholders' risk aversions. On the other hand, information on probability density functions may not be available, or may be prohibitively expensive to obtain. In these situations, beliefs of stakeholders and subject matter experts can be employed, perhaps coupled with sensitivity analysis (see Step 7) to determine where additional data collection may be warranted.

Step 7: Calculate Expected Utilities. The final step involves calculating the expected utility of each alternative investment. These calculations are performed using specific forms of utility functions. This step also involves using sensitivity analysis to assess, for example, the extent to which the rank ordering of alternatives, by overall utility, changes as parameters and attribute levels of the model are varied.

Use of the Methodology. Some elements of the cost/benefit methodology just outlined are more difficult than others. The overall calculations are quite straightforward. The validity of the resulting numbers depends, of course, on stakeholders and attributes having been identified appropriately. It further depends on the quality of the inputs to the calculations. These inputs include estimates of model parameters and forecasts of attribute levels. The quality of these estimates is often compromised by lack of available data. Perhaps the most difficult data collection problems relate to situations where the impacts of investments are both uncertain and
very much delayed. In such situations, it is not clear which data should be collected and when they should be collected.

It is useful to consider how this cost/benefit methodology should affect decision making. To a very great extent, the purpose of this methodology is to get the right people to have the right types of discussions and debates on the right issues at the right time. If this happens, the value of people's insights from exploring the multi-attribute model usually far outweighs the importance of any particular numbers.

The practical implications of this conclusion are quite simple. Very often, decision making happens within working groups who view computer-generated, large-screen displays of the investment problem formulation and results as they emerge. Such groups perform sensitivity analyses to determine the critical assumptions or attribute values that are causing some alternatives to be more highly rated or ranked than others. They use "What if...?" analyses to explore new alternatives, especially hybrid alternatives. This approach to investment decision making helps to substantially decrease the impact of limited data being available. Groups quickly determine which elements of the myriad of unknowns really matter, where more data are needed, and where more data, regardless of results, would not affect decisions. A robust problem formulation that can be manipulated, redesigned, and tested for sanity provides a good way for decision making groups to reach defensible conclusions with some level of confidence and comfort.
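To make Steps 4-7 concrete, here is a toy sketch under assumed stakeholder weights, utility functions, and attribute forecasts; everything numeric below is invented for illustration and is not the actual tool used in these engagements.

```python
# Toy sketch of Steps 4-7: combine stakeholder utilities, compute expected
# utility of each alternative, and check sensitivity to stakeholder weights.
# All weights, utility forms, and forecasts below are hypothetical.
import math

alternatives = {
    # forecast attribute levels: (benefit to operators, benefit to managers)
    "invest_in_training": (0.8, 0.4),
    "invest_in_aiding":   (0.5, 0.7),
}
weights = {"operators": 0.6, "managers": 0.4}  # Step 4: relative weights

def utility(x, rate=2.0):
    """Step 5: diminishing-returns utility with an assumed rate parameter."""
    return 1.0 - math.exp(-rate * x)

def expected_utility(levels):
    """Step 7: weighted sum of stakeholder utilities."""
    op, mg = levels
    return weights["operators"] * utility(op) + weights["managers"] * utility(mg)

for name, levels in alternatives.items():
    print(name, round(expected_utility(levels), 3))

# Simple sensitivity analysis: does the ranking flip as weights vary?
for w_op in (0.3, 0.5, 0.7):
    weights = {"operators": w_op, "managers": 1.0 - w_op}
    best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
    print(f"operator weight {w_op}: best = {best}")
```

With these invented numbers the preferred alternative flips as the operator weight varies, which is exactly the kind of insight the sensitivity analyses in working-group sessions are meant to surface.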
Modeling Human Behavior and Performance

Step 6 of the cost/benefit analysis methodology involves predicting attribute levels. For investments intended to enhance human effectiveness, the attributes of interest relate to human behavior and performance. Also usually of great interest are the consequences of human behavior and performance for overall system performance.

In the late 1970s, working with Danny Gopher and Gunnar Johannsen, we published two broad reviews of models of human-machine interaction (Rouse & Gopher, 1977; Johannsen & Rouse, 1979). I also published an article reviewing alternative representations of human-machine interaction, with nine levels ranging from biological to information processor to creator. These representations mapped to seven classes of models from
information processor to controller to problem solver and planner (Rouse, 1980a). These reviews also addressed the limitations of models. Writing these articles prompted the broad thinking that led to my first book, Systems Engineering Models of Human-Machine Interaction (Rouse, 1980b). This book is methodologically organized in terms of theories of estimation, control, queueing, fuzzy sets, and production systems from artificial intelligence. Each chapter included the elements of the theory, expressed algebraically, and several applications drawn from our group's research as well as that of many others in the community.

Throughout the 1980s, I taught a graduate course at the University of Illinois and Georgia Tech using this slim book. The orange cover chosen by the publisher (Elsevier) led many students to refer to the book as the "orange peril." The book was flush with equations. However, the only math required was algebra. The difficulty was that few graduate students had encountered all these theories within their undergraduate education. Some had been introduced to estimation theory, others had taken control theory, and so on. This difficulty was manifested quite strongly with assignments where students could apply any model -- or combinations of models -- they thought would provide insights and guidance into a complex human-machine systems problem. Inevitably, a few graduate students would ask about the "right answer" to the question. They were often unsatisfied that there were many good answers, many more bad answers, and no "right" answers. Despite these difficulties, the "orange peril" continued selling for two decades and only went out of print when the publisher sold the book series to another publisher.

Although I did not realize it at the time, people's reactions to this book were my first encounter with the reality that engineers needed - or at least desired - to have phenomena expressed in a representation with which they felt comfortable. Crossing traditional disciplinary boundaries stretched people a bit more than they found comfortable.

I was reasonably contented with the various theories expounded in this book in the sense that I thought they could capture a wide range of phenomena associated with human-machine interaction. This contentment evaporated as my relationship with Jens Rasmussen grew. Our collaboration on Human Detection and Diagnosis of System Failures (Rasmussen & Rouse, 1981), as well as the research that led to his subsequent books (Rasmussen, 1986, 1994), profoundly affected the thinking of our group.
The impact of Jens on our work is captured in my book chapter in his 60th birthday festschrift (Rouse, 1988c). This chapter, "Ladders, Levels, and Lapses: And Other Distinctions in the Contributions of Jens Rasmussen," played on the title of his well-known article on "Signals, Signs, and Symbols." Ladders refer to his inverted-V diagram of how people perceive stimuli and then respond. Levels relates to his notions of skill-, rule-, and knowledge-based information processing. Lapses connote his research on human error.

The perspective on modeling provided by Jens' seminal work caused me to realize that, despite my efforts to capture all of mathematics in algebra, there were essential phenomena that simply could not be captured in sets of equations, algebraic or otherwise. This thinking still pervades our research group, despite the fact that we now are concerned with executives and senior managers rather than operators and maintainers.

More recently (Rouse, 1995), I have reflected on the commonality of the representations discussed in this chapter, as well as the earlier chapters in this book. My participation in John Casti's community of network-oriented thinkers led me to summarize and contrast a range of network models of equipment (discussed below), equipment and tasks (e.g., mental models and plan-goal graphs discussed in Chapters 2 and 4), and equipment, tasks, and teams (e.g., team mental models discussed in Chapter 2). Network representations are indeed pervasive.

As I reflect on 25 years of modeling human-machine interaction, my thoughts are that the process of understanding human behavior and performance provides substantially more value than any particular predictions resulting from the range of models discussed. From this perspective, our goal should not be to provide Newton's or Maxwell's equations for humans - this is both too difficult and the wrong objective. Instead, we should focus on modeling frameworks - such as provided by Jens Rasmussen - that help designers to think about human behavior and performance in the right ways. With the right thinking, designers will devise good ways of enhancing human abilities, overcoming human limitations, and fostering human acceptance.
Modeling Mental Workload

Beyond people's abilities to behave and perform so as to achieve overall system objectives, there is often the question of the extent to which humans can sustain such behavior and performance. This question is often couched
in terms of effort or mental workload. More specifically, what levels of effort or mental workload are required to sustain performance, and can humans maintain these levels?

I first delved into this issue in preparation for the NATO Conference on Mental Workload, chaired by Neville Moray in Greece in 1977. My chapter in the eventual book compared measures based on information theory (bits/second and fraction of attention), control theory (fraction of attention), and queueing theory (queue utilization and fraction of attention), as well as related measures of human performance, physiological states, and subjective perceptions. I also attempted a qualitative, albeit brief, integration of these perspectives (Rouse, 1979a).

Our efforts in this area remained secondary for the next decade or so. It was not that we felt the issue was unimportant. More to the point, the field was too crowded, with numerous universities and government research organizations exploring a wide range of mental workload paradigms. Frankly, I have always tended to avoid crowded research areas. It can be very hard to move beyond the fray and gain attention for one's efforts and results.

In the late 1980s and early 1990s, we were drawn back to this topic. Our work on failure diagnosis (discussed in detail in Chapter 5) included studies led by Nancy Morris investigating how human operators deal with mental workload when controlling dynamic processes in error-likely situations (Morris & Rouse, 1993). As indicated in Chapter 5, when operators experience an increase in perceived effort above some acceptable threshold, they attempt to reduce the level of perceived effort by controlling the situation or controlling themselves. The design implication is that people need means to adapt in these ways.

Nancy Morris' results clearly portrayed the dynamics of mental workload. This suggests that adaptive aiding (see Chapter 4) should be sensitive to current levels, and changes of levels, of mental workload. This led Sharon Edwards, John Hammer, and me to explore the identifiability of dynamic workload models for the purpose of adaptive aiding (Rouse, Edwards & Hammer, 1993). One workload model was second order, based on the correlation model results found by Morris and Rouse (1993). The second model was the heuristic model of Morris and Rouse. We developed a linear identifier that worked well for the first model, even with significant noise, but only modestly for the heuristic model. The results of these two sets of studies showed that adapting levels of aiding to current mental workload, and its rate of change, makes considerable sense in terms of enhancing overall system performance.
While the identification methods studied have significant limitations, the results were sufficient to demonstrate the potentially great impact. From a design perspective, the key conclusion is that static, point solutions (for instance, no automation vs. full automation) are unlikely to be the best solutions.
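The workload models and identifier from Rouse, Edwards, and Hammer (1993) are not reproduced here, but the sketch below illustrates the general idea of linearly identifying a second-order dynamic workload model from noisy observations; the model form and parameters are invented for illustration.

```python
# Illustrative sketch: least-squares identification of a second-order
# discrete-time workload model w[k] = a1*w[k-1] + a2*w[k-2] + b*u[k-1] + noise,
# where u is task demand. Model form and parameters are assumptions, not
# those of Rouse, Edwards & Hammer (1993).
import numpy as np

rng = np.random.default_rng(0)
a1, a2, b = 1.2, -0.4, 0.5           # "true" dynamics (a stable pair of poles)
u = rng.uniform(0, 1, 500)           # task demand over time
w = np.zeros(500)
for k in range(2, 500):
    w[k] = a1 * w[k-1] + a2 * w[k-2] + b * u[k-1] + rng.normal(0, 0.05)

# Stack regressors and solve for the coefficients via least squares.
X = np.column_stack([w[1:-1], w[:-2], u[1:-1]])
y = w[2:]
est, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated (a1, a2, b):", np.round(est, 3))
```

With a few hundred samples, the estimates land close to the true coefficients despite the noise, which is the sense in which such a model is "identifiable" and could, in principle, drive adaptive aiding.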
SYSTEM DESIGN

Design related to the human elements of complex systems has received considerable attention since World War II. Several design handbooks have been published since the 1960s. The most up-to-date offering is Gav Salvendy's Handbook of Human Factors and Ergonomics (1997, 2005). While it was published two decades ago, Ken Boff's Engineering Data Compendium (Boff & Lincoln, 1986) is notable for the intense effort invested in creating engineering-related tabulations and plots designed to be immediately useful and usable by designers. Hal Booher's Handbook of Human Systems Integration (Booher, 2003) is particularly valuable in that it addresses design, development, acquisition, and fielding of complex systems.

The idea of packaging human-machine systems information -- concepts, principles, methods, tools, and data -- for use by designers is a rather obvious notion. Surprisingly, however, we have found that many publications in this domain are primarily targeted at readers who are researchers rather than designers. Perhaps not so surprisingly, researchers tend to write for their peers rather than the supposed users of their research results.

As noted in Chapter 1, Ken Boff and I became attracted to this phenomenon in the early 1980s. We both were motivated by the very real disconnect between the stated aspirations of many R&D organizations and the actual knowledge, technologies, methods, and tools transitioning to design practice. This realization emerged as we, at first independently, tried to codify research outputs as inputs to design practice. This serendipitous alliance blossomed quickly and continues today, more than 25 years later. Our studies of design are discussed in Chapters 7 and 8.

My interest in design was also advanced by another serendipitous contact. In 1988, I was contacted by an individual who wanted to know whether I would be willing to travel to an unnamed foreign country to give a series of lectures on human-machine interaction. Given that Search Technology at that time had several large defense contracts involving
security provisions, I answered, “No, I would not travel to an unnamed foreign country.” Soon after, I was contacted by Joe Bitran who clarified that the country was South Africa and the lectures would be at the University of Pretoria. He indicated that his Israeli colleague tended to be a bit reserved in sharing information, but that this was a fully above-board inquiry. I agreed to travel to South Africa to discuss and plan the lecture series. During this visit I learned that they needed an up-to-date primer on the types of research discussed in Chapters 2-5 of this book, with a strong design orientation. As a result, I prepared a five-day short course on human-centered design. I first delivered this course in early 1989. Various colleagues at places like Boeing and NASA heard about this short course and I soon after delivered this course to these organizations. After quite a few deliveries, I formalized the material in Design for Success (Rouse, 1991a, 2001). This was serendipitous in that I never intended to write a book on this topic. Clients and students’ questions, comments, and suggestions drove the emergence of this book, as well as several books that followed.
Human/Machine Interaction

Design for Success was not my first attempt to integrate a wide range of material in terms of design guidance, methods, and tools. I moved from a visiting junior faculty position at Tufts University to a regular junior faculty position at the University of Illinois at Urbana-Champaign in early 1974. It was immediately clear that moving up the tenure track would require that I establish a reputation as a thought leader in some area of importance.

At that time, we were on the verge of computing becoming more pervasive, with relatively low-cost minicomputers soon to be followed in the early 1980s by microcomputers. Human-computer interaction was a hot topic, with interactive computing dominated by DEC (Digital Equipment Corporation). I prepared a review paper, "Design of Man-Computer Interfaces for On-line Interactive Systems," that addressed human-computer stereotypes, displays and input devices, visual information processing, and mathematical models (Rouse, 1975). The paper appeared in the top electrical engineering journal, Proceedings of the IEEE. With its 175 references, it was frequently cited in other engineering journals for quite a few years.
My sense was, and continues to be, that this type of article is a good way for a young assistant professor to gain professional attention beyond the handful of researchers who work in the same subdisciplinary niche. Preparation of this article also forced me to do my homework in order to write it and pass muster with reviewers. Over the years, I have written many papers with 50-100 references and value the process of seeing connections and distinctions among the works of others.

Another important community involves behavioral and social scientists in general, and human factors and engineering psychology professionals in particular. Two notable efforts in this direction included a chapter in an undergraduate textbook that addressed human factors as a key element of system design - within a model of the system design process. Issues discussed include automation, electronic displays, and failure detection and diagnosis (Rouse, 1979b). The other effort was a report that Chris Wickens and I prepared on the role of human factors in military R&D, with topics ranging from human information processing to engineering design (Wickens & Rouse, 1987).

This report was based on my first experience speaking to a group of congressional staff members. I anticipated this experience and expected it to have great significance. After my presentation, I ate lunch with several staffers, eager to get their reactions to my material. I learned an important lesson when I asked one staffer what he thought about the points I had made. He replied, "Great lunch!" Listening to me was the price he had paid for admission to a free lunch.

Another effort to communicate design guidance focused on industrial and production engineers (Rouse, 1987a). This handbook chapter focused on authority/responsibility, information processing, input/output, and environmental compatibility. Input/output addressed sensing, affecting, displays, and controls. Information processing was concerned with human information processing, manual and supervisory control, decision making and problem solving, and human-computer interaction. Design issues considered included operability, maintainability, supportability, staffing, selection, training, and aiding.

My experience in preparing and communicating these treatises on the nature of human-machine (or computer) interaction is that it is difficult to provide more than a broad appreciation for central issues and how they are best addressed. Ken Boff, Chris Wickens, and Gav Salvendy are much better prepared to discuss how best to deal with the details of these issues. The remainder of this section on system design discusses how our research
informed the best ways to address several specific design issues that have interested me over many years.
Displays and Controls

Central human-machine interaction design issues concern what information you should display, how you should display it (e.g., visually, aurally, etc.), what inputs you should ask of humans, and how they should provide these inputs (e.g., joysticks, keyboards, etc.). Addressing these issues involves consideration of a wealth of factors far beyond the scope of this chapter; for interested readers, see the detailed display design handbook for the nuclear power industry developed by Paul Frey, Bill Sides, Russ Hunt, and me (Frey, et al., 1984).

As an example of a detailed display design issue, Ron Small and John Hammer conducted an extensive study of the differences between analog and digital symbols for air traffic control displays (Small, Hammer & Rouse, 1997). The crisper digital symbols led to faster target recognition, and were preferred by controllers. There are an enormous number of focused studies dealing with these types of specific questions.

In the broader arena of mental models, mental workload, human error, failure detection, failure diagnosis, and automation, we have reviewed and researched displays and controls for complex systems, with emphasis on multi-page displays (Rouse, 1986b). Building on the research of Dick Henneman discussed in Chapter 5, we focused on hierarchical multi-page electronic displays. Our idea was to design these types of displays using the aggregation and abstraction hierarchies developed by Jens Rasmussen (1986, 1994).

Paul Frey addressed very large graphics on small screens for computer-based maintenance information by creating different displays for different levels in the aggregation-abstraction hierarchy. Three experiments were conducted that focused on the helicopter maintenance displays preferred by experienced maintenance personnel. We found that subjects used high abstraction displays for "thinking" tasks and low abstraction displays for "doing" tasks. Errors in display selection appeared to be due to difficulties getting to the displays they wanted (Frey, Rouse & Garris, 1992). The next experiments focused on maintenance performance rather than just display preference (Frey, Rouse & Garris, 1993). The fourth experiment indicated that performance was influenced by the availability of high abstraction displays and the influence depended on the experience
level of the maintainer - low experience level led to poorer performance with high abstraction levels. The fifth experiment found that maintenance performance (time to solution and simulated maintenance performance) improved by increasing either abstraction or aggregation over baseline displays. Almost 90% of personnel indicated new displays would improve performance and almost two-thirds indicated these displays would provide much improvement.

The resulting display system was called BGLS (Big Graphics and Little Screens). In light of the popularity of BGLS with both trainers and trainees at our experimental site - Jacksonville Naval Air Station - we attempted to provide a system to the training unit. Both the computer and software would have come at no cost, as the Office of Naval Research had already purchased these assets. However, this deployment was prohibited because the BGLS system was not an approved Navy training device. None of the participants, including us, had the resources or the mandate to seek this approval. The opportunity was lost.

On the controls side, hierarchical multi-page displays were, in the past, accessed by command languages. Now, of course, much of this access control involves windowed displays and mouse clicks. Early on, however, people had to learn command languages with mnemonics such as DPN for Display Page N. John Hammer conducted a very interesting study of command language use for word processors. He modeled the choice of commands for intraline text editing, assuming that a human chooses optimal sequences of commands given the commands the human knows. John found that humans, in general, employ a small subset of available commands and that this greatly increases the number of keystrokes needed to accomplish their editing goals (Hammer & Rouse, 1982). These results argue for display editors which, of course, are now the standard. However, the issue of primary interest is not editors, but instead humans' understanding and use of command languages, which affect performance in many complex systems. This becomes particularly important when typical users of information systems are far from experts.

This brief discussion of displays and controls relates to my studies of system dynamics and control in both undergraduate and graduate degree programs. During my graduate studies at MIT, I was immersed in formal models and methodologies associated with system dynamics and control. This resulted in exactly one research paper involving the design of nonlinear controllers due to performance indices that include quadratic plus
quartic elements - the latter to assure very rapid response for key variables (Rouse & Garg, 1973). This experience showed me that departing, even modestly, from the straight and narrow linear quadratic Gaussian assumptions makes everything much more complicated. If we feel compelled to address the full reality of an overall complex systems problem, the formalisms of system dynamics and control may have limited, albeit non-zero, benefits.

Despite this conclusion, control systems models are woven into my views of the world. Thus, for example, in Henk Stassen's festschrift, I argued for control models of design processes, with implications for supporting designers via a Designer's Associate (see Chapter 7) and an Enterprise Support System (see Chapter 11) (Rouse, 1997). This fits into Ken Boff's and my quest to understand the nature of designers and design (Rouse & Boff, 1987).

Design of Aiding

Chapters 2-5 discussed many examples of aiding - system functionality focused on directly augmenting human performance. With a goal of synthesizing guidance for the design of aiding, we reviewed a large variety of reports and papers on aids for operators and maintainers of a wide range of systems, as well as numerous methods and tools for supporting designers and managers. We concluded that most aids support one or more of the 13 general human-system tasks listed in Table 2 (Rouse, 1986a, 1988b, 1991a).

Much of human behavior and performance in complex systems involves execution of informal or formal procedures, and monitoring their execution, in situations that can be characterized as familiar and frequent. When deviations from expectations become sufficient, humans will reject these deviations as unacceptable. Now they must seek information to explain the deviations. In frequent and familiar situations (see Table 1 in Chapter 5), they find sufficient information in the pattern of system outputs to jump immediately from Task 5 to Task 13. For example, the combination of noise and steering difficulties leads immediately to the conclusion that they have a blown tire.

When situations are not familiar and/or not frequent, Tasks 6-12 come into play as people search for information, an explanation of the situation, and a course of action. Gary Klein and I have debated for years the extent to which these tasks are evoked (Klein, 1993, 1998, 2002). For much of
life, pattern recognition and intuition have to be the primary source of courses of action; otherwise, life would be indeed very complicated. However, there are many situations - infrequent and/or unfamiliar - where relying solely on pattern recognition and intuition can lead one astray.
Execution & Monitoring
  1. Implementation of plan
  2. Observation of consequences
  3. Evaluation of deviations from expectations
  4. Selection between acceptance and rejection

Situation Assessment: Information Seeking
  5. Generation/identification of alternative information sources
  6. Evaluation of alternative information sources
  7. Selection among alternative information sources

Situation Assessment: Explanation
  8. Generation of alternative explanations
  9. Evaluation of alternative explanations
  10. Selection among alternative explanations

Planning & Commitment
  11. Generation of alternative courses of action
  12. Evaluation of alternative courses of action
  13. Selection among alternative courses of action

Table 2. General Human-System Tasks
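The preceding discussion can be read as a simple selection rule over Table 2. The sketch below encodes it; the binary familiar/frequent switch is a simplification of the richer discussion in Chapter 5, adopted here only for illustration.

```python
# Sketch of which Table 2 tasks are evoked, per the discussion above: in
# familiar and frequent situations, people jump from Task 5 directly to
# Task 13; otherwise, Tasks 6-12 come into play. The binary switch is a
# simplification for illustration.

def tasks_evoked(familiar: bool, frequent: bool) -> list[int]:
    if familiar and frequent:
        return [1, 2, 3, 4, 5, 13]     # pattern-recognition shortcut
    return list(range(1, 14))          # full search for information,
                                       # explanation, and courses of action

print(tasks_evoked(True, True))    # blown-tire case: straight to selection
print(tasks_evoked(False, False))  # unfamiliar, infrequent: all 13 tasks
```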
The demands of the tasks in Table 2 depend on the context, as illustrated in Table 3. While we do not discuss managers and designers until Chapters 7-11, these types of humans are included in Table 3 to illustrate the contrast. Implementation and observation are much more demanding for operators and maintainers than they are for managers and designers. Selecting information sources, explanations, and courses of action are more demanding for managers and, to an extent, designers, than they are for operators and maintainers. From a design perspective, Table 3 provides a starting point for assessing where the humans of interest may need support. Table 4 suggests how to provide this support.
Human-System Tasks     Operations   Maintenance   Management   Design
1. Implementation      High         High          Low          Low
2. Observation         High         Moderate      Low          Low
3. Evaluation          Moderate     Moderate      Moderate     Moderate
4. Selection           Moderate     High          High         Moderate
5. Generation          Low          Low           High         Moderate
6. Evaluation          Moderate     Low           High         High
7. Selection           Moderate     Moderate      Moderate     High
8. Generation          Moderate     Moderate      High         Low
9. Evaluation          Low          Moderate      High         Low
10. Selection          Low          Moderate      Moderate     Low
11. Generation         Low          Low           High         High
12. Evaluation         Moderate     Moderate      High         High
13. Selection          Moderate     Moderate      Moderate     Moderate

Table 3. Demands of Human-System Tasks
Support for Generation of Alternatives
  1. Situation → Previous Alternatives
  2. Attributes → Relevant Alternatives
  3. Assessment → New Alternatives

Support for Evaluation of Alternatives
  4. Alternatives → Characteristics
  5. Situation & Alternatives → Applicability
  6. Alternatives → Consequences
  7. Multiple Alternatives → Comparison
  8. Assessment → New Methods

Support for Selection Among Alternatives
  9. Criteria & Alternatives → Best Selections
  10. Assessment → New Criteria

Support for Inputs to Humans
  11. Information → Transform, Format & Code
  12. Evaluated Information → Filter & Highlight
  13. Sampled Information → Model
  14. Constraints & Individual Differences → Adapt

Support for Outputs from Humans
  15. Plan → Monitor for Inconsistencies
  16. Plan & Intentions → Execute
  17. Intentions & Resources → Adapt

Table 4. Approaches to Supporting Human-System Tasks
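To make one of these support types concrete, here is a toy sketch of row 12, Evaluated Information → Filter & Highlight. The alarm data and thresholds are invented examples, not from any fielded system.

```python
# Toy illustration of one Table 4 support type: Evaluated Information ->
# Filter & Highlight. Alarm data and thresholds are invented examples.
alarms = [
    {"name": "feed pump vibration", "priority": 3},
    {"name": "coolant temp high",   "priority": 9},
    {"name": "filter dP rising",    "priority": 6},
]

def filter_and_highlight(alarms, floor=5):
    """Drop low-priority alarms; flag the most urgent one for the operator."""
    shown = [a for a in alarms if a["priority"] >= floor]
    shown.sort(key=lambda a: -a["priority"])
    return [("** " + a["name"] + " **") if i == 0 else a["name"]
            for i, a in enumerate(shown)]

print(filter_and_highlight(alarms))  # ['** coolant temp high **', 'filter dP rising']
```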
The entries in this table were gleaned from a large number of reports and articles on systems for supporting a wide range of personnel (Rouse & Rouse, 1983; Rouse, et al., 1984). These possibilities are explained in detail in Chapter 8 of Design for Success (Rouse, 1991a). Sage and Rouse (1986) provide a broad interdisciplinary view of alternative approaches to aiding. Further, it is not difficult to place many of the types of aiding discussed in Chapters 2-5 of this book in this taxonomy.

The difficulties of successfully applying aiding concepts depend on the context, as shown in Table 5. Three broad classes of difficulty relate to the knowledge needed to enable the aiding, the nature of human-aid interaction, and various risks associated with the use of aiding.
Nature of Knowledge Base
  1. Lack of structure                   Low       Low        High      Moderate
  ...
  7. Potential immediate misfortune      High      Moderate   Low       Low
  8. Potential long-term misfortune      Low       Moderate   High      High
  9. Lack of traceability                Low       Low        High      Moderate

Table 5. Obstacles to Applications
This brief summary of our methodology for design of aiding serves to illustrate several key elements of how we have usually approached design guidance for a range of areas within human-machine interaction. First and foremost, the guidance is presented as a framework rather than a prescription. The framework is intended to assure that designers address important questions and are aware of alternative answers. Second, the intent is to inform designers of the likely implications of alternative answers. Third, the goal has been to enable designers to apply best practices without having to resort to the research literature. Fourth, when possible, we have tried to provide tools that support making design decisions within the framework. Such tools are discussed in depth in Chapter 8.

It is essential to emphasize that this design methodology, and others discussed throughout this book (e.g., design of adaptive aiding in Chapter 4), are not intended to be "one pass" design prescriptions. Instead, these methodologies provide the backdrop for evolutionary design processes that involve "spirals" to both better understand the need and to yield a better designed solution (Rouse & Valusek, 1993). This type of spiraling is discussed in Chapter 8.

It is also important to stress the fact that the design of a method or tool is also an opportunity for human-centered design where, in this case, the primary stakeholders are designers. Our extensive studies of what designers want from software tools (Rouse, 1998) led to significant insights that strongly influenced the design, development, and deployment of our human-centered methods and tools. These impacts are discussed in Chapter 7.
Design of Training

Rather than directly augment human behavior and performance with aiding, you can create the potential to perform with training. We developed a methodology for design of training that parallels the aiding design methodology (Johnson, et al., 1985; Rouse, 1991a, Chapter 9). The objectives of training are imparting knowledge and initiating the development of skills. High levels of skill tend to come from practice and experience subsequent to training. The training systems we have developed have tended to focus on two types of knowledge:

• Operational Knowledge: How to work the system
• System Knowledge: How the system works
The role of these two types of knowledge in human performance was discussed in Chapter 5 in the context of failure diagnosis. Our methodology for training design includes three overall steps:

• Characterize knowledge requirements in terms of operational knowledge and system knowledge.

• Map knowledge requirements to approaches to training, ranging from passive to active training to actual experience.

• Assess effectiveness and efficiency of approaches to training.

These three steps are supported by several tabulations that facilitate the process of characterization, mapping, and assessing. The result is a conceptual design for training. Detailed design is left to more traditional training design methods. Chapter 9 of Design for Success elaborates this design methodology (Rouse, 1991a).
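As a toy illustration of the characterize-map-assess structure (the actual tabulations are in Rouse, 1991a, Chapter 9), the sketch below works through two invented knowledge requirements; the candidate approaches and scores are hypothetical.

```python
# Toy sketch of the three-step training design methodology. The knowledge
# characterizations, candidate approaches, and scores are invented examples,
# not the book's actual tabulations.

requirements = [
    ("operational", "normal startup procedure"),   # how to work the system
    ("system", "process dynamics behind alarms"),  # how the system works
]

# Step 2: map each knowledge type to candidate approaches, ordered from
# passive instruction toward active practice.
approaches = {
    "operational": ["procedure walkthroughs", "part-task simulator practice"],
    "system": ["principles instruction", "guided fault exploration"],
}

# Step 3: assumed (effectiveness, cost) scores used to compare approaches.
scores = {
    "procedure walkthroughs":       (0.5, 1),
    "part-task simulator practice": (0.8, 3),
    "principles instruction":       (0.6, 2),
    "guided fault exploration":     (0.9, 4),
}

for kind, topic in requirements:                   # Step 1: characterize
    options = approaches[kind]                     # Step 2: map
    best = max(options, key=lambda a: scores[a][0] / scores[a][1])  # Step 3
    print(f"{topic} ({kind} knowledge): {best}")
```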
Training vs. Aiding Tradeoffs

The balance of training and aiding is a central tradeoff in the design of a complex system. Indeed, it is quite reasonable to argue that the training system is part of the overall system. However, this perspective is seldom adopted in any significant depth. This tradeoff has importance both before personnel take responsibility for a system (i.e., as they are being prepared) and once they have taken responsibility for the system (i.e., in operation). The general question is: when should people be trained and when should they be aided?

As with much of the research in this book, we first addressed this question with a simulation model (Rouse, 1987b). The simulation represented an integrated support system for aiding, explaining, and tutoring. This model included probabilistic task completion and time until completion. The impact of each type of support on these measures was hypothesized. Numerous sensitivity analyses were performed to assess the impacts of a wide variety of functional relationships. The overall conclusion was that parameters such as task time and frequency and human
aptitude affected the mix of training and aiding that yields the best performance. These insights gained us support from the Air Force to develop an overall methodology for addressing this tradeoff. The methodology and associated software tool was called TRAIDOFF (Rouse, 1991b; Zenyuh, et al., 1993). Without doubt, this has to be one of the best acronyms we have created across the decades of research discussed in this book. The TRAIDOFF framework includes 15 specific design tasks organized within the following six categories:

• Identify Tasks

• Assess Human Abilities, Limitations, and Preferences

• Determine Training and Aiding Alternatives

• Formulate Tradeoffs

• Analyze Tradeoffs

• Integrate Tradeoffs
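To give a feel for the kind of simulation-based tradeoff exploration described above, here is a minimal Monte Carlo sketch; the functional forms and parameters are invented for illustration and do not reproduce the Rouse (1987b) model or TRAIDOFF.

```python
# Toy Monte Carlo sketch of the training vs. aiding tradeoff: for a given
# task frequency and human aptitude, which investment yields better expected
# task performance? Functional forms and parameters are invented.
import random

def p_success(aptitude, frequency, trained, aided):
    """Assumed success probability: training builds skill whose payoff grows
    with task frequency; aiding helps at execution time regardless."""
    skill = aptitude + (0.3 * frequency if trained else 0.0)
    support = 0.25 if aided else 0.0
    return min(1.0, skill + support)

def simulate(strategy, aptitude, frequency, trials=10_000):
    trained, aided = strategy == "train", strategy == "aid"
    wins = sum(random.random() < p_success(aptitude, frequency, trained, aided)
               for _ in range(trials))
    return wins / trials

random.seed(1)
for freq in (0.2, 0.9):          # rare vs. frequent task
    for apt in (0.4, 0.7):       # low vs. high aptitude
        best = max(("train", "aid"), key=lambda s: simulate(s, apt, freq))
        print(f"frequency={freq}, aptitude={apt}: prefer {best}")
```

Even with these crude assumptions, the preferred mix shifts with task frequency and aptitude, which is the qualitative conclusion of the simulation study.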
Key to tradeoff analysis is the ability to predict the impact of the alternatives on performance. Approaches to prediction provided include behavior predictions (from which performance can be calculated), performance predictions, and training/aiding guidelines. The supporting material for this methodology includes alternative sources for each approach and an evaluation of many sources.

The Air Force requested that we develop versions of the TRAIDOFF methodology for three types of users -- novice, journeyman, and expert. Thus, we had to characterize the abilities, limitations, and preferences for three levels of analyst/designer. This provided a very rich opportunity to pursue human-centered design of a methodology and software tool intended, in itself, to support human-centered design. This provides a wonderful illustration of why traditional user-centered design can be inadequate. The result was that each of the 15 steps in the methodology was supported differently depending on the level of the user of TRAIDOFF. The spiral design process yielded three successive prototype software
systems, the last of which was delivered to the Air Force for their use. The final version includes a rather unusual support concept. We wanted to provide expert advice to users gleaned from a wide range of training and aiding experts. In the process of compiling the knowledge to create this expert system, we discovered that the experts did not agree. For example, training experts would try to address all needs with training, while aiding experts focused on aiding as the solution.

At first this result, which was not surprising in retrospect, presented a conundrum. What should the expert system advise? Our dilemma disappeared when we realized we could provide a panel of experts rather than just one expert. Users of TRAIDOFF could consult multiple experts, knowing the background and orientation of each panel member. The richer multi-disciplinary guidance that resulted was viewed as rather innovative. Serendipity was the source of this innovation - we were just trying to find our way out of a dilemma.

My chapter in Hal Booher's MANPRINT Handbook (Rouse, 1990a) brings together the methodologies for design of aiding and training, as well as the key elements of the overall TRAIDOFF framework for addressing training vs. aiding tradeoffs. See also Chapter 9 of Human-Centered Design (Rouse, 1991a).
Design Evaluation

Evaluation is essential to assuring that designs solve the problems for which they are targeted, solve these problems in acceptable ways, and solve them in ways that are worth it in terms of benefits and costs. In some situations, solutions must be certified in order to be deployed (Small & Rouse, 1994). In many situations, the ultimate evaluation occurs in the marketplace where the solution succeeds or fails with customers.

For the purpose of developing design guidance on evaluation, we reviewed almost 200 documents reporting studies of complex human-machine systems (Rouse & Rouse, 1984). We found that only 115 of these documents had sufficient information to assess the study reported. The 85 reports that lacked sufficient information were almost all conference papers and technical reports. This serves to emphasize the importance of the journal review process to subsequent use of the literature.

We compiled information on the domain of study (vehicle control, process control, system maintenance, and other), the means of study (simulation, field study, etc.), and the product and process measures of
performance employed. Of particular importance was the extent to which the study yielded definitive results regarding the impacts of the manipulations performed, that is, independent variables. We defined "definitive" to include lack of significant differences if these assessments included consideration of the statistical power of the results. In terms of abilities to yield definitive results, we reached three overarching conclusions:

• Full-scope and part-task simulator methods are more likely to yield definitive results, particularly for studies in the domains of vehicle control and other.

• Disaggregate measures are more likely to yield definitive results, particularly for studies in the domains of vehicle control and other.

• If aggregate measures must be used, simulator methods are preferred, with part-task simulators being less problematic than full-scope simulators.

These conclusions were key ingredients in the Evaluation Guidelines we prepared for the Electric Power Research Institute (Rouse, 1984). These guidelines provide detailed advice on choosing the means of study and the product and process measures to employ, as well as material on planning evaluation protocols. This material is reviewed in Chapter 7 of Design for Success (Rouse, 1991a).

In the process of developing these guidelines, we interviewed a variety of nuclear power plant personnel to assess their perceptions of evaluation. One engineer's summary of evaluation was particularly succinct: "You flip the switch. If the red light comes on, you've got it!" This statement reflects the "it works" school of thought. However, as we all have experienced with computers, VCRs, etc., this level of evaluation is often insufficient.

One area with considerable subtlety is human acceptance - the third central objective of human-centered design. Nancy Morris and I studied this issue with particular emphasis on acceptance of automation decisions in terms of levels, functions, and impacts of automation (Rouse & Morris, 1986). The result was a set of 12 design guidelines involving three phases, given candidate functions for computer aiding or computerizing:
• Front-end analysis to identify potential acceptance problems

• Making automation decisions in light of potential acceptance problems

• Implementing change in terms of computer aiding or computerizing

The guidelines are as follows:

Front-End Analysis

1. Characterize the functions of interest in terms of whether or not these
functions currently require humans to exercise significant levels of skill, judgment, and/or creativity.

2. Determine the extent to which the humans involved with these functions value the opportunities to exercise skill, judgment, and/or creativity.
3. Determine if these desires are due to needs to feel in control, achieve self-satisfaction in task performance, or perceptions of potential inadequacies of technology in terms of quality of performance and/or ease of use.
4. If need to be in control or self-satisfaction are not the central concerns,
determine whether the perceived inadequacies of the technology are well founded. If so, eliminate the functions in question from the candidate set; if not, provide demonstrations or other information to familiarize personnel with the actual capabilities of the technology.
Automation Decisions

5. To the extent possible, only change the system functions that personnel in the system feel should be changed (e.g., those for which they are willing to lose discretion).

6. To the extent necessary, particularly if number 5 cannot be followed, consider increasing the level and number of functions for which personnel are responsible so that they will be willing to change the functions of concern (e.g., expand the scope of their discretion).
7. Assure that the level and number of functions allocated to each person or type of personnel form a coherent set of responsibilities, with an
overall level of discretion consistent with the abilities and inclinations of the personnel.
8. Avoid changing functions when the anticipated level of performance is likely to result in regular intervention on the part of the personnel involved (e.g., assure that discretion once delegated need not be reassumed).

Implementing Change
9. Assure that all personnel involved are aware of the goals of the effort and what their roles will be after the change.

10. Provide training that assists personnel in gaining any newly required abilities to exercise skill, judgment, and/or creativity and helps them to internalize the personal value of having these abilities.

11. Involve personnel in planning and implementing the changes from both a system-wide and individual perspective, with particular emphasis on making the implementation process minimally disruptive.

12. Assure that personnel understand both the abilities and limitations of the new technology, know how to monitor and intervene appropriately, and retain clear feelings of still being responsible for system operations.
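Although the guidelines are qualitative, they lend themselves to encoding as a structured checklist that a design support tool could walk through during reviews. The sketch below is purely illustrative - the phase names follow the guidelines above, while the abridged wording, data structure, and review function are hypothetical:

```python
# Illustrative sketch: the acceptance guidelines as a review checklist.
# Wording is abridged; the structure is an assumption, not an actual tool.
ACCEPTANCE_GUIDELINES = {
    "Front-End Analysis": [
        "1. Do candidate functions require significant skill, judgment, or creativity?",
        "2. Do personnel value exercising these abilities?",
        "3. Are desires driven by control, self-satisfaction, or perceived inadequacies?",
        "4. Are perceived inadequacies well founded? Eliminate or demonstrate accordingly.",
    ],
    "Automation Decisions": [
        "5. Change only functions personnel feel should be changed.",
        "6. Consider expanding responsibilities to gain willingness to change.",
        "7. Keep each person's allocated functions a coherent set of responsibilities.",
        "8. Avoid changes likely to require regular human intervention.",
    ],
    "Implementing Change": [
        "9. Make goals and post-change roles clear to all personnel.",
        "10. Train newly required abilities and their personal value.",
        "11. Involve personnel in planning and implementing the changes.",
        "12. Ensure understanding of capabilities, limitations, and responsibility.",
    ],
}

def unresolved(responses):
    """Return checklist items not yet marked resolved.

    responses maps an item string to True once the team resolves it.
    """
    return [item for items in ACCEPTANCE_GUIDELINES.values()
            for item in items if not responses.get(item, False)]
```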
This set of guidelines provides another illustration of the way in which we have come to believe design of human-machine interaction should be supported. As noted earlier, the goal is to assure that designers address important questions and are aware of alternative answers and the likely implications of alternative answers so as to be able to apply best practices without having to resort to the research literature.
Process Control and Manufacturing

Much of the research discussed thus far in this book was primarily focused on aerospace and defense domains. The reason for this is simple. These were the domains for which research funding could be obtained. We did, however, conduct significant research in process control and manufacturing.
The process control research was motivated by European connections, first with John Rijnsdorp in The Netherlands (Rijnsdorp & Rouse, 1977), and later Tom Martin in Germany (Martin, et al., 1991). Our process-related research in the U.S. primarily focused on the electric utility industry, for which we prepared guidelines for display design (Frey, et al., 1984), evaluation (Rouse, 1984), and design of training (Johnson, et al., 1985).

My colleagues at Georgia Tech strongly recommended that we pursue manufacturing as an avenue for applying the ideas elaborated in Chapters 4 and 5. These pursuits began with contributing a chapter to a National Academy of Engineering book on advanced manufacturing (Rouse, 1988d). We knew that we also needed a compelling manufacturing case study to support our story. This case study was developed by applying the decision support design methodology discussed earlier in this chapter to production planning and control as illustrated in Figure 1 (Rouse, 1988a). The overall support concept was motivated by Jens Rasmussen's hierarchical model. Our version of this model is shown in Figure 2. The application of this methodology yielded the support system architecture shown in Figure 3. Notice how the terms generate, evaluate, and select appear throughout this five-level architecture. Each of the boxes with these labels can involve one or more of the aiding methods summarized in Table 4. See Rouse (1988a) for elaboration of the methods chosen.
Figure 1. Production Planning, Scheduling and Operations
Figure 2. Hierarchical View of Production
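The generate-evaluate-select pattern that recurs throughout the Figure 3 architecture can be illustrated with a toy example. The sketch below is not the system itself; the order data and the lateness criterion are invented solely to show the pattern at a single planning level:

```python
# Illustrative sketch of generate-evaluate-select at one level of planning.
# Orders and the lateness criterion are hypothetical.
from itertools import permutations

ORDERS = [("A", 4, 6), ("B", 2, 4), ("C", 3, 10)]  # (id, hours, due hour)

def generate(orders):
    """Enumerate candidate sequences; a real system would search or prune."""
    return permutations(orders)

def evaluate(sequence):
    """Score a candidate by total lateness against due times."""
    clock = lateness = 0
    for _, hours, due in sequence:
        clock += hours
        lateness += max(0, clock - due)
    return lateness

def select(candidates):
    """Choose the candidate with the lowest score."""
    return min(candidates, key=evaluate)

best = select(generate(ORDERS))
print([order_id for order_id, _, _ in best], "total lateness:", evaluate(best))
```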
I presented this concept at a meeting on intelligent manufacturing systems. The intent was to demonstrate to the audience that a systematic methodology could yield innovative concepts of significant value to the manufacturing systems they designed, constructed, and operated. While this objective was achieved with some members of the audience, the more frequent reaction upon seeing Figure 3 was that this concept “magically” emerged from our methodology. People could not imagine creating such an outcome themselves. This unexpected response led me to carefully track the reactions of people in manufacturing as we explored the possibilities with a variety of potential customers. We subsequently chronicled our attempt to transition intelligent interface technology and simulation-based training to manufacturing applications (Rouse & Hunt, 1991). We organized this story in terms of experiences with people in different roles (executive, manager, and engineer), different disciplines (management, industrial engineering, and mechanical engineering), and different perspectives on interface technology.
Figure 3. Overall Support System Architecture
Our conclusions are discussed in the context of technology transition in Chapter 9. At this point, however, it is useful to note a few conclusions. First, while our concepts were viewed as innovative in aerospace, they were viewed as risky in manufacturing. Several managers told us that they would like to be our second application. Second, the case for providing support to fighter aircraft pilots who cost $1 million to train was easier to make than the case for supporting production workers earning minimum wage. Third, at that time, the aerospace industry was much more sophisticated regarding advanced software than was the manufacturing industry. Primarily for these reasons, our foray into manufacturing was not successful.
CONCLUSIONS

The concept of human-centered design was outlined in Chapter 1. Research to understand and support humans was discussed in Chapters 2-5. This chapter has focused on design methodologies and supporting models and tools. Of particular note, we addressed the designer as a stakeholder in the process of creating supports for operators and maintainers. Human-centered design must consider the abilities, limitations, and preferences of designers if the value created by research on operators and maintainers is to be realized in practice.

Designers are discussed in much more detail in the next chapter. The transition from this chapter to the next represents a transition in my career. In the late 1980s and early 1990s, I moved from primarily studying operators and maintainers to focusing almost totally on designers and managers. Ken Boff and I pursued the nature of design and designers because we felt that our abilities to benefit operators and maintainers were limited by our influence on designers. In time, design and design support systems became focal topics, as will be discussed in Chapters 7-9.

My work with design teams in a variety of industries inevitably led to having to address the interactions of technical and business issues. Serendipity intervened when several business executives asked us to also address their business processes. They said, quite simply, that they could not take full advantage of our design methods and tools without reconsidering their business processes. This was their idea, not ours. But we quickly recognized that we were on the path of serendipity and the research in Chapters 10-11 resulted.

I firmly believe that the core principles of human-machine interaction apply whether the humans of interest are operators, maintainers, designers,
or managers, and whether the machine of interest is an airplane, factory, business process, or enterprise. The behavior and performance of people and organizations are central aspects of complex systems. Understanding behavior and performance is essential to successful systems in terms of performance, productivity, profit, innovation, safety, and sustainability at all levels.
REFERENCES

Boff, K.R., & Lincoln, J.E. (Eds.). (1986). Engineering data compendium. Wright-Patterson Air Force Base, OH: Air Force Armstrong Research Laboratory.

Booher, H.R. (Ed.). (2003). Handbook of human systems integration. New York: Wiley.

Booher, H.R., & Rouse, W.B. (1990). MANPRINT as the competitive edge. In H.R. Booher (Ed.), MANPRINT: An approach to systems integration (Chap. 20). New York: Van Nostrand Reinhold.

Frey, P.R., Rouse, W.B., & Garris, R.D. (1992). Big graphics and little screens: Designing graphical displays for maintenance tasks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-22(1), 10-20.

Frey, P.R., Rouse, W.B., & Garris, R.D. (1993). Big graphics and little screens: Model-based design of large-scale information displays. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 6, pp. 1-57). Greenwich, CT: JAI Press.

Frey, P.R., Sides, W.H., Hunt, R.M., & Rouse, W.B. (1984). Computer-Generated Display System Guidelines: Vol. 1. Display Design. Palo Alto, CA: Electric Power Research Institute.

Hammer, J.M., & Rouse, W.B. (1982). The human as a constrained optimal text editor. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(6), 777-784.

Johannsen, G., & Rouse, W.B. (1979). Mathematical concepts for modeling human behavior in complex man-machine systems. Human Factors, 21(6), 733-747.
Johnson, W.B., Maddox, M.E., Rouse, W.B., & Kiel, G.C. (1985). Diagnostic training for nuclear plant personnel. Volume 1: Courseware development. Palo Alto, CA: Electric Power Research Institute.

Klein, G.A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.

Klein, G.A. (2002). Intuition at work: Why developing your gut instincts will make you better at what you do. New York: Currency.

Klein, G.A., Orasanu, J., Calderwood, R., & Zsambok, C.E. (Eds.). (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex.

Martin, T., Kivinen, J., Rijnsdorp, J.E., Rodd, M.B., & Rouse, W.B. (1991). Appropriate automation - integrating technical, human, organizational, economic, and cultural factors. Automatica, 27(6), 901-917.

Mavor, A.S., & Pew, R.W. (1997). Representing human behavior in military simulations. Washington, DC: National Academy Press.

Moray, N.P., Ferrell, W.R., & Rouse, W.B. (Eds.). (1990). Robotics, control, and society. London: Taylor & Francis.

Morris, N.M., & Rouse, W.B. (1993). Human operators' response to error-likely situations in complex engineering systems. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 6). Greenwich, CT: JAI Press.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. New York: Elsevier.

Rasmussen, J., Pejtersen, A.M., & Goodstein, L.P. (1994). Cognitive Systems Engineering. New York: Wiley.

Rasmussen, J., & Rouse, W.B. (Eds.). (1981). Human detection and diagnosis of system failures. New York: Plenum Press.

Rijnsdorp, J.E., & Rouse, W.B. (1977). Design of man-machine interfaces in process control. In H.R. Van Nauta Lemke & H.B. Verbruggen (Eds.), Digital computer applications to process control (pp. 705-720). New York: North-Holland.

Rouse, W.B. (1975). Design of man-computer interfaces for on-line interactive systems. Proceedings of the IEEE, Special Issue on Interactive Computer Systems, 63(6), 847-857.
Rouse, W.B. (1979a). Approaches to mental workload. In N. Moray (Ed.), Mental workload: Its theory and measurement (pp. 255-262). New York: Plenum Press.

Rouse, W.B. (1979b). Human factors engineering. In K. Connolly (Ed.), Psychology survey (Vol. 2, Chap. 10). London: George Allen and Unwin.

Rouse, W.B. (1980a). Alternative approaches to modeling man-machine interaction. Journal of Cybernetics and Information Science, 3, 175-195.

Rouse, W.B. (1980b). Systems engineering models of human-machine interaction. New York: Elsevier.

Rouse, W.B. (1984). Computer-Generated Display System Guidelines: Vol. 2. Developing an Evaluation Plan. Palo Alto, CA: Electric Power Research Institute.

Rouse, W.B. (1985). On better mousetraps and basic research: Getting the applied world to the laboratory door. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(1), 2-8.

Rouse, W.B. (1986a). Design and evaluation of computer-based decision support systems. In S.J. Andriole (Ed.), Microcomputer decision support systems (Chap. 11). Wellesley, MA: QED Information Systems, Inc.

Rouse, W.B. (1986b). Supervisory control and display systems. In J. Zeidner (Ed.), Human productivity enhancement (Vol. 1, Chap. 8). New York: Praeger.

Rouse, W.B. (1987a). Man-machine systems. In J.A. White (Ed.), Production handbook (Chapter 2.3). New York: Wiley.

Rouse, W.B. (1987b). Model-based evaluation of an integrated support system concept. Large Scale Systems, 13, 33-42.

Rouse, W.B. (1988a). Intelligent decision support for advanced manufacturing systems. Manufacturing Review, 1(4), 236-243.

Rouse, W.B. (1988b). Design and evaluation of computer-based decision support systems. In N.E. Malagardis & T.J. Williams (Eds.), Standards in information technology and industrial control (pp. 169-190). Amsterdam: Elsevier.

Rouse, W.B. (1988c). Ladders, levels, and lapses, and other distinctions in the contributions of Jens Rasmussen. In L.P. Goodstein, H.B. Anderson, &
S.E. Olsen (Eds.), Tasks, errors, and mental models (pp. 315-323). London: Taylor & Francis.

Rouse, W.B. (1988d). The human role in advanced manufacturing systems. In W.D. Compton (Ed.), Design and analysis of integrated manufacturing systems (pp. 148-166). Washington, DC: National Academy Press.

Rouse, W.B. (1990a). Training and aiding personnel in complex systems: Alternative approaches and important tradeoffs. In H.R. Booher (Ed.), MANPRINT: An approach to systems integration (Chap. 14). New York: Van Nostrand Reinhold.

Rouse, W.B. (1990b). Human resource issues in system design. In N.P. Moray, W.R. Ferrell, & W.B. Rouse (Eds.), Robotics, control, and society (Chap. 18). London: Taylor & Francis.

Rouse, W.B. (1991a). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.

Rouse, W.B. (1991b). Conceptual design of a computational environment for analyzing tradeoffs between training and aiding. Information and Decision Technologies, 17, 227-254.

Rouse, W.B. (1995). Network models of human-machine interaction (pp. 109-126). In D. Batten, J. Casti, & R. Thord (Eds.), Networks in action. Berlin: Springer-Verlag.

Rouse, W.B. (Ed.). (1984-1996). Human/technology interaction in complex systems (Vols. 1-8). Greenwich, CT: JAI Press.

Rouse, W.B. (1997). Control models of design processes: Understanding and supporting design and designers. In T.B. Sheridan & T. van Lunteren (Eds.), Perspectives on the Human Controller. Mahwah, NJ: Erlbaum.

Rouse, W.B. (1998). Computer support of collaborative planning. Journal of the American Society for Information Science, 49(9), 832-839.

Rouse, W.B. (2001). Human-centered product planning and design. In G. Salvendy (Ed.), Handbook of industrial engineering (3rd Edition, Chapter 49). New York: Wiley.

Rouse, W.B., & Boff, K.R. (Eds.). (1987). System design: Behavioral perspectives on designers, tools, and organizations. New York: Elsevier.
Rouse, W.B., & Boff, K.R. (1997). Assessing cost/benefits of human factors. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (Chapter 49). New York: Wiley.

Rouse, W.B., & Boff, K.R. (2003). Cost/benefit analysis for human systems integration: Assessing and trading off economic and non-economic impacts of HSI. In H.R. Booher (Ed.), Handbook of human systems integration (Chapter 17). New York: Wiley.

Rouse, W.B., & Boff, K.R. (2005). Cost/benefit analysis for human systems investments: Assessing and trading off economic and non-economic impacts of human factors and ergonomics. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (Chapter 43). New York: Wiley.

Rouse, W.B., & Garg, D.P. (1973). Suboptimal design of a class of nonlinear controllers. ASME Transactions, Journal of Dynamic Systems, Measurement, and Control, 95(4), 352-355.

Rouse, W.B., & Gopher, D. (1977). Estimation and control theory: Application to modeling human behavior. Human Factors, 19(4), 315-329.

Rouse, W.B., & Hunt, R.M. (1991). Transitioning advanced interface technology from aerospace to manufacturing applications. International Journal of Industrial Ergonomics, 7(3), 187-195.

Rouse, W.B., Kisner, R.A., Frey, P.R., & Rouse, S.H. (1984). A method for analytical evaluation of computer decision aids. Technical Report NUREG/CR-3655; ORNL/TM-9066. Oak Ridge, TN: Oak Ridge National Laboratory.

Rouse, W.B., Kober, N., & Mavor, A. (Eds.). (1997). The case for human factors in industry and government. Washington, DC: National Academy Press.

Rouse, W.B., & Morris, N.M. (1986). Understanding and enhancing user acceptance of computer technology. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(6), 965-973.

Rouse, W.B., & Rouse, S.H. (1983). A framework for research on adaptive decision aids. Technical Report AFAMRL-TR-83-082. Wright-Patterson Air Force Base, OH: Air Force Aerospace Medical Research Laboratory.
Rouse, W.B., & Rouse, S.H. (1984). A note on evaluation of complex man-machine systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-14(4), 633-636.

Rouse, W.B., & Valusek, J. (1993). Evolutionary design of systems that support decision making. In G.A. Klein, J. Orasanu, R. Calderwood, & C.E. Zsambok (Eds.), Decision making in action: Models and methods (Chap. 16). Norwood, NJ: Ablex.

Rouse, W.B., Edwards, S.L., & Hammer, J.M. (1993). Modeling the dynamics of mental workload and human performance in complex systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-23(6), 1662-1671.

Sage, A.P., & Rouse, W.B. (1986). Aiding the human decision maker through the knowledge-based sciences. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(4), 511-521.

Sage, A.P., & Rouse, W.B. (2004a). Introduction to systems engineering. In D. Christensen (Ed.), Handbook of Electrical Engineering (Chapter 5.1). New York: Wiley.

Sage, A.P., & Rouse, W.B. (2004b). Elements and techniques of systems engineering and management. In D. Christensen (Ed.), Handbook of Electrical Engineering (Chapter 5.2). New York: Wiley.

Sage, A.P., & Rouse, W.B. (Eds.). (1999). Handbook of systems engineering and management. New York: Wiley.

Salvendy, G. (Ed.). (1997). Handbook of Human Factors and Ergonomics (1st Edition). New York: Wiley.

Salvendy, G. (Ed.). (2005). Handbook of Human Factors and Ergonomics (2nd Edition). New York: Wiley.

Small, R.L., & Rouse, W.B. (1994). Certify for success: A methodology for human-centered certification of advanced aviation systems. In J.A. Wise, V.D. Hopkin, & D.J. Garland (Eds.), Human factors certification of advanced aviation technologies (pp. 125-133). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.
Small, R.L., Hammer, J.M., & Rouse, W.B. (1997). Comparing display symbology for an advanced air traffic control tower application. IEEE Transactions on Systems, Man, and Cybernetics, 22(6), 783-790.

Wickens, C.D., & Rouse, W.B. (1987). The role of human factors in military R&D. In F. Farley & C.H. Null (Eds.), Using psychological science: Making the public case (pp. 167-178). Washington, DC: Federation of Behavioral, Psychological, and Cognitive Sciences.

Zenyuh, J.P., Rouse, W.B., Duncan, P.C., & Frey, P.R. (1993). Analysis of training versus aiding tradeoffs. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 6, pp. 137-232). Greenwich, CT: JAI Press.
Chapter 7
INFORMATION, KNOWLEDGE, AND DECISION MAKING
INTRODUCTION

"I can't possibly deal with another information source. In fact, I can't deal with the sources I already have. It's just too much. All this stuff that I should be reading and digesting -- I never get around to it. The pile just gets higher and higher until I throw it all out. Then, I start again with good intentions. At least electronically, the stuff is out of sight."

I have listened to many versions of this monologue. The nouns change for business managers vs. product designers vs. research scientists. But the story plot remains the same. Too much information. Too little time. Lots of frustration. Perceived needs to know, but less and less time to know. Often ill-informed; occasionally uninformed.

From an information and knowledge* provider's perspective, the challenge is to gain and capture people's attention. How can you assure that your content and tools are noticed and valued? How do you foster needs to know for what you provide? Recent experiences have provided a wide range of wrong answers to these questions.

On the surface, it might appear that the Internet, perhaps laced with push technology and intelligent agents, is exactly the type of support needed. However, recent studies suggest that the Internet is not the answer. While the Internet greatly enhances access, its impact on utilization has been more limited. Fire hoses are not the most effective way to cure thirst.

The role of information in decision making is much more subtle than simply more is better. In fact, "more" is the source of many of the aforementioned frustrations. Rather than more information, people want
* Distinctions among data, information, and knowledge are addressed later in this chapter. I often use the term information to connote both information and knowledge.
what they need, when they need it, in an easily accessible and usable manner. The best ways to satisfy these criteria when providing information are, by no means, as straightforward as one might hope. To satisfy the need to know, we have to understand how information and knowledge are used to make decisions. Fundamental understanding of why and how people value information is essential to maximizing chances to meet their needs successfully.

It is important to distinguish between non-discretionary and discretionary needs to know. Information and knowledge that are absolutely required to proceed will always be sought, accessed, and used - attention is demanded. Discretionary information, in contrast, must "earn" its way into decision making processes. I have long been particularly interested in people's inclinations and behaviors in discretionary access and utilization of information as bases for making decisions. The need to know is the essential phenomenon addressed in this chapter (Rouse, 2002). Several questions have been especially interesting:

• Why are so many decisions apparently under-informed due to lack of access and use of what would seem to be high-value, but nevertheless discretionary, information and knowledge?

• What factors underlie decision makers' perceived needs for information and knowledge as a basis for formulating, choosing among, and committing to courses of action?

• What types of support systems would enable decisions to be better informed via high-value information and knowledge?

• How could people be influenced to access and use such information and knowledge support systems?

Answers to these questions should enable identifying those aspects of information and knowledge management systems most important to successful satisfaction of needs to know. This chapter focuses on the subtleties of how information and knowledge affect - and often do not affect - decision making. This exploration shows that more is only better under particular circumstances. Further, these circumstances are very context-dependent and not particularly open to "one size fits all" solutions.
Figure 1 elaborates the contexts of interest. The focus of this chapter is on research, design, and management rather than production, sales, and support. Hence, I do not address business-to-business and business-to-consumer e-commerce. Solutions specific to these contexts, for example, Enterprise Resource Planning (ERP), Material Resource Planning (MRP), Sales Force Automation (SFA), and Customer Relationship Management (CRM), are not of concern here.

I cannot help but comment briefly on the notion of these systems being "solutions." A well-known aerospace company retained me to address the question of what Knowledge Management (KM) "solution" they should install. My first question was, "Where is a lack of knowledge hurting your business?" The information technology (IT) personnel had no idea of how to answer this question. They were just responding to a request from the executive team. In pursuing this question, we discovered that a lack of sharing of knowledge was putting major sales opportunities at risk. We then addressed the question of how KM could help with this problem. We concluded that there were software solutions to this problem. At this point, it was reasonable to say that the chosen software system was a solution - to a specific business problem.
Figure 1. Context of Decision Making
This was not an isolated experience. An appliance company asked me the same question, to which I provided the same response. In their case, inaccuracy in demand forecasts was the specific business problem for which KM was the solution. Bottom line - there are no general solutions to all problems.

Figure 2 clarifies the information requirements of interest in this chapter. The concern is with long-term issues and the use of discretionary information and knowledge to address these issues. This allows for the possibilities of researchers, designers, and managers ignoring the issues and avoiding the information and knowledge. The overarching question is how to provide sufficient value that they will go beyond "muddling through" in this way.
Figure 2. Information Requirements Versus Time. [The figure plots time horizon (immediate - today; near term - this week or month; long term - next quarter or later) against requirement (mandatory - cannot proceed without it; standard practice - everyone uses it; discretionary - value must be clear). The focus here is long-term, discretionary information and knowledge.]
It is essential to emphasize that this chapter is focused on one phenomenon - the need to know. There are many other phenomena associated with information and knowledge management that are not addressed here (Rouse, 1994b; Rouse & Sage, 1999; Sage & Rouse, 1999a, 1999b). Nevertheless, my experience is that lack of understanding of decision makers' needs to know - and provision of support systems accordingly - is precisely the missing link in the success of many information and knowledge management systems.

The next section considers the nature of information and knowledge. The key distinctions between these two constructs are often confused. This results, for example, in labeling many systems as knowledge management systems when they are really information systems. Of course, this is due in part to where we are in the "knowledge management hype cycle" (Caldwell & Harris, 2002). Following these clarifications, I consider information and knowledge seeking as they influence decision making in three domains: R&D, design, and management. Our extensive studies in these domains over the past 20 years are discussed and summarized. The emphasis in these discussions is on fundamental understanding of the need to know. Finally, I outline the implications of understanding the need to know for design, development, and deployment of information and knowledge support systems. A key element of this discussion is an information and knowledge "value space." This construct provides a framework for supporting access and utilization of high-value information and knowledge.
INFORMATION AND KNOWLEDGE

It is important at the outset to clarify definitions and distinctions among data, information, and knowledge:

• Data are the results of measurements of variables, for example, voltages, response times, or opinions.

• Information is an assembly of data in a comprehensive form capable of communication and use, for instance, tables or charts of statistics or trends.

• Knowledge is information evaluated and organized by the human mind so that it can be used purposefully - conclusions or explanations are examples.
For design, information might include the identity of an online handbook and how to access it. Upon accessing this handbook, one might gain the information that the yield stress of a particular material is 30,000 psi. How and why to use this parameter would constitute knowledge. The user may already know "how" and "why," and, therefore, all they gain is information. Or, they may gain this knowledge from the handbook. Managers can identify an online source of competitors' publicly filed financial statements. Accessing this information, a manager might learn that a key competitor's SG&A (sales, general and administrative) costs have been steadily decreasing over the past several years. This may provide the knowledge, with some uncertainty, that the competitor has successfully undertaken business process reengineering efforts.

As these examples illustrate, information is a bit more straightforward than knowledge. In general, information answers "what" questions - what sources, what magnitudes, etc. Knowledge tends to answer "how" and "why" questions relative to, for example, research results, design parameters, and financial reports. Often users already know "how" and "why." Sometimes sources do not provide this knowledge. Frequently, such answers are only found by asking other people rather than more formal sources.

Why does this distinction matter? To a great extent, the importance of the many results of studies reviewed in this chapter does not depend on whether the distinction between information and knowledge is invoked. However, the ways in which people are best supported are highly influenced by this distinction. Information can be captured and stored in databases. Knowledge, in contrast, is often only available from people who know. The "how" and "why" may be tacit, only accessible via context-specific questions.

Table 1 illustrates the range of information and knowledge of interest for the three domains discussed here. This representative set of examples portrays the great variety of needs of researchers, designers, and managers. Meeting these needs - and perhaps making a profit while doing it - requires that value be provided.
Table 1. Types of Information & Knowledge

Underlying phenomena - Research: previous studies of phenomena of interest; Design: compilations of data & equations; Management: company case studies & lessons learned.

Critical issues - Research: interactions among key variables; Design: cost vs. performance tradeoffs; Management: cross-company comparisons.

Projections of important variables - Research: tabulations of properties; Design: operating curves & conditions; Management: financial performance as a function of R&D investment.

Characteristics of alternatives - Research: alternative theories of phenomena; Design: alternative technologies & processes; Management: alternative strategies & tactics.

Inputs to tradeoff analyses & optimization - Research: computational requirements; Design: technology maturity, production learning curves; Management: projections of risks & returns of portfolio.

Identity & assessment of competitors - Research: publications of competing investigators; Design: competitors' product characteristics; Management: market players, competitive positions.

Forecasts of tangible & intangible impacts - Research: reports of limiting conditions; Design: test data & usability studies; Management: market size & share, impacts of uncertainties.
The value of information and knowledge can be viewed in terms of three attributes (Rouse & Rouse, 1984; Rouse, 1986a):

• Relevance to tasks and decisions over some time horizon

• Reduction of uncertainty for recipients

• Appropriateness of form for access and use
Information and knowledge are, therefore, valuable to the extent that they relate to users' goals and plans, tell users something about which they were uncertain, and are provided in an easily digestible and usable form. Thus, for example, information that does not relate to tasks within a particular user's time horizon - which might be only minutes or hours - will not be perceived as valuable. Knowledge about the "why" associated with a particular phenomenon will not be seen as valuable if the user is not interested or already knows the answer. Value, therefore, is rather user specific.

Human seeking and use of information and knowledge has received considerable attention (Rouse & Rouse, 1984; Rouse, 1994b). Human behaviors often exhibit the following:

• Lack of perception of a large amount of untapped and potentially valuable information and knowledge

• Little time devoted to formal information and knowledge seeking, despite awareness of the existence and relevance of these sources

• Heavy reliance on other people rather than formal sources - people who "know" can often answer what, how, and why questions

• Difficulty translating information and knowledge across domains and across disciplines, limiting digestibility and hence value

• Poor followup in digesting information and knowledge accessed, as well as using it to inform decisions

All of the above undermine perceived value. Hence, substantially improving accessibility, for instance, via the Internet, does not necessarily yield commensurate improvements in utilization and willingness to pay. In the remainder of this chapter, I explore these and related phenomena in the domains of research, design, and management, concluding with a range of observations on the implications for creating successful support systems.
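These attributes suggest a simple multi-attribute view of value. The sketch below scores candidate items by weighting the three attributes named above; the weights and item scores are invented for illustration, and real assessments would be user- and task-specific:

```python
# Illustrative sketch: scoring items by relevance, uncertainty reduction,
# and appropriateness of form. Weights and scores are invented.
def value(item, weights=(0.5, 0.3, 0.2)):
    w_rel, w_unc, w_form = weights
    return (w_rel * item["relevance"]
            + w_unc * item["uncertainty_reduction"]
            + w_form * item["form"])

items = [
    {"id": "handbook table", "relevance": 0.9, "uncertainty_reduction": 0.6, "form": 0.9},
    {"id": "conference paper", "relevance": 0.9, "uncertainty_reduction": 0.8, "form": 0.3},
    {"id": "colleague's advice", "relevance": 0.7, "uncertainty_reduction": 0.7, "form": 0.8},
]

for item in sorted(items, key=value, reverse=True):
    print(f"{item['id']}: {value(item):.2f}")
```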
R&D DECISION MAKING

The domain of R&D provides a good starting point for exploring the need to know. Most of the early attempts to develop computer-based approaches to information and knowledge seeking began in research libraries (Bush, 1945; Licklider, 1965). My foray into this area was motivated by our long series of studies of managing library operations, including networks of libraries, discussed in Chapter 3. At that time, computer-based databases were just becoming prevalent.

A central element of research involves identifying, digesting, and generating information and knowledge to "fuel" the broad research enterprise. In particular, an important element of research involves researching the literature; another important element concerns contributing to this literature. Throughout the 1980s, my colleagues and I conducted a series of studies of information and knowledge seeking in R&D. These studies focused on how researchers perceive value and how computer-based approaches can be used to enhance value received when searching the research literature.

Dave Morehead conducted five controlled experiments in three different online database environments and topical domains - agriculture, fiction, and operations research - using the DBASE (Database Access and Search Environment) environment shown in Figure 3. These experiments focused on researchers as users of DBASE and involved 60 engineering students and faculty members across the five experiments.

The results of these experiments provided a variety of significant insights into people's abilities to specify, recognize, access, and utilize valuable information. Not surprisingly, the nature of search questions and structure of information available in databases strongly affect performance (Rouse, Rouse & Morehead, 1982; Morehead & Rouse, 1982). When elements of information explicitly reference each other, for instance, via reference lists, it makes it easier for users to find significant portions of relevant material. On the other hand, structure can be overwhelming and perhaps deceiving. The network of references from and to elements of information can be quite rich. Aiding in "mining the veins" in such networks can make the difference between this richness enhancing vs. degrading performance. In other words, more information about relationship networks may only lead to finding more valuable knowledge if help is provided in using this information (Morehead & Rouse, 1983; Morehead, Pejtersen & Rouse, 1984).
Figure 3. The DBASE Environment
Users often have difficulty adequately specifying the value they seek in terms of keywords, perhaps augmented by Boolean logic. User-tailored value functions can be useful to enhance performance by specifying characteristics of value and weightings of these characteristics, as well as topical keywords. However, users may not be fully able to specify feature sets that define value (Morehead, Pejtersen & Rouse, 1984; Morehead & Rouse, 1985a). In general, users are much more capable of recognizing value than specifying it. Users readily recognize examples of value but have difficulty specifying the attributes of value. Users' abilities can be leveraged by having them classify retrieved examples - for example, in categories of excellent, good, fair, and poor - and then having the computer reformulate value functions based on these discriminations. Reformulated value functions can then be used to retrieve new candidates, which the user then classifies, and so on (Morehead & Rouse, 1985b).

One experiment focused on users' behaviors several weeks after they had identified high-value information (i.e., article titles and abstracts) regarding potentially high-value knowledge (i.e., article contents) relative to research in which they were currently involved. Very few users had actually retrieved the articles of interest (Morehead & Rouse, 1985a). The most common reason stated was that it was too much effort to walk across campus to the library. Thus, very high relevance and uncertainty were not enough without very easy access and use.

In this series of studies and others we repeatedly found that ease of access was a dominant factor once the relevant information was found. Of course, we currently are able to access information very easily, usually without leaving our desk. This presents another dilemma, however, in that we can now readily access enormous amounts of information and knowledge, but our abilities to consume this material have not necessarily improved.

In summary, researchers' decisions regarding access and use of potentially valuable information and knowledge involve both specifying and recognizing value, as well as taking advantage of this value in informing and guiding their research. While people have substantial difficulties specifying what they want, they are reasonably good at recognizing value in particular items. This value may, or may not, influence the course of their research. Thus, information and knowledge management systems are but one element of satisfying needs to know.
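A minimal sketch of the specify-retrieve-classify-reformulate loop described above follows. It uses a Rocchio-style weight update, which is an assumption chosen for illustration rather than Morehead and Rouse's actual algorithm, and the feature matrix and user ratings are synthetic:

```python
# Illustrative relevance-feedback loop: retrieve, have the user classify,
# then reformulate the value function's weights toward highly rated items.
import numpy as np

def retrieve(weights, features, k=3):
    """Return indices of the k items scoring highest under current weights."""
    scores = features @ weights
    return np.argsort(scores)[::-1][:k]

def reformulate(weights, features, ratings, rate=0.5):
    """Shift weights toward well-rated items; ratings map index -> [0, 1]."""
    for idx, rating in ratings.items():
        weights = weights + rate * (rating - 0.5) * features[idx]
    return weights / np.linalg.norm(weights)

rng = np.random.default_rng(0)
features = rng.random((20, 5))       # 20 items described by 5 value features
weights = np.ones(5) / np.sqrt(5)    # initial, keyword-derived weights

for _ in range(3):                   # a few rounds of feedback
    shown = retrieve(weights, features)
    # Stand-in for the user's excellent/good/fair/poor judgments:
    ratings = {int(i): float(features[i, 0]) for i in shown}
    weights = reformulate(weights, features, ratings)
```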
DESIGN DECISION MAKING

Research both consumes and produces information and knowledge. Other researchers consume much of this. However, some of this information and knowledge eventually makes its way to use by designers and other practitioners to inform their decisions (Rouse, 1985). Interest in the transition of information and knowledge to use in this way led us to many years of study of systems design (Rouse, 1991; Cody, Rouse & Boff, 1995; Rouse & Boff, 1987, 1998).

Design is also a natural consumer and producer of information and knowledge. Market studies, customer requirements, and technology characteristics are commonly consumed. Drawings, parts lists, packaging designs, and process plans are typical outputs. Compiling a complete listing of all the information and knowledge consumed and produced is an immense task.

Most design environments - markets, companies, etc. - are sufficiently rich to preclude gathering all relevant information and knowledge. In addition, time pressures are such that "good enough" often precludes better or best. As a consequence, much seemingly relevant information and knowledge does not influence design decisions.

Throughout the 1980s and 1990s, Bill Cody, Ken Boff, and I conducted extensive studies of information and knowledge seeking in the context of design decision making. Using questionnaires, interviews, and direct observation, we studied 240 system designers in seven aerospace companies as well as procuring agencies. In these ways, we determined the nature of design questions asked and how information and knowledge are used to provide answers.

There were some wonderful surprises in the course of these studies. Perhaps the most notable occurred when interviewing cockpit designers at a major aerospace company. We asked designers to read and comment on the design scenario we had developed to test our ideas; see the appendix of Cody, Rouse, and Boff (1995). After reading this scenario, we asked one senior designer if we had captured the essence of design in the story. He responded, "This does not capture my job at all." We were crestfallen. However, he followed immediately with, "I sure wish it did!" Serendipitously, we had extrapolated well beyond current work practices and captured how this designer and others would like to do their work. We had not realized how futuristic our portrayal would appear.

This designer added a caution with his endorsement. He asked, "Have you estimated what it would actually cost to make available all this information and knowledge?" We had not, but soon did and then knew that we could not create the monolithic system we portrayed. This fundamental insight led to the tools discussed in Chapter 8.

Our studies of designers yielded an evolving theory of design (Rouse, 1987b, 1991; Rouse & Cody, 1988, 1989a). This theoretical framework provided a means for understanding designers' use of information and knowledge (Rouse, 1986a, b; Rouse, Cody & Boff, 1991) as well as design tools (Rouse & Cody, 1989c; Cody & Rouse, 1989; Rouse, Cody & Frey, 1992). This understanding enabled development of requirements for design support (Rouse, 1987a; Rouse & Cody, 1989b; Rouse, Cody, Boff & Frey, 1990) and eventually the overall design support concept (Cody, Rouse & Boff, 1995) shown in Figure 4.

This concept is premised on the designer and design support system interacting via a design information world. The support system receives, observes, and infers the designer's information and knowledge needs. The support system also constructs portrayals of information and knowledge. Further, it executes design procedures concerned with providing support and achieving support goals.
Figure 4. Nature of Design Support
The overall concept in Figure 4, as well as its several specific incarnations (Rouse, 1991), reflects a variety of conclusions gleaned from our many studies of designers, tools, and organizations. These conclusions concern the environment within which design occurs, the significant challenges that must be addressed, the nature and roles of design methods and tools, and the implications for design support systems, including information and knowledge management.
Environment of Design

The environment within which design occurs is often very complex. The envelope of acceptable design decisions, for instance, technology choices, is strongly affected by pervasive market and business drivers. Tradeoffs among conflicting drivers typically involve substantial cross-organization debates. Such debates are usually laced with significant market and business uncertainties.

Considering the wealth of both internal and external factors, design activities can be characterized as multi-attribute, multi-stakeholder, time-pressured, information and knowledge-rich problem solving and decision making. In light of this complexity, design often involves many people from a wide range of disciplines. This leads to needs for collaboration and provides many opportunities for cross-disciplinary friction.

As a consequence of this environment, designers cannot systematically seek and use the range of information and knowledge that might be expected from external perspectives. There is too much information, too little time, and too many players to approach most design issues in depth. Many design decisions are based on negotiation more than optimization.
Design Challenges

A central design challenge concerns understanding high-impact uncertainties and risks. Market uncertainties such as likely demand and competition are initially predominant. Technology uncertainties - for example, technology maturity - can be significant when new products and services are being entertained. Business uncertainties involving human and financial resources may also be of concern.

When the consequences of decisions warrant the investment of time and money, uncertainties may be formally characterized in terms of probability distributions. These characterizations may serve as inputs to formal decision analytic methods and tools. However, most decisions are not addressed with such rigor, in part because the consequences of less-than-optimal decisions are acceptable. It is also often the case that there are few data upon which to base characterizations of uncertainty.

The multi-disciplinary nature of design can make it difficult to represent, manipulate, and perhaps optimize alternative solutions. Cross-disciplinary methods and tools are usually limited to spreadsheet models of simple form. Considerable effort is, therefore, often invested to decompose problems into pieces addressable by specific disciplines. When system design problems inherently cross disciplines, designers spend much more time communicating and negotiating than seeking information and knowledge from formal sources. Large design teams and seemingly endless design meetings become the norm. Capturing and managing information about the evolving design is a significant challenge.
Knowledge about why certain design decisions were made usually remains solely captured in participants’ heads.
Design Methods and Tools

Most design organizations have come to recognize the importance of spiral discovery, prototyping, and evaluation processes. Traditional waterfall approaches assume that objectives can be chosen and then fixed, requirements can next be determined and then fixed, and design can proceed with objectives and requirements fixed. This approach has never worked as well as hoped and, when it does work, tends to yield solutions to yesterday's problems.

Pursuit of these design spirals is usually enabled by a toolbox of targeted, specialized tools with, ideally, compatible representations and information flows across tools. The "holy grail" of monolithic, integrated models and tools that cover the whole system and all design issues is seldom pursued now. Modularity and compatibility, rather than integration, are desired attributes of the elements of the toolbox.

Consequently, designers do not pursue or expect global optimization of decisions. Instead they try to reach satisfactory resolutions of tradeoffs. The overall process is iterative, spiraling in on problem understanding and solution specification. Information and knowledge feed this process. Increasingly, the information is accessed online, but knowledge still primarily remains in designers' heads.
Implications for Support

To successfully support the design of complex systems, information and knowledge must be cross-disciplinary - for example, across engineering, marketing, and finance. Such support should enable representation, computational experimentation, occasionally optimization, and typically negotiation of design solutions. This support must also assist multi-disciplinary teams - not just individuals - who are distributed in place and time.

An overarching need is for compatible representation and manipulation of physical and preference "spaces" and determination of impacts of interactions and uncertainties. Representation and manipulation of these spaces should allow multi-disciplinary participation. People should be able
to contribute their discipline-specific value to the overall community process. Consequently, designers want “workshops” of compatible, targeted tools rather than an ultimate “shopsmith” that does everything. This reflects both the fact that no single participant does everything and the recognition that repeated pursuits of the shopsmith solution have floundered in the complexity of realistic design problems. Thus, the “design information world” of Figure 4 becomes a rich assortment of representations of problems and solutions, compilations of inputs and outputs, and collections of models, methods, and tools from a wide variety of perspectives. Any one designer or design discipline need not access and use this full range of information and knowledge. However, the collective need to know can be satisfied by such capabilities.
MANAGEMENT DECISION MAKING

Our studies of design led to a variety of design tools with which we helped a range of enterprises develop and pursue plans for new products and systems. Quite frequently, we encountered the influence of the market and business issues noted in the last section. Our customers often also asked for help with these issues. Immersion in these issues led to books on new product planning (Rouse, 1991), strategic business planning (Rouse, 1992), organizational belief systems (Rouse, 1993), foundations of planning (Rouse, 1994a), market situation assessment (Rouse, 1996), organizational delusions (Rouse, 1998a), and essential challenges of strategic management (Rouse, 2001).

A primary concern was managers' seeking of information and knowledge in the process of formulating strategic objectives, framing product and technology plans, making investment decisions, monitoring ongoing results, and adjusting allocations of resources to reflect changing opportunities and results. This process involves selectively consuming information and knowledge, as well as generating the guidance to be pursued by the whole enterprise.

Use of most potentially relevant information and knowledge is almost completely discretionary - ill-informed decisions can still be made. Unlike researchers who have to cite other researchers and designers who must assure that products function properly, management has the freedom to "wing it" and commit to uninformed courses of action. This is due, in part,
to much desirable information and knowledge not being readily available, for example, competitors' intentions. Another very significant factor is the extent to which information and knowledge of interest are external vs. internal. Managers spend large portions of their time taking the pulse of their organization and pursuing remediation of various palpitations. This often leaves little time for gaining understanding and skill in best practices available from external sources.

This section summarizes a series of studies of methods and tools for supporting managers' needs to know. Throughout the 1990s and into the 2000s, my colleagues and I have developed and evaluated tools for supporting strategic management and, in the process, evaluated whether and how these tools impact decision making. This has enabled determining the nature of the information and knowledge that thousands of executives and senior managers value.

For the past few years, we have also been evaluating online approaches to satisfying managers' needs to know. This has included provision of a "reading service" and creation of a management journal and portal on the Internet. Evaluation of these offerings has provided numerous insights into what managers value and what they will pay to receive this value. We have also learned important lessons concerning what managers say they value vs. what their behaviors allow one to infer they value.

A critical insight concerns the time-relevance of information. Much more so than researchers and designers, managers' agendas are driven by the day's events. Consequently, their highest priority tasks are often unexpected and require prompt decisions and results. This leads to desires for information and knowledge that are of immediate use, rather than contingent use in the future. Satisfying such needs requires mechanisms for assessing these immediate needs. Otherwise, managers are likely to conclude that information and knowledge are valuable in general, but not in particular. This challenge has important implications for decision support.

Most of our studies of management decision making occur in the context of supporting executives and senior managers in formulating strategic plans, developing product plans, determining needed organizational changes, and so on. We have worked with well over 100 enterprises in the private, public, and non-profit sectors. Two thirds are companies and one third government agencies and non-profits. Two thirds are in the United States and one third are international.
Strategic Management Tools

As indicated earlier, the insights gained, tools created, and books written were gleaned from interactions with several thousand executives and senior managers. The methods used included interviews, questionnaires, and especially notes from working sessions. A suite of four software tools for strategic management emerged to address the issues depicted in Figure 5. These tools included:
• Product Planning Advisor for product strategy (Rouse, 1991)

• Business Planning Advisor for business strategy (Rouse, 1992)

• Situation Assessment Advisor for market assessment (Rouse, 1996)

• Technology Investment Advisor for technology strategy (Rouse, et al., 2000)

These tools are discussed in detail in Chapters 8-10, as well as in Essential Challenges of Strategic Management (Rouse, 2001).
Figure 5. Strategic Management Tools
An obvious question concerns the impacts of such tools on the senior management teams with which they were typically used. We addressed this question by compiling managers’ observations from the first 100 (of many hundreds) of the planning sessions we conducted with these tools. These 100 sessions are summarized in Table 2. Typical product planning sessions focused on satellite communications networks, or new lines of passenger vehicles, or next generation microprocessors. Example business planning sessions included formation of a health information systems enterprise, diversifying in markets for commercial aircraft systems, and orienting R&D toward internal markets. Illustrative consensus building sessions addressed abilities to convert from defense to commercial markets, impacts of impending environmental regulations, and success of ongoing total quality programs. We asked management teams what they wanted from computer-based tools for supporting collaborative planning and consensus building (Rouse, 1998b). Managers responded with four types of desires. First, from a process perspective, planning teams want tools to provide an understandable and straightforward process to guide their decisions and discussions, with a clear mandate to depart from this process whenever they choose. They want support, but not constraints.
Topic              | No. of Sessions | Total Participants | Average Participants
Product Planning   | 43              | 905                | 21.0
Business Planning  | 52              | 1051               | 20.2
Consensus Building | 5               | 82                 | 16.4
Totals             | 100             | 2038               | 20.4

Table 2. Summary of Initial Study
Second, planning teams want tools to capture the information compiled, decisions made, and linkages between these inputs and outputs so that they can communicate and justify their decisions, as well as reconstruct decision processes. They want to avoid their past experiences of recreating wheels because no one remembered previous issues and decisions.

Third, planning teams want computer-aided facilitation of group processes via management of the nominal decision making process using computer-based tools and large screen displays. They reported that, to the extent the tools externalized issues - that is, placed them on the screen rather than between individuals - they felt better able to address and resolve conflicting perceptions.

Finally, they expect surprises. Planning teams want tools that digest the information that they input, see patterns or trends, and then provide advice or guidance that the group perceives they would not have thought of without the tools. Otherwise, they question the "overhead" of feeding the tools all the information required to build the models that the tools manipulate.

Thus, management teams want process, information, facilitation, and surprises. Tools that provide these benefits will be perceived as valuable as long as the cost of use - mainly effort - is acceptable. This was usually assured by employing trained facilitators who were highly skilled in use of the software tools.

It is interesting to note that managers asked for information but not knowledge. This was certainly due in part to their not thinking in terms of this distinction. However, more importantly, they viewed the team members as the primary sources of knowledge. In fact, it was often commented that the tools helped them to "pick each other's brains." This knowledge enabled interpreting the information shown on the large computer screens.

In the large number of planning sessions we have conducted since this initial study, several additional insights have emerged. Initially, we conducted strategy seminars with an emphasis on management training. In recent years, this has become untenable. Training is now only viable in the context of doing real work, rather than just learning. Consequently, pressing problems and real decisions frame the typical agenda. Managers learn new methods and tools while also getting high-priority work done.

We have also found it important to frame, represent, and evaluate decisions within a few days. If the process is distributed over weeks or months, the quality of the results seems to suffer. Participation becomes
uneven and shared understanding becomes diffuse. It becomes more difficult to reach firm conclusions that do not have to be repeatedly revisited. It is crucial that all key stakeholders be involved at the same time. This is essential to assuring support, both from top management and from those responsible for implementation. The tools can, in part, compensate for uneven participation if everyone supports the chosen methods and tools. This makes it relatively easy to catch up with progress during one's absence. Ironically, despite the strong emphasis on speed just portrayed, we have also found that final decisions to commit to plans often take longer and longer. Time to market has come to be dominated by time to decision. This was particularly true during the early years of the new century, which was a period of great uncertainty. The last couple of years have been more "normal." Thus, effective decision making is not just a matter of superior information and knowledge management systems. The foregoing has significant implications for the nature of support for executives and senior managers. First and foremost, the value of the information and knowledge provided must be undeniably high. Further, the ease of use in consuming this information and knowledge must be excellent. This often dictates involving highly skilled facilitation. Easy-to-learn, powerful models, methods, and tools can be important. Knowledgeable facilitation can be key here also. Facilitators with strong domain knowledge can accelerate the overall process. The value of these models, methods, and tools should not depend on any user follow-up; nothing can be "left to the student" when the student is a senior executive. This requires reaching definitive conclusions while facilitation is still available. We have found that virtually all executives and senior managers have one representation in common - spreadsheets. Consequently, we learned to stage all analyses in spreadsheets. Data and parameters could then be ported to tools such as that depicted in Figure 5. Results could then be ported back to the spreadsheet. The resulting spreadsheet provides the "minutes" of all the analyses. This spreadsheet assures a level of comfort for participants. Everything is there in a familiar format. The next step is to create a presentation, for instance in PowerPoint, as a report of the session(s). This enables participants to brief their stakeholders on the nature and results of the session(s), as well as the basis of any action items that emerged. With
the spreadsheet and presentation, participants have all they need to know to proceed.
Online Management Services

The experiences summarized thus far caused us to perceive that we understood managers' information and knowledge needs. To test this hypothesis, we created an online "article service." This involved reading the top management journals and identifying articles, or perhaps just illustrations, that we believed would be of great value to particular executives and senior managers. More specifically, we tracked 30+ management journals each week, month, or quarter and identified material relevant to issues at hand for 60 targeted executives and senior managers. We then sent them a personal email with this information, with an explanation tailored to these needs. An example was,
On p. 117 of the recent Harvard Business Review you will find a table that will help you address, at next week's Board of Directors meeting, the growth-related issue we recently discussed.

Overall, we sent 500 emails to these 60 managers over an 18-month period. The sources of these recommendations are shown in Figure 6. This pilot test was very positively received. Since we had ongoing working relationships with these executives and senior managers, we knew their issues and needs to know very well. We were well aware that it would be very difficult to scale up this concept for people with whom we did not have ongoing working relationships. Nevertheless, we now knew that it was possible to get managers' benefits/costs right for information and knowledge management. This small study provided the basis for founding a management journal, Information Knowledge Systems Management, targeted at the same types of executives and senior managers. This journal has a quarterly hardcopy edition as well as an online edition, IKSMOnline, shown in Figure 7. There are over 500 subscribers as of this writing, a large percentage of whom are receiving free trial subscriptions.
Figure 6. Distribution of Recommendations (article recommendations by journal or magazine)
Figure 7. IKSMOnline
Beyond hosting its namesake journal and two other full journals on related topics, for the first two years we provided additional online services. IKSMOnline also reviewed articles in 30+ top management journals and emailed recommended articles to subscribers based on their personal profiles. There were also recommended websites, book reviews, interviews of thought leaders, a reading room with book recommendations, and selected software tools available. Ongoing evaluation of this journal involved email and telephone interviews and questionnaires. The conversion rate from trial to paid subscriptions was an unimpressive 12%. The most frequent reasons for non-conversion were:

• The information provided did not match my immediate interests (33%)
• I already have too many information resources (25%)
• I have no means to pay for this type of service (25%)
• There are too many free information resources available to justify paying (17%)

It appears that managers' time horizons for value are very short. Also, relevance must be very high and ease of access and use must be excellent. Willingness to pay for this value is uncertain, although such users have repeatedly demonstrated willingness to pay huge sums for consultants who provide similar information and knowledge. Of course, the ease of use of a capable, motivated person sitting across the table from you is very difficult to beat. Consequently, while IKSMOnline continues, we have scaled back the labor-intensive elements.
INFORMATION AND KNOWLEDGE SUPPORT

This exploration of information, knowledge, and decision making in research, design, and management has covered a wide range of issues and questions. Table 3 compares these three domains in terms of a range of characteristics. There are clearly very substantial differences among researchers, designers, and managers.
Characteristics of Domains                                  Research      Design        Management
New, fundamental knowledge must be created                  Inherent      Occasional    Seldom
New knowledge must reference past knowledge                 Inherent      Seldom        Seldom
New forms of representation must be formulated              Common        Occasional    Seldom
Existing forms of representation must be populated
  with information                                          Occasional    Common        Common
Formal sources of information must be considered            Common        Occasional    Seldom
Manipulation of representation constitutes the
  overall task                                              Inherent      Occasional    Seldom
Optimal answer is the overriding goal                       Common        Common        Seldom
Satisfactory answer is the overriding goal                  Occasional    Common        Common
Results provide sufficient argument                         Common        Occasional    Seldom
Results must be "sold" to a wide range of stakeholders      Seldom        Occasional    Common
Non-technical organizational considerations have a
  major impact                                              Seldom        Occasional    Common
Personal commitment to implications of results must
  be argued                                                 Seldom        Occasional    Common

Table 3. Comparison of Three Domains
On the other hand, there are underlying phenomena in common. Human information and knowledge seeking, in all three domains, is affected by the nature of questions and associated uncertainties. The extent and structure of information and knowledge affect how people search and the success of their searches. Behaviors are also affected by external and internal drivers associated with markets, technologies, organizations, and so on. The group-oriented nature of most organizational processes also affects needs to know. Information and knowledge seeking is, in general, made difficult by humans' poor abilities to specify value. This lack of ability is greatly compensated for by people's excellent abilities to recognize value. Incompatible representations across domains and disciplines also cause problems. Lack of time and the difficulty of convening groups are also problematic. Finally, the primacy of immediate and mandatory requirements often preempts access and use of valuable information and knowledge. The value of information and knowledge appears to be conceptually similar in all three domains. As shown in Figure 8, usefulness, usability, and urgency are the primary dimensions of value. These dimensions can be defined as follows:

• Usefulness: Extent to which information and knowledge help users to pursue their intentions
• Usability: Extent to which information and knowledge are easily accessed, digested, and applied
• Urgency: Extent to which information and knowledge help users to pursue near-term plans

The value space in Figure 8 has important implications for supporting users with high-value information and knowledge. Choices among information and knowledge sources provided should be tailored to users' intentions. Forms of information and knowledge provided should be tailored to users' expertise and preferences. Choices among useful information and knowledge sources should be tailored to users' near-term plans. Mechanisms are needed to enable easy assessment of users' intentions, expertise, preferences, and plans. Building upon this notion of value, as well as the many research results summarized here, the basic principles for supporting information and
knowledge seeking and use become fairly clear. Tailoring support to users’ intentions, expertise, preferences, and plans can enhance seeking and use. Aiding should be provided to exploit the structure of information and knowledge. Aiding should also support generalization of specific instances of value recognized by users. Finally, aiding should facilitate transformations of terminology, representations, and so on across domains and disciplines. Beyond these basic principles, there are implications for functionality of information and knowledge support systems. Support systems should enhance decision processes and not be premised on the notion that seeking and use of information and knowledge are ends in themselves. Support should be provided for tracking and capturing information and knowledge generated throughout decision making processes. Support systems should facilitate collaboration - across space and time - of multi-disciplinary teams.
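To make this value space concrete, here is a minimal sketch of how it might be operationalized; the candidate items, scores, and multiplicative combination rule are illustrative assumptions, not something drawn from the studies described here.

```python
# Illustrative sketch: scoring candidate information items in the
# usefulness/usability/urgency value space of Figure 8. All items,
# scores, and the combination rule are assumptions.

def value(usefulness, usability, urgency):
    """Combine the three dimensions, each scaled 0 to 1.

    A multiplicative rule reflects the idea that value requires all
    three: an item that is useless, unusable, or untimely contributes
    little regardless of its other scores.
    """
    return usefulness * usability * urgency

candidates = {
    "article matched to current plans":         (0.9, 0.6, 0.9),
    "comprehensive but dense technical report": (0.8, 0.3, 0.4),
    "well-designed summary of a stale topic":   (0.4, 0.9, 0.2),
}

# Rank candidates so the highest-value items are offered first.
for name, dims in sorted(candidates.items(), key=lambda kv: -value(*kv[1])):
    print(f"{value(*dims):.2f}  {name}")
```

A support system built on such a rule would, as argued above, need easy ways to assess users' intentions, expertise, and plans in order to set the three scores at all credibly.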
Figure 8. Information and Knowledge Value Space (information and knowledge are valuable to the extent that they are useful, usable, and urgently needed)
Finally, to assure indisputable value, support systems should digest information captured - in the context of processes and teams - and provide insights that would not otherwise be possible. This could range from consistency checks of user-created models, to generalization of specific examples provided by users, to generation of new alternatives by piecing together numerous relationships articulated by users. I have experienced all of these types of computer-initiated "surprises" and seen the very positive effects such surprises have on teams. The desired enhancements and support functionality outlined above are a "tall order." There is, however, one more requirement. All of the above has to be provided at near-zero cost/benefit. People increasingly expect information, knowledge, and associated support for minimal, if any, cost. They expect support to be on target, timely, and free. While they have never experienced this nirvana, their expectations are very high. In light of the rich set of results presented here, I expect that on target and timely are quite possible; free may be much more difficult. Of course, there are lots of ways to pay for things. So, I am sure that this nut will also be cracked. Another issue has emerged more recently. People have come to realize that the information systems intended to yield great efficiencies can become traps when the business environment changes. Now, they want both efficiency and agility - the ability to respond to changes. What is really needed is agile decision making rather than just agile information systems (Rouse, 2006). This suggests that there is a central tradeoff between optimizing performance within the current business model and being agile in responding to needs for new business models.
CONCLUSIONS

Much of information and knowledge management involves assuring that people get what they have to know - the key variables for flying the airplane, fighting the war, routing the trucks, running the business, and so on. There are, of course, vast amounts of information and knowledge that few if any people have to know. An increasing portion of this vastness seems to be migrating to websites, enabling many thousands of hits for seemingly focused search questions. Beyond information and knowledge that we have to know, there are other things that people - for instance, researchers, designers, and managers - perceive that they need to know. They need this information and knowledge to inform their decision making with regard to research
studies, design alternatives, investment opportunities, and an endless range of large and small choices. This chapter has explored the underlying factors that affect needs to know. This exploration provided insights into how to support information and knowledge seeking and use. Understanding such as articulated here is critical to providing people what they need to know and, quite possibly, to creating viable products and services for meeting these needs. This chapter can also be viewed as a case study of human-centered design. Over 20+ years, we studied humans' abilities, limitations, and preferences relative to discretionary seeking of information and knowledge. The understanding we gained enabled creating products and services that we delivered to the marketplace. Some were successful and others were disappointing in terms of revenues and profits. Chapters 8-10 discuss some of these successes and disappointments.
REFERENCES

Bush, V. (1945). As we may think. Atlantic, 176(1), 101-108.

Caldwell, F., & Harris, K. (2002). The 2002 knowledge management hype cycle. Gartner Research Note, SPA-15-0651.

Cody, W.J., & Rouse, W.B. (1989). A test of criteria used to select human performance models. In G.R. McMillan, D. Beevis, E. Salas, M.H. Strub, R. Sutton, & L. vanBreda (Eds.), Applications of human performance models to system design (pp. 511-531). New York: Plenum Press.

Cody, W.J., Rouse, W.B., & Boff, K.R. (1995). Designers' associates: Intelligent support for information access and utilization in design. In W.B. Rouse (Ed.), Human/Technology Interaction in Complex Systems (Vol. 7, pp. 173-260). Greenwich, CT: JAI Press.

Licklider, J.C.R. (1965). Libraries of the future. Cambridge, MA: MIT Press.

Morehead, D.R., Pejtersen, A.M., & Rouse, W.B. (1984). The value of information and computer-aided information seeking. Information Processing and Management, 20(5-6), 583-601.

Morehead, D.R., & Rouse, W.B. (1982). Models of human behavior in information seeking tasks. Information Processing and Management, 18(4), 193-205.

Morehead, D.R., & Rouse, W.B. (1983). Human-computer interaction in information seeking tasks. Information Processing and Management, 19(4), 243-253.

Morehead, D.R., & Rouse, W.B. (1985a). Online assessment of the value of information for searchers of a bibliographic data base. Information Processing and Management, 21(2), 83-101.

Morehead, D.R., & Rouse, W.B. (1985b). Computer-aided searching of bibliographic data bases: Online estimation of the value of information. Information Processing and Management, 21(5), 387-399.

Rouse, W.B. (1985). On better mousetraps and basic research: Getting the applied world to the laboratory door. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(1), 2-8.

Rouse, W.B. (1986a). On the value of information in system design: A framework for understanding and aiding designers. Information Processing and Management, 22(2), 217-228.

Rouse, W.B. (1986b). A note on the nature of creativity in engineering: Implications for supporting system design. Information Processing and Management, 22(4), 279-285.

Rouse, W.B. (1987a). Designers, decision making, and decision support. In W.B. Rouse & K.R. Boff (Eds.), System design: Behavioral perspectives on designers, tools, and organizations (pp. 275-283). New York: Elsevier.

Rouse, W.B. (1987b). On meaningful menus for measurement: Disentangling evaluative issues in system design. Information Processing and Management, 23(6), 593-604.

Rouse, W.B. (1991). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.

Rouse, W.B. (1992). Strategies for innovation: Creating successful products, systems, and organizations. New York: Wiley.

Rouse, W.B. (1993). Catalysts for change: Concepts and principles for enabling innovation. New York: Wiley.

Rouse, W.B. (1994a). Best laid plans. New York: Prentice-Hall.

Rouse, W.B. (1994b). Human-centered design of information systems. In J. Wesley-Tanaskovic, J. Tocatlian, & K.H. Roberts (Eds.), Expanding access to science and technology: The role of information technology (pp. 214-223). Tokyo: United Nations University Press.

Rouse, W.B. (1996). Start where you are: Matching your strategy to your marketplace. San Francisco, CA: Jossey-Bass.

Rouse, W.B. (1998a). Don't jump to solutions: Thirteen delusions that undermine strategic thinking. San Francisco, CA: Jossey-Bass.

Rouse, W.B. (1998b). Computer support of collaborative planning. Journal of the American Society for Information Science, 49(9), 832-839.

Rouse, W.B. (2001). Essential challenges of strategic management. New York: Wiley.

Rouse, W.B. (2002). Need to know: Information, knowledge and decision making. IEEE Transactions on Systems, Man, and Cybernetics - Part C, 32(4), 282-292.

Rouse, W.B. (2006). Agile information systems for agile decision making. In K.C. Desouza (Ed.), Agile Information Systems. New York: Butterworth-Heinemann.

Rouse, W.B., & Boff, K.R. (Eds.). (1987). System design: Behavioral perspectives on designers, tools, and organizations. New York: Elsevier.

Rouse, W.B., & Boff, K.R. (1998). Packaging human factors for designers. Ergonomics in Design, 6(1), 11-17.

Rouse, W.B., & Cody, W.J. (1988). On the design of man-machine systems: Principles, practices, and prospects. Automatica, 24(2), 227-238.

Rouse, W.B., & Cody, W.J. (1989a). A theory-based approach to supporting design decision making and problem solving. Information and Decision Technologies, 15, 291-306.

Rouse, W.B., & Cody, W.J. (1989b). Information systems for design support: An approach for establishing functional requirements. Information and Decision Technologies, 15, 281-289.

Rouse, W.B., & Cody, W.J. (1989c). Designers' criteria for choosing human performance models. In G.R. McMillan, D. Beevis, E. Salas, M.H. Strub, R. Sutton, & L. vanBreda (Eds.), Applications of human performance models to system design (pp. 7-14). New York: Plenum Press.

Rouse, W.B., Cody, W.J., & Boff, K.R. (1991). The human factors of system design: Understanding and enhancing the role of human factors engineering. International Journal of Human Factors in Manufacturing, 1(1), 87-104.

Rouse, W.B., Cody, W.J., & Frey, P.R. (1992). Lessons learned in developing human-machine system design tools. Information and Decision Technologies, 18, 301-308.

Rouse, W.B., Cody, W.J., Boff, K.R., & Frey, P.R. (1990). Information systems for supporting design of complex human-machine systems. In C.T. Leondes (Ed.), Control and dynamic systems: Advances in aeronautical systems (pp. 41-100). Orlando, FL: Academic Press.

Rouse, W.B., Howard, C.W., Carns, W.E., & Prendergast, E.J. (2000). Technology investment advisor: An options-based approach to technology strategy. Information Knowledge Systems Management, 2(1), 63-81.

Rouse, W.B., & Rouse, S.H. (1984). Human information seeking and design of information systems. Information Processing and Management, 20(1-2), 129-138. (Reprinted in W.A. Katz, Ed., 1986, Reference and Information Services. Metuchen, NJ: Scarecrow Press.)

Rouse, W.B., Rouse, S.H., & Morehead, D.R. (1982). Human information seeking: Online searching of bibliographic citation networks. Information Processing and Management, 18(3), 141-149.

Rouse, W.B., & Sage, A.P. (1999). Information technology and knowledge management. In A.P. Sage & W.B. Rouse (Eds.), Handbook of Systems Engineering and Management. New York: Wiley.

Sage, A.P., & Rouse, W.B. (1999a). Information systems frontiers in knowledge management. Information Systems Frontiers, 1(3), 205-219.

Sage, A.P., & Rouse, W.B. (1999b). Information technology. In J.G. Webster (Ed.), Encyclopedia of Electrical and Electronics Engineering. New York: Wiley.
Chapter 8
PRODUCTS, SYSTEMS, AND SERVICES
INTRODUCTION

In earlier chapters, I have indicated several times that solely focusing on the end users of a system - whether they are pilots, mechanics, or consumers - is not sufficient for success. It is necessary to pay attention to end users, but not sufficient. Pilots, for example, may use what we develop, but they seldom manufacture or purchase these products, systems, or services. This insight fully crystallized for me during a presentation I gave at a NASA meeting organized by Kathy Abbott in Virginia Beach in the late 1980s. This meeting was laced with exhortations for "pilot-centered design." Advocating a much broader view, I said, "Pilots may fly 'em, but they don't build 'em or buy 'em." To get an innovation on a commercial airplane, you need the users, manufacturers, and owners to want your invention. In fact, it is even more complicated with maintainers and regulatory agencies included. The essential phenomenon in this chapter is "stakeholders," those who have an interest and influence in the success of a product, system, or service. This insight led us to conclude that we needed to design for all stakeholders, not just end users. Thus, I articulated and continue to refine the need to move beyond user-centered design to human-centered design. I hasten to note that my broader view was, by no means, universally embraced at Virginia Beach or subsequently. As indicated in Chapter 1, there have been many books published on human-centered design where the focus is user-centered design (Billings, 1996; Booher, 2003; Card, Moran & Newell, 1983; Norman & Draper, 1986). The user is a very important stakeholder in design, often the primary stakeholder. However, in many domains, one needs to delight more than just users to succeed. As I elaborated in Chapter 1, human-centered design is defined as a process of assuring that the concerns, values, and perceptions of all
stakeholders in a design effort are considered and balanced. Further, human-centered design emphasizes three overarching objectives:

• Enhancing human abilities
• Overcoming human limitations
• Fostering human acceptance
These three objectives need to be pursued relative to the stakeholders whose roles are to use, operate, maintain, design, buy, regulate, recommend, and own the product, system, or service of interest. Finally, the target levels for these objectives need to be set relative to alternative means available to these stakeholders for meeting their needs. This chapter elaborates human-centered design in terms of designing and deploying products, systems, and services that delight primary stakeholders and have the full support of secondary stakeholders. By "delight" I mean moving beyond the notion of meeting requirements and, instead, looking for the "sweet spot" where stakeholders are enthusiasts for the offering, perhaps even surprised by how well it meets their needs, regardless of whether they are the user, operator, maintainer, and so on. This chapter discusses a human-centered approach to new product development (NPD), where "product" denotes products, systems, and/or services. This discussion begins with consideration of market stakeholders and the attributes whereby they define value. I then consider how alternative offerings can be represented in terms of these attributes and the functions needed to provide desired levels of these attributes. These two representations are termed market and product models. These models are instantiated using multi-attribute utility theory and Quality Function Deployment. I then discuss the Product Planning Advisor (PPA), a software tool that enables creation and manipulation of these models. PPA is usually employed by NPD teams for the conceptual design phase of new offerings. A variety of case stories of PPA use is reviewed. The facilitation of teams using PPA is also considered.

MARKET AND PRODUCT MODELS

Human-centered planning of new products, systems, and services begins by developing and manipulating market and product models of the structural
MARKET AND PRODUCT MODELS Human-centered planning of new products, systems, and services begins by developing and manipulating market and product models of the structural
form shown in Figure 1. As just noted, the purpose of these models is identification of products, systems, and services that delight primary stakeholders and assure the support of secondary stakeholders, while also assuring competitive advantages for those who invest in creating these products and systems. The portion of this figure labeled "What the Market Wants" relates to characterizations of stakeholders and their issues and concerns in terms of the Design for Success framework of viability, acceptability, and validity (Rouse, 1991a) and its many extensions (Rouse, 1991b, 1993, 1994, 1999, 2001, 2003). The cells of this matrix include stakeholders' utility functions that map attributes to relative preferences. Each cell relates to one stakeholder and one attribute. The portion labeled "How We and Others Will Provide It" includes characterizations of product, system, and service functionality as it relates to attributes of interest to stakeholders. The elements of the functions vs. attributes matrix include ratings of the strength of relationships - either positive or negative - between functions and attributes. This representation is similar to Quality Function Deployment, albeit much simplified. This enables providing users of PPA with product improvement advice in terms of the functional improvements most likely to support achieving relative competitive advantage.
Figure 1. Structure of Market and Product Models
Solutions are "composed" of collections of functions, as indicated in the lower portion of the matrix. This usually includes solutions provided - or being entertained - by users of PPA and often includes solutions provided or potentially provided by competitors. This enables competitive analyses and positioning of potential market offerings relative to competitors, possibly with different competitors in different market segments. The right-most matrix represents solutions vs. attributes. The cells of this matrix include the attribute values for each solution. These values may be based on empirical measurements, market requirements, or stakeholder perceptions. In the latter case, multiple attributes may be needed to characterize differences among different stakeholders' perceptions of particular variables. Summarizing, the overall characterization in the above market and product representation involves an object-oriented model - in terms of the underlying software - of the multi-stakeholder, multi-attribute nature of how multi-function solutions compete for stakeholders' perceptions of value and, hopefully, influence their subsequent purchase decisions. PPA helps to both create such models and manipulate these models to determine the most competitive market offerings.
MULTI-ATTRIBUTE MODELS

The models in Figure 1 need to be instantiated in two ways. First, the nature of value must be defined. We employ multi-attribute utility theory to model perceptions of value. Second, we need to define the relationships of the features and functions of the product, system, or service to perceived value. Quality Function Deployment is used, in part, to represent these relationships. I say "in part" because the nature of an interactive product planning tool is such that the perceptions of the NPD team can never be fully captured by QFD or any other model. Interactive team meetings inevitably bring out new perceptions and ideas.
Utility Theory Models

Multi-attribute utility theory provides a broadly applicable way to represent how people perceive value (Keeney & Raiffa, 1976; Hammond, Keeney & Raiffa, 1998). Of particular importance, multi-attribute utility models provide a means for dealing with situations involving mixtures of economic and non-economic attributes. To illustrate, consider how to represent stakeholders' perceptions of value. Let attribute i at time j be denoted by xij, i = 1, 2, ..., L and j = 0, 1, ..., M. The values of these attributes are transformed to common utility scales using u(xij). These utility functions serve as inputs to the overall utility calculation at time j, with attribute weights wi, as shown below:

U(xj) = Σi wi u(xij)

which provides the basis for an overall calculation across time, with period weights Wj, using

U = Σj Wj U(xj)
Note that the time value of attributes can be included in these equations by dealing with the time value of attributes explicitly and separately from uncertainty. An alternative approach involves assessing utility functions for discounted attributes. With this approach, streams of attributes are collapsed across time, for instance in terms of Net Present Value, before the values are transformed to utility scales. The validity of this simpler approach depends on the extent to which people's preferences for discounted attributes reflect their true preferences. The mappings from xij to u(xij) enable dealing with the subjectivity of preferences for non-economic returns, as well as associated risks. In other words, utility theory enables one to quantify and compare things that are often perceived as difficult to objectify. Mappings from attributes to utility functions can take many forms. Typical forms are shown in Figure 2. These forms can be interpreted as follows:

• Figure 2a: More is always better
• Figure 2b: Less than enough is not enough
• Figure 2c: More than a little is enough
• Figure 2d: Less is always better
• Figure 2e: Enough is enough
• Figure 2f: More than a little is too much
Figure 2. Typical Utility Functions (utility plotted against attribute value; panels include (a) more is better, (c) diminishing returns, and (e) accelerating decline)
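To make these forms concrete, here is a minimal sketch of how attribute values might be mapped through utility functions of the kinds in Figure 2 and combined additively, as in the equations above. The functional forms, parameters, weights, and the vehicle example are invented for illustration; this is not PPA's actual implementation.

```python
import math

# Hypothetical utility-function forms echoing Figure 2; all parameters
# and weights below are illustrative assumptions.

def more_is_better(x, lo, hi):
    # Figure 2a: utility rises linearly from 0 at lo to 1 at hi.
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def less_is_better(x, lo, hi):
    # Figure 2d: utility falls linearly from 1 at lo to 0 at hi.
    return 1.0 - more_is_better(x, lo, hi)

def diminishing_returns(x, lo, hi):
    # Figure 2c: "more than a little is enough" -- early gains dominate.
    return 1.0 - math.exp(-4.0 * more_is_better(x, lo, hi))

def overall_utility(xs, weights, utils):
    # U(xj) = sum over i of wi * u(xij), with the wi summing to one.
    return sum(w * u(x) for x, w, u in zip(xs, weights, utils))

# Example: scoring a vehicle concept on three attributes.
utils = [
    lambda x: more_is_better(x, 120, 300),      # horsepower
    lambda x: less_is_better(x, 20000, 60000),  # price ($)
    lambda x: diminishing_returns(x, 15, 45),   # fuel economy (mpg)
]
weights = [0.40, 0.35, 0.25]

print(f"U = {overall_utility([220, 32000, 28], weights, utils):.3f}")
```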
We have found that ten forms - the six in Figure 2 plus positive and negative thresholds, and positive and negative ranges - can capture a large percentage of people's expressions of preferences. Not surprisingly, models based on utility theory reflect the subjective ways in which humans perceive rewards and risks. Once one admits the subjective, one needs to address the issue of whose perceptions are considered. Most decisions involve multiple stakeholders. It is, therefore, common for multiple stakeholders to influence a decision. Consequently, the multi-attribute models need to take into account multiple sets of preferences. The result is a group utility model such as
U = U[U1(x), U2(x), ..., UN(x)]
where N is the number of stakeholders. Formulation of such a model requires that two important issues be resolved. First, mappings from attributes to utilities must enable comparisons across stakeholders. In other words, one has to assume that u = 0.8, for example, implies the same value gained or lost for all
stakeholders, although the mapping from attribute to utility may vary for each stakeholder. Thus, all stakeholders may, for instance, have different needs or desires for safety and, hence, different utility functions. They also may have different time horizons within which they expect benefits. However, once the mapping from attributes to utility is performed and utility metrics are determined, one has to assume that these metrics can be compared quantitatively. The second important issue concerns the relative importance of stakeholders. The above equation implies that the weight attached to each stakeholder's utility can differ. For example, it is often the case that primary stakeholders' preferences receive more weight than the preferences of secondary stakeholders. The difficulty of this issue is obvious. Who decides? Is there a super stakeholder, for instance? In many cases, shareholders can be viewed as these super stakeholders. Fortunately, we have found that NPD teams have little difficulty stating the relative importance of stakeholders, perhaps leavened by a bit of sensitivity analysis. Beyond these two more-theoretical issues, there can be substantial practical issues associated with determining the functional forms of u(xij), and the parameters within these functional relationships. This also is true for the higher-level forms represented by the above equations. As the number of stakeholders, attributes, and time periods increases, these practical assessment problems can be quite daunting. Nevertheless, the overall multi-attribute approach provides a powerful framework for representing stakeholders' perceptions of value. Further, as will be seen below, PPA can help manage these difficulties.
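A small sketch may help make the group model and the stakeholder-weighting issue tangible. The stakeholders, utilities, and weights below are invented, and the linear weighting is only one possible form of U[U1(x), ..., UN(x)].

```python
# Sketch of a weighted group utility and a simple sensitivity check on
# stakeholder importance. All names and numbers are invented.

stakeholder_utilities = {"customers": 0.72, "users": 0.64, "regulators": 0.55}
stakeholder_weights   = {"customers": 0.5,  "users": 0.3,  "regulators": 0.2}

def group_utility(utilities, weights):
    # Linear weighting: U = sum over stakeholders of weight * utility.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to one"
    return sum(weights[s] * utilities[s] for s in utilities)

print(f"baseline: {group_utility(stakeholder_utilities, stakeholder_weights):.3f}")

# Sensitivity analysis of the kind mentioned above: bump each
# stakeholder's weight, renormalize, and see how much the answer
# depends on assumed relative importance.
for s in stakeholder_weights:
    bumped = dict(stakeholder_weights)
    bumped[s] += 0.1
    total = sum(bumped.values())
    bumped = {k: v / total for k, v in bumped.items()}
    print(f"+0.1 on {s:10s} -> {group_utility(stakeholder_utilities, bumped):.3f}")
```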
Quality Function Deployment

Beyond representing value, one needs to represent how to affect value. Quality Function Deployment (QFD), originally developed by Mitsubishi, provides a means for representing relationships between the desires of stakeholders and the means open to variation for achieving these desires (Hauser & Clausing, 1988). QFD involves a series of matrices. The rows of each matrix denote things to be affected, while the columns denote things that can be manipulated to cause the desired effects. For example, the rows of the initial matrix may be stakeholders' desires and the columns may be possible product functionality. For the next matrix, the rows would be product functionality and the columns might be
alternative technologies. For the third matrix, the rows would become technologies and the columns might be alternative manufacturing processes. The series of matrices usually continues until one has depicted the full causal chain from market desires to the variables that one intends to manipulate in order to satisfy desires. Above each matrix is also included a representation of the interactions of columns, for example, interactions of functions. These interactions are depicted in a triangular matrix, rotated to provide a "roof" to the matrix below. The resulting visual appearance is why QFD is often termed the "house of quality." As might be imagined, compiling the information necessary to complete the entries in a full series of such houses can be overwhelming. Nevertheless, the kernel of the QFD concept can be quite useful without having to construct whole villages of houses. QFD is applied within the market and product models of Figure 1 in terms of the relationships of functions to attributes. A strong positive relationship means that enhancing the function leads to preferred changes of the attribute. A strong negative relationship indicates the opposite. Thus, for example, enhancing engine horsepower positively affects a vehicle's acceleration characteristics but negatively affects the vehicle's fuel economy. Within the Product Planning Advisor, a QFD model is used to capture knowledge of the relationships between functions and attributes for the purpose of subsequently providing advice on how best to improve the product. Such advice is provided in the context of competing with other products, perhaps provided by a competitor or by yourself. In some cases, the status quo - do not buy any of the alternatives - can be a compelling competitor.
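The sketch below illustrates the functions-vs-attributes kernel of such a QFD model, using the engine example just mentioned. The signed strengths and the simple linear propagation rule are illustrative assumptions, not PPA's actual computation.

```python
# Sketch of the functions-vs-attributes portion of a QFD model.
# Signed strengths indicate whether enhancing a function helps (+)
# or hurts (-) an attribute; all numbers are invented.

attributes = ["acceleration", "fuel economy", "cost"]

functions = {
    #                accel  fuel  cost
    "engine power": [  +9,   -3,   -3],  # more horsepower: faster, thirstier, pricier
    "aerodynamics": [  +1,   +3,   -1],
}

def attribute_effects(enhancements):
    """Estimate attribute changes from proposed functional enhancements.

    enhancements maps function name -> fractional improvement (e.g., 0.1).
    """
    effects = [0.0] * len(attributes)
    for fn, delta in enhancements.items():
        for i, strength in enumerate(functions[fn]):
            effects[i] += strength * delta
    return dict(zip(attributes, effects))

print(attribute_effects({"engine power": 0.1}))
# acceleration improves while fuel economy and cost are penalized
```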
Summary

With the models and their instantiations defined, we can now discuss how these formalisms are embodied in an interactive planning tool. It is essential to emphasize the relative role of these formalisms. Their manipulation can be invaluable for informing new product planning. However, even more valuable is the nature of the team discussions that these manipulations engender. Beyond creating market and product models, the NPD team also creates shared mental models of what matters, how to affect it, and potential risks relative to competitors' likely plans.
PRODUCT PLANNING ADVISOR

The thinking elaborated thus far in this chapter matured in the writing of Design for Success (Rouse, 1991a) and its subsequent use in a large number of workshops focused on planning new products, systems, and services. Frequently during these workshops, participants would say something like, "I really like your book, but I'd rather have a software tool that embodied the book. I would like a tool such that when I am using it, I know that I am following the principles of human-centered design." We discussed this possibility with many customers. At some point, one of them said, "We'll buy a corporate license to this software tool, even though it does not yet exist. We'll take the risk if you will." As you might expect, this customer's commitment pushed us (Search Technology) to commit to designing and developing a human-centered design tool that became the Product Planning Advisor. Following our own advice, we talked to the many stakeholders in this tool - users, customers, IT support folks, and our own staff members. The result was a long list of suggested features and functions. We took the union of all these suggestion lists and proceeded. In other words, we included any function and feature that anybody had suggested. This was a big mistake. The resulting tool was extremely cumbersome. You could do almost anything if you could figure out how to do it. Several customers said, "You gave us exactly what we asked for and, now that we have seen it and tried it, we know we were wrong. This tool is overwhelming!" We went back to the drawing board. We then took the intersection of all the suggestion lists. Thus, we only included features and functions that everyone had requested. The result was a much simpler tool - PPA, Version 1.0. We sold several thousand copies of this tool and its successor (PPA, Version 2.0), mainly through multi-user licenses sold with our product planning workshops. As an aside, it is interesting to assess the number of users associated with a software product. We were certain that we sold many more copies than we had active users. Many companies bought a 25-user license, or more. However, the number of active users seemed much smaller. So, my guess is that we had several hundred avid users. Nevertheless, this was enough to provide much feedback and many suggestions. This drove both upgrades of PPA and design and development of new tools. The Product Planning Advisor, shown in Figure 3, is intended to support formulation and manipulation of the market and product models
discussed earlier (Rouse & Howard, 1995; ESS, 2000). Use of this tool begins with defining goals, which often include enterprise, market, and product goals such as revenue, market share, profit, and competitive position. The second through fourth steps, as well as associated substeps, involve defining the rows and columns of the matrices depicted in the underlying market and product models summarized in Figure 1. The fifth step, Assess Solutions, enables using the underlying models to calculate the expected utility of the alternative solutions, for each or all solutions, attributes, and stakeholders. A "How to Improve?" feature within this step performs sensitivity analyses relative to each attribute, rank orders attributes by impact on overall utility, and provides guidance on the functions needing attention to achieve improvements. A "What If?" feature enables assessing the impacts of particular combinations of attribute values, stakeholder preferences, and relative stakeholder importance.
Figure 3. Product Planning Advisor
One might expect that users of PPA would first define goals, then stakeholders, and then attributes and measures. They would next devise the functions that provide the desired attributes and cluster these functions into solutions, which they would then assess and refine. This linear approach to product planning certainly can work. However, more often people jump back and forth between steps capturing the group’s ideas and trying pieces of solution concepts. In the process, they discover what the market really wants and how best to meet these needs.
USING THE PRODUCT PLANNING ADVISOR

I have facilitated hundreds of planning sessions with PPA, often focused on new products, systems, and services, but also frequently addressing R&D portfolios as well as mergers and acquisitions. With highly technically oriented teams, I frequently find that one or more team members want to begin by gathering all the information needed to complete the models in Figure 1. Then, they typically argue, we can perform a wide range of sensitivity and "What if...?" analyses. Usually, but not always, I can persuade them that this is the wrong approach. By attempting to gather all the needed information at once, I argue, they will have missed the highly valuable process of discovering what really matters to the strategic decision being addressed. I have found the following guidance helps with this process.
Usage Guidelines

Goals. Begin by discussing your goals in the marketplace rather than what you currently perceive to be the preferred offering or solution. Goals can include targeted revenues or market share, enhanced brand image, a "killer" price point, or perhaps a new level of service to a constituency for a public sector enterprise. In contrast, starting with the solution can lead to dysfunctional results, as a later case story illustrates.

Bootstrap. Rather than trying to agree on all stakeholders and attributes at first, begin with a very simplified version of the problem. For instance, start with just two stakeholders (customers and users) and two attributes (performance and price). Take this simple model through steps 2-6 of the process. See what insights you gain, keeping in mind that the model is far
too simple to be useful at this point. Now, think about this simple model's biggest shortcoming. Whatever it is, expand the model to include it and repeat the steps. As teams "spiral" to more and more sophisticated models, it can be amazing to see the insights they gain. This process helps them to develop a shared mental model (see Chapter 2) of what matters, what does not matter, and the nature of the most important uncertainties and risks. Two examples help to illustrate how important this is. After a two-day session planning for a new graphics chip, a young electrical engineer approached me. The team of design and manufacturing engineers, marketing and finance folks, and product line management had just agreed on the conceptual design for the new chip. He said, "This was an amazing experience. I now know how what I do trades off with what manufacturing, marketing, and finance do. Beyond that, I now understand their metrics - terminology I had never heard before." During the first day of another two-day session planning a new aircraft engine, we discovered that engineering and marketing did not agree on the units of measure of several key engine attributes. I was rather surprised because many of these people had worked together for more than 20 years. I expressed this amazement to the team. They responded, "We've never had these types of discussions before. We never tried to develop a shared model of the market and our offerings." Clearly, PPA is not simply a means of manipulating a few matrices that you've assigned some assistant to fill in. The group process of creating and manipulating these models not only yields better and more competitive market offerings. This process also results in a better team.

Studies. The lower left of Figure 3 shows an icon labeled "Studies." This aspect of PPA supports the capturing of information necessary to building the objects and relationships in the models summarized in Figure 1. Studies can be design meetings, customer meetings, questionnaires, interviews, and so on. The results of studies are linked to objects and relationships, for example, stakeholders, attributes, and stakeholders' utility functions for attributes. Studies and their results are often done outside group meetings and then linked to the models to make sure that key information is available for meetings. The use of studies provides a basis for explaining how the models were developed and parameterized. At any point in time, one can ask about the source of an object or relationship. This is invaluable when not all team members are available for all meetings. We have also found that this helps
alleviate a major problem in multi-generation products and systems. When designing a new generation, people often ask how and why earlier generations were designed, for instance, "Do we really have to keep this function?" When such questions arise, product planning teams usually try to track down members of the teams that designed the earlier generations. This is often less than fully successful. In contrast, we have seen teams using PPA simply query the source of a decision and be linked back to the meetings or documents, for example, which underpin the decision in question.

Training. We usually provide planning teams roughly one hour of training before diving into real work. This training consists of reviewing a product planning exercise specifically designed to highlight the uses and strengths of PPA, as well as illustrate how counterintuitive ideas and insights can emerge. The rest of a typical two-day planning session focuses on using PPA to address real work. I try to motivate participants unfamiliar with PPA by telling them that they should reserve judgment on the value of PPA until after the two days. They may decide that the approach embodied by PPA is not for them. However, regardless of their conclusions about PPA, they are going to accomplish a lot of work on the problems they brought with them. They will leave knowing how PPA can help them - and, they will leave with work done that they had to do anyway. The result is just-in-time training. People may have a question they want to address, for instance, "What if we included both functions X and Y? Would that trump the competition?" To address these questions they have to learn more about what PPA can do. In the process, they get both answers to their questions and a deeper understanding of PPA. This follows the training principles discussed in Chapter 5.

Functions. The "Define Functionality" step of PPA can get fairly complicated. In fact, functional thinking can be difficult compared to thinking in terms of features. If people spend too much time on this step during their first experience with PPA, they can get bogged down and frustrated. It may seem like they will never get to the "fun" step - Assess Solutions. Our experience is that it is best to skip this step the first time through. Instead, define solutions directly, without any functional decomposition. The power of the Define Solutions step includes "How to Improve" features driven by extensive sensitivity and "What if...?" analysis
capabilities. These capabilities provide clear illustrations of the benefits of using PPA. Once people are convinced of the payoff, they are more willing to address the more difficult issues.

Filling Blanks. PPA includes many lists (i.e., goals, stakeholders, attributes, functions, etc.) and many matrices (i.e., stakeholders vs. attributes, functions vs. attributes, etc.). It is easy to get carried away trying to fill in the blanks of these lists and matrices. I know this has happened when someone asks, "What does it want us to do now?" The answer to this question is, "PPA does not want you to do anything. What do you want to do?" This tool is intended to serve users, not vice versa. This is just like asking what nail the hammer wants you to hit. Users have intentions, not tools. Despite the obvious nature of this observation, we regularly have to remind people of this.
Problem Representation PPA is a modeling tool that helps users create and manipulate an objectoriented model with the structure defined by Figure 1. In essence, users of PPA are representing and exploring a problem space defined in terms of what the market wants, as well as how they and others will provide it. The problem space can be thought of as including both a preference space and a physical space. The preference space is a multi-stakeholder, multi-attribute model of what the market values. The term market refers to the preferences of all stakeholders, not just the “customers.” The physical space is represented by a QFD model of the relationship of functions to attributes. This is a rather minimalist view of the physical nature of a product, system, or service. This view does not, for example, deal with the question of whether the physical depiction represents a feasible offering. The question of feasibility must be addressed by other domain-specific models. For example, one automobile manufacturer with whom we worked extensively employed a model that assessed whether or not a vehicle concept was feasible. Inputs required were targeted vehicle weight, engine horsepower, acceleration characteristics, fuel consumption, etc., and outputs related to the extent to which such a vehicle could likely be realized. The key point is that PPA does not address important physical characteristics that must be addressed by other means. PPA is focused on
Products, Systems, and Services 243
finding the “sweet spot” in the marketplace, not with the physical engineering needed to create a successful offering to occupy this sweet spot. This engineering is done via different models depending on whether the product is a vehicle, computer, or medical device. As with most modeling activities, it is best to “grow” the problem representation from simpler to more elaborate models. One can then explore solutions iteratively to foster understanding as the process of growing the model larger evolves. This relates to the bootstrapping discussed above. More specifically, however, the idea of growing a model avoids the difficulty of the model becoming a “black box” that no one any longer understands. Some problems are, of course, inherently complex. Such problems can be approached by partitioning them in terms of levels of aggregation or abstraction and representing and solving each partition separately. Thus, for example, the representation of the overall vehicle might include a very simple power train representation, for example, horsepower and fuel economy. Another model might represent the power train in considerable detail, but only include a few vehicle characteristics, for instance, weight and length. The integration of multiple models seldom involves direct integration of the computational representations. Instead, integration often is provided by the product planning team in a meeting where tradeoffs are negotiated as discussed in Chapter 7. We have also found that intermediate spreadsheet models can be used to stage analyses for component models and compile results of these analyses. This was also discussed in Chapter 7.
Interpreting Results The “Define Measures” step in PPA suggests grouping attributes and measures in terms of validity, acceptability and viability. These terms are discussed at length in Design for Success (Rouse, 1991a). Succinctly, validity concerns the extent to which a product, system, or service solves the problem of interest; acceptability relates to the extent to which a problem is solved in a way that fits in with the market’s preferred approaches; and viability addresses the extent to which a solution solves a problem in a way that is worth it in terms of relative costs and benefits. Technology-centered product planning often focuses on issues such as “Does it work?” In contrast, human-centered product planning emphasizes
244 People & Organizations
questions such as “Does it matter and to whom? How much do they matter?” The human-centered approach minimizes the risk of having great technical (valid) solutions that do not fit in and are not worth it. The “Assess Solutions” step results in a tabulation of expected utility for each of the solutions considered, usually the idea users are pursuing, competing ideas, and perhaps the status quo. As an aside, the status quo is often a strong competitor. People already own it, know how to use it, and may be realizing acceptable benefits by using it. Consequently, it may not be easy to get them to change. Below the overall expected utility tabulation, there is another tabulation partitioned by validity, acceptability and viability. By comparing these metrics across solutions, users can see where there are competitive advantages and disadvantages. For example, one might see that your primary advantage (or disadvantage) is performance, price, or usability. Exploring such differences can also help in troubleshooting the underlying models. It is not unusual, especially in the early stage of a planning effort, to discover that one or more results do not make sense. PPA has capabilities to “peel back the onion” to determine the source(s) of particular results. In this way, modeling errors can be quickly remediated. The “How to Improve” capabilities of PPA are quite powerful. A typical question is, “If I could invest in improving any attribute by 10% (of range or value), where would this investment have most competitive impact?” To answer this question, PPA uses the underlying models to assess the sensitivity of expected utilities to improvements in each attribute. With the typical 30-50 attributes, one would not want to do this one at a time manually. The result is a rank-ordered list of attributes that would be the most attractive investments. One can choose any elements of this list and ask how these improvements could best be accomplished. Using the underlying QFD model, PPA determines which product function or functions should be targeted for improvement. One can vary the percent of range or value varied to assess the effects of the typical nonlinearities in the model due to relationships involving instances of the nonlinear utility functions in Figure 2. Thus, for example, one may find that small improvements should focus on usability while large improvements should address the overall business proposition, for instance, by leasing rather than selling the systems. This step in the PPA process also provides several “control panels” where users can manipulate elements of the underlying models. One can
Products, Systems, and Services 245
vary the importance of stakeholders and the importance of attributes to particular stakeholders. You can also temporarily remove stakeholders and/or attributes from the analysis. The central goal in all of these aspects of the “How to Improve” question is to develop a strong sense of what really matters. With 3-5 stakeholders and 30-50 attributes, there may be hundreds of relationships that are potentially important to planning a new product, system, or service. Many of these relationships within the models will be based on the opinions of members of the product planning team rather than market data. This situation usually leads the team to consider the extent to which they should collect market data. By knowing what really matters, that is, what really affects one’s competitive position, the team can usually choose 2-3 variables for which they need more information. They also can determine that there are 100-200 relationships for which additional information is not needed. Planning teams find the ability to make such decisions very appealing.
Typical Uses of PPA

From hundreds of planning sessions working with clients across several industries, we have found a small set of common problems people tend to address using PPA.

One type of problem is goal-driven planning. Teams addressing this problem spend considerable time on the first step of PPA and then, when pursuing the subsequent steps, continually reference the chosen goals. This emphasis is often chosen by teams that have past experiences of losing track of their market goals, in some cases resulting in a highly valid solution that is not acceptable and/or viable.

The most frequent type of problem addressed with PPA is product function/feature tradeoffs. Rather than planning a product with every “bell and whistle” imaginable, these teams want to craft a solution with just the right functions and features needed for competitive advantage. The dominant stakeholders in such analyses tend to be users and customers. Attributes change, of course, with market domain.

Another type of problem is market selection. For example, the issue of interest may be the sequencing of markets for the roll out of a new product, or new product generation. Competitors get significant attention in such analyses. One’s current generation offering also is important. Cannibalization may be more attractive in market segments where
competitors are attacking your market share than in markets where you are the only strong player.

Selection of R&D/technology investments is another common problem. In this case, the stakeholders may be your future product lines. Managers of these product lines may have defined how they want to compete with other players. Technology investments are then judged in terms of abilities to create the advantages sought. Uncertainty is usually a very significant factor in such analyses. The nature of these uncertainties is discussed in Chapter 9.

Finally, PPA has sometimes played a role in selecting merger and acquisition targets. The stakeholders of most interest are typically shareholders and perhaps customers. The attributes, as you might expect, usually emphasize revenues, costs, earnings, share price, and market synergies. I hasten to note that analyses with PPA certainly do not replace the due diligence activities typically associated with mergers and acquisitions.

Occasionally, users have emailed me to tell me how they have used PPA for different types of problems. The most common instances have been selecting a new job and making large consumer purchases. They have reported that stakeholders include themselves, their spouse or partner, their children, and grandparents. These examples provide glimpses into human-centered design of one’s personal life.
Case Stories

The following four examples of how PPA has been used illustrate the ways in which this tool is applied and the types of insights that are gained. In particular, these examples depict tradeoffs across stakeholders and how the impacts of assumptions can be explored. It is important to note that these examples show how product planning teams have reached counter-intuitive conclusions using PPA. However, use of PPA does not, by any means, always result in such conclusions.

Automobile Engine. A team working on new emission control systems decided to evaluate an earlier technology investment using PPA. They compared the chosen approach to four other candidates that had been rejected with the earlier decision. Development and use of the market/product models resulted in the conclusion that the chosen approach was the worst among the five original candidates. This surprising
conclusion led to in-depth exploration of the assumptions built into their PPA models. This exploration resulted in further support for these assumptions. Reviewing these results, the team leader realized that the earlier decision had not fully considered the impact of the alternatives on the manufacturing stakeholder. The earlier choice had been of high utility to customers and other stakeholders, but was very complex to manufacture. As a result of this insight, a new approach was adopted.

Microprocessors. A major competitor in the semiconductor market was planning a new high-end microprocessor. They were very concerned with time to market, worried that their next generation might be late relative to the competition. Their planning team included people from engineering, manufacturing, marketing, and finance. Using PPA, they found that time to market was critical, but it was not clear how it could be significantly decreased. One of the manufacturing participants suggested a design change that, upon analysis, would get them to market a year earlier. The team adopted this suggestion. He was asked, “Why have you never suggested this before?” He responded, “Because you have never invited manufacturing to these types of meetings before.” Product planning with PPA often results in involvement of a richer set of internal stakeholders.

Digital Signal Processor. The NPD team began this effort convinced that they already knew the best function/feature set with which to delight the market. The marketing manager, however, insisted that they test their intuitions using PPA. After developing the market/product models and using them for competitive analyses, the team concluded that assumptions regarding stakeholders’ preferences for three particular attributes, as well as the values of these attributes, were critical to their original intuitions being correct. Attempts to support these assumptions by talking with stakeholders, especially end users and customers, resulted in the conclusion that all three assumptions were unsupportable. The team subsequently pursued a different product concept.

Medical Imaging System. An NPD team had developed an advanced concept for medical imaging that they argued would enable their company to enter a very crowded market, where a couple of brand name companies currently dominated. They used PPA to assess the market advantages of their concept relative to the offerings of the market leaders. Initial results showed a considerably greater market utility for their potential offering. Attention then shifted to the likely reactions of the market leaders to the
introduction of this advanced product. The team’s expectation was that the leaders would have to invest in two years of R&D to catch up with the new technology embodied in their offering. However, using the “How to Improve?” feature for PPA models of the competitors’ offerings resulted in the conclusion that the best strategy for the market leaders was to reduce prices significantly. The team had not anticipated this possibility; someone said, “That’s not fair!” This caused the team to reconsider the firmness of their revenue projections, in terms of both number of units sold and price per unit.

Summary. These four examples serve to illustrate several types of issues in new product development. The first example showed how the concerns of a secondary stakeholder could affect the attractiveness of a solution. The second example illustrated how a planning team gained insights via the discussions and debates that this tool engenders. The third example depicted the impact of unsupportable assumptions regarding the preferences of primary stakeholders. The final example demonstrated how the likely reactions of competitors impact the possible market advantages of a product or system. Taken together, these four examples clearly illustrate how a human-centered orientation helps to avoid creating solutions that some stakeholders may want but other stakeholders will not support or buy.
Conclusion

This section has explored the use of the Product Planning Advisor in some detail. My goal in presenting this level of detail was to portray the reality of pursuing human-centered design. Beyond the overarching philosophy, there are many details to consider and balance. This discussion also served to illustrate how human-centered design affects the conceptualization and development of methods and tools for human-centered design.
FACILITATING PRODUCT PLANNING

I have indicated repeatedly that tools such as PPA, and other tools discussed in later chapters, are typically used by groups such as planning groups and executive teams. Such usage often benefits from facilitation, if possible by an expert in PPA who also has a good understanding of the
domain at hand. In response to customers’ requests, we developed a short course for facilitators. This section summarizes this material.

First of all, it is useful to revisit the results of our study of what managers want from tools, which was discussed in Chapter 7 (Rouse, 1998). This study of 100 planning sessions involving more than 2,000 senior managers and executives led to the conclusion that managers want the following:

• Process: A clear and straightforward process to guide their decisions and discussions, with a clear mandate to depart from this process whenever they choose

• Information: Capture of information compiled, decisions made, and linkages between these inputs and outputs so that they can communicate and justify their decisions, as well as reconstruct decision processes

• Facilitation: Computer-aided facilitation of group processes via management of the nominal decision making process using computer-based tools and large screen displays

• Surprises: Tools that digest the information that they input, see patterns or trends, and then provide advice or guidance that the group perceives they would not have thought of without the tools

Facilitators can play an important role in assuring that users of PPA perceive that they are receiving these benefits. Interestingly, we have found that the better the facilitator, the more people attribute these benefits to the tool. In particular, skilled facilitators are often key to recognizing the surprises that planning teams value. In other words, the pattern recognition abilities of facilitators enable everyone to see the surprising results.
Objections and Responses

We have also found that an important role of facilitators is to overcome inevitable objections to PPA and its underlying process. These objections and recommended responses are summarized in Table 1.
Objection: Approach is too structured
Response: Importance of lack of structure is a myth; creativity should go into the content, not the process

Objection: Approach is too time-consuming
Response: This is simply not true

Objection: Approach is too intense
Response: Yes, and it should be because this leads to Good Plans Quickly

Objection: Tools require too much information
Response: Tools help to assess what information matters

Objection: Information subject to too much uncertainty
Response: Tools help to explore impacts of uncertainties

Objection: Validity of outputs at risk due to garbage in, garbage out
Response: Tools make information needs clear and support exploration of variations

Objection: Tools are not complete
Response: We are continually enhancing our tools and appreciate your comments and suggestions

Objection: Tools are too complex
Response: Tools should be as simple as possible relative to problems of interest, but no simpler

Table 1. Typical Objections to PPA and Useful Responses
There are often people who think the process is too structured. Certainly, a reasonable amount of brainstorming is a good thing. Further, PPA does not require that the process be followed in lockstep. However, it is important that facilitators get the team’s creativity focused on the content of the plans, not on how to plan.

As noted in Table 1, the use of PPA is not too time-consuming. In fact, planning tends to get done very quickly. However, use of PPA can be very intense. The team is able to process immense amounts of information
quickly and reach implementable conclusions, all in a typical two-day session. After two 10-hour days with the planning team of a test equipment company, one of the senior managers commented, “Wow! This was a lot of work. I’m exhausted.” I asked him how the group would usually pursue such plans, in this case for market roll out of their next generation test equipment. He said, “We would normally discuss this same material in a long sequence of meetings over several months.” I noted that the team had made several important decisions and asked how such decisions were normally made. He smiled and said, “We usually end up doing whatever the last person who isn’t tired wants to do.”

PPA enables good plans quickly. Good plans are those that are well informed and enable confident agreement and action. There certainly may be remaining unknowns, but PPA helps the team to know which ones matter and agree on how to resolve these questions as they proceed. Quickly usually means in roughly two days. After one day, the team is typically just ready to harvest the fruits of much good thinking. This happens on the second day. In those situations where we have added a third day, many participants are too mentally tired to maintain the intense involvement of the previous two days.

Beyond the responses summarized in Table 1, there is an overarching response that facilitators should continually reinforce. The essence of PPA’s value added is not the models created and numbers accessed or estimated. The value is in the group process that it engenders. The overall benefit is the shared mental models that result and the enhanced team performance that these models support.

As indicated in Chapter 5, the purpose of moderate-fidelity training simulators is not to replace high-fidelity engineering simulators. Similarly, PPA is not intended to replace the engineering models needed to design, develop, and manufacture the product, system, or service of interest. Instead, PPA is intended to enable the planning team to get their arms around the full complexity of the marketplace and how their offerings, and those of their competitors, play in their markets.
Elements of Facilitation

Our facilitation short course also addresses a range of facilitation basics including:
• Make sure that participants know each other or introduce themselves to each other

• Review the agenda, desired outcomes, and how the planning process will work

• Deal with difficult participants with low-level interventions and then move to higher levels as necessary

• Use a “bin” or “parking lot” to capture ideas, issues, and concerns that are not relevant to the current discussion

• Make sure that the group has closed on an issue before they move on to the next one

• If the session gets out of hand, switch to round-robin participation

• At transition points, summarize what has happened and what will happen next

It can also be useful for people who expect to be frequent facilitators to take one or more of the many general workshops on this topic that are widely available.
Summary

Without facilitation, it is not unusual for users of PPA to behave in ways that are inconsistent with the guidance summarized earlier. Further, the types of objections summarized in Table 1 can completely derail a planning effort. This, of course, undermines people’s perceptions of the value of PPA. More importantly, however, it severely limits their abilities to create good plans quickly.
CONCLUSIONS

This chapter addressed human-centered design and how to support it. We also discussed the human-centered design of tools to support human-
centered design. It is hopefully clear at this point that there is a lot involved in the process of realizing the benefits of this design philosophy.

Many of the users of PPA have read Design for Success and find the philosophy very attractive. At the same time, however, they do not want to constantly reference this book during design meetings. They want tools that embody first principles rather than having to return to these principles at every turn. Electrical engineers use circuit design and analysis tools that assure they are inherently obeying Ohm’s Law. Mechanical engineers can count on their tools to be consistent with Newton’s Laws and civil engineers can be assured that their tools comply with Mohr’s Law. Numerous people have told me that one of the benefits of the Product Planning Advisor is that using it helps you to comply with the tenets of human-centered design without having to think about these tenets as you address every decision.

One executive told me, “Design for Success provides knowledge; the Product Planning Advisor provides executable knowledge.” The notion of executable knowledge has stuck with me since this comment was made several years ago. Researchers share knowledge with each other, as well as the ways knowledge is produced, vetted, and communicated. For researchers to effectively and efficiently communicate with and provide value to designers, they need to create and provide executable knowledge.

Chapters 9-11 address other types of knowledge, particularly knowledge of interest to managers and executives as they develop strategies, invest in technology portfolios, and address needs for fundamental change. These chapters discuss tools for supporting pursuit of these issues, although they are not considered at the level of detail pursued in this chapter. Nevertheless, executable knowledge, often embodied in tools (Rouse, 1997, 2000), is a central theme.
REFERENCES

Billings, C.E. (1996). Aviation Automation: The Search for a Human-Centered Approach. Mahwah, NJ: Erlbaum.

Booher, H.R. (Ed.). (2003). Handbook of human systems integration. New York: Wiley.
Card, S.K., Moran, T.P., and Newell, A. (1983). The Psychology of Human-Computer Interaction. Mahwah, NJ: Erlbaum.
ESS. (2000). Product Planning Advisor. Atlanta, GA: Enterprise Support Systems. http://essadvisors.com/software.htm
Hammond, J.S., Keeney, R.L., & Raiffa, H. (1998). Smart choices: A practical guide to making better decisions. Boston, MA: Harvard Business School Press.

Hauser, J.R., & Clausing, D. (1988, May-June). The house of quality. Harvard Business Review, 63-73.

Keeney, R.L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.

Norman, D.A., and Draper, S.W. (Eds.). (1986). User Centered System Design: New Perspectives on Human-Computer Interaction. Mahwah, NJ: Erlbaum.

Rouse, W.B. (1991a). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.

Rouse, W.B. (1991b). Human-centered product planning and design. In G. Salvendy (Ed.), Handbook of industrial engineering (2nd Edition, Chapter 49). New York: Wiley.

Rouse, W.B. (1993). Human-centered design: Concept and methodology. Journal of the Society of Instrument and Control Engineers, 32(3), 187-192.

Rouse, W.B. (1994). Human-centered design: Integration with corporate processes. In H. Kanis, C. Overbeeke, & J. Vergeest (Eds.), Measurement & design: Measuring in an interdisciplinary research environment (pp. 111-137). Delft, The Netherlands: Delft University of Technology.

Rouse, W.B. (1997). Technology for strategic thinking. Strategy & Leadership, 25(1), 40-41.

Rouse, W.B. (1998). Computer support of collaborative planning. Journal of the American Society for Information Science, 49(9), 832-839.

Rouse, W.B. (1999). Human-centered design. In J.G. Webster (Ed.), Encyclopedia of Electrical and Electronics Engineering. New York: Wiley.
Rouse, W.B. (2000). Software tools for strategic management. Information • Knowledge • Systems Management, 2(1), 1-5.

Rouse, W.B. (2001). Human-centered product planning and design. In G. Salvendy (Ed.), Handbook of industrial engineering (3rd Edition, Chapter 49). New York: Wiley.

Rouse, W.B. (2003). Human systems integration and new product development. In H.R. Booher (Ed.), Handbook of human systems integration (Chapter 24). New York: Wiley.

Rouse, W.B., & Howard, C.W. (1995). Supporting market-driven change. In D. Burnstein (Ed.), The Digital MBA (pp. 159-184). New York: Osborne McGraw-Hill.
Chapter 9
INVENTION, INNOVATION, AND OPTIONS
INTRODUCTION

I have spent my professional life in organizations populated by many inventive people. This includes more than two decades as a faculty member and leader of research groups at four universities and over two decades as founder and leader of two software companies. Sandwiching the company experiences between two long stints at universities has provided, I think, a rather interesting perspective on what invention means in these two very different environments. More importantly, it has enabled me to see the transition of technological inventions to market innovations from both ends of the funnel.
Invention vs. Innovation

Invention is the creation of a new device or process. Examples include computers, software products, chemical processes, junk foods, and kitchen gadgets. Innovation is the introduction of change via something new. Cable television and video rental stores were innovations, as were overnight mail, communications via fax, and the Internet. These innovations took advantage of one or more inventions, but these inventions were not the innovations. The vast majority of inventions and good ideas in general do not result in change. They do not become part of products or services in people's hands, being used to be productive, make life easier or safer, or bring enjoyment (Rouse, 1992).

The distinction between invention and innovation is very important. Many of the enterprises with whom I have worked consider themselves to be innovative, despite flat sales and sagging profits. Usually, I find that they are reasonably inventive but not as innovative as they perceive themselves to be. Presented with this observation, they often say, "But, our employees are full of good ideas and have created lots of neat things."
This assertion is almost always correct. Their perceptions that their employees are inventive are usually well founded. However, at the same time, this plethora of inventions has seldom - at least, not recently - resulted in change in these companies' markets. Value provided to the marketplace has not increased. Their inventions did not result in innovations.

I hasten to note that the position I am advocating is somewhat risky. Most people like to consider themselves and their colleagues innovative. They want to view their inventions as evidence of their being innovative. Convincing them that these inventions are potential innovations rather than inherently innovations can become a bit confrontational. Fortunately, once you come to expect these negative reactions, you can move smoothly beyond them. Further, it can be critical to get your leadership or management team - or your customer's team - to realize that the inability to transform inventions into innovations is a plausible explanation of the company's value shortfalls in the marketplace.

The inevitable next question, of course, is what to do about it. I address this issue at length in Essential Challenges of Strategic Management (Rouse, 2001). Management of innovation has received an immense amount of attention in recent years (Rouse & Rogers, 1990; Rouse, 1992, 1993; Christensen, 1997; Rycroft & Kash, 1999). This chapter does not address the breadth of the problem and, instead, focuses on the role of technology strategies and the R&D functions chartered to pursue such strategies.
Technology and Innovation

James Burke, the well-known commentator on BBC's production Connections, has compiled a fascinating history of innovation, tracing roughly 300 technologies from the Renaissance to modern times. His book, The Pinball Effect (1996), shows how the eventual payoffs from inventions are almost always greater for unanticipated applications than for the originally envisioned applications. Further, the originators of inventions are seldom the ones who reap the greatest rewards. Innovations often emerge from tortured paths of unmet expectations and overlooked serendipity, until finally we end up with penicillin or Post-It notes.

Despite the tortured path to success, it is clear that technology plays a key role in economic progress. In The Lever of Riches (1990), Joel Mokyr reviews the emergence, evolution, and impact of a wide variety of
fascinating devices and processes - over the past 2,000 years - and comes to the conclusion that, under the right conditions, technology can have enormous impacts on economic progress. These conditions include a nonconservative culture, high value placed on economic activity, noncentralized government, and the necessary human capital.

The compelling works of Burke and Mokyr suggest that inventions are needed to provide the basis for innovation and economic progress. However, investing in invention provides no guarantee of success - that your invention will provide the basis for innovation and, if it does, that you will benefit from its success. Further, your investments may be required many years before innovations are possible, and the nature, magnitude, and timing of success are all highly uncertain.

The essential phenomenon of this chapter concerns decisions regarding how best to provide value to your markets or constituencies in the future, when the future and the nature of value are laced with many significant uncertainties. As the CEO of two software companies, I faced this phenomenon every day. If our software products did not embody highly valued proprietary technologies, then the market would drive our prices to commodity levels and, consequently, our profit margins would be slim. On the other hand, investing in R&D (invention) is quite risky because you are likely to make investment decisions that do not lead to innovations, or any eventual innovations may not benefit your company. Keep in mind that most of these types of investments have been, in retrospect, misguided. How, then, should such decisions be made?

As an aside, I have long been amazed at the numbers of people who invest themselves in inventing despite the terrible odds of success. The vast majority of ideas and businesses never lead to success. Yet, people of all persuasions and intellects dive into idea development and business formation. I have asked knowledgeable colleagues if they think that people do not understand the probabilities of success or simply ignore them. The most frequent answer is that inventors and entrepreneurs are aware of these probabilities; they just don't think they apply to them.

Having worked with many inventors and entrepreneurs, my judgment is that these people have a strong sense of being able to control activities and events so as to avoid the typical failure probabilities. They are not gambling. They are playing a game where knowledge and skills can thwart these statistics. For any particular idea or business, they may or may not be right in their assessment of their knowledge and skills. However, whether or not they are right, our economy and society benefit greatly by all these people getting up to bat.
Overview

This chapter addresses how to make R&D investment decisions. This begins with a discussion of the purpose of R&D. This leads to framing R&D investments in terms of options and, consequently, option pricing models for valuing R&D investments. We then consider the flow of options using the idea of value streams and networks. R&D World is introduced as an approach to modeling value flow and evaluating alternative management practices. Value-centered R&D management is then discussed, as well as issues associated with technology adoption.
PURPOSE OF R&D

Throughout the early chapters of this book, I have mentioned the difficulties of transitioning the products of R&D to practice. This is due, in part, to misalignment of the cultures of research and practice (Rouse, 1985). However, even when this misalignment is addressed and minimized, transitions remain difficult. The nature of the problem is represented in Figure 1.

In a highly cited paper, Stevens and Burley (1997) present a meta-analysis of numerous studies of the transition of R&D to practice. They conclude that you need 3,000 ideas to get one market success. In the drug industry, it has been reported that 10,000 ideas are required for one market success (Nichols, 1994). These are daunting numbers.

The numbers become more imaginable when one thinks about the number of projects that must be funded to yield one innovation. As a member of the Air Force Scientific Advisory Board, I participated in a study of best practices associated with science and technology investments (Ballhaus, 2000). Two companies, one a large chemical company and the other a large telecom equipment manufacturer, reported investing in 300 projects to get one or two innovations in the market. When these numbers were mentioned, someone in the audience asked, perhaps in jest, "Why didn't you just invest in the right ones in the first place?" After a bit of laughter, the Chief Technology Officer of the chemical company said, "R&D is a very uncertain business. You cannot get rid of the uncertainty and we don't try to. Our goal is to be the best at managing uncertainty and thereby gain competitive advantage."
Figure 1. The Innovation Funnel
Another way to think about this phenomenon is to imagine there being no uncertainty. The future and the nature of value in the future would be crisply defined. In every market segment, all competitors would produce exactly the same products, systems, and services. Everything would be a commodity and all profit margins would be very slim or perhaps zero. This does not sound like an appealing world to me.
Multi-Stage Decision Processes
So, in some sense, uncertainty is our ally. The goal, consequently, is to be very good at managing uncertainty. Two traditional approaches for managing uncertainty are multi-stage investment processes for managing R&D projects and discounted cash flow models for assessing the merits of proposed projects. A typical multi-stage process is shown in Figure 2.
Figure 2. Typical Multi-Stage R&D Management Process (stages from ideation and concept through initial project, exploratory development, advanced development, and technology transition and innovation, with decision points between stages)
Management of R&D project portfolios - as well as new product project portfolios - typically employs multi-stage decision processes (Cooper & Kaplan, 1988; Cooper, 1998). Typical stages for new product development include ideation, preliminary investigation, detailed investigation, development, testing and validation, production and launch, and market innovation. For R&D projects, a more appropriate designation for these stages might be the following:

• Ideation

• Concept paper

• Initial project

• Exploratory development program

• Advanced development program

• Technology transition

• Innovation
Figure 2 depicts the critical decisions between these stages. Projects exit this process all along the way, resulting in many more projects in
earlier stages than at later stages - hence, the aforementioned funnel in Figure 1. It is important to emphasize the fact that exiting projects are not complete failures in that, at the very least, they provide knowledge about what does not work and what cannot be cost effective. Unfortunately, most organizations are quite poor at capturing such knowledge.

There is a natural tendency to assume that a central premise underlying such multi-stage processes is that projects can and will smoothly move through the stages. However, some projects in exploratory development, for example, may stay at that stage for an extended period of time without explicit intentions to transition. Further, not all ideas and projects enter the funnel from the left. More mature ideas and projects may enter at exploratory or advanced development. Thus, the metaphor of a funnel should be loosely interpreted.

It is important to realize that the nature of what transitions between stages may vary, ranging from people to information to prototypes to requirements - but rarely in the form of off-the-shelf technology. This can be thought of in terms of data vs. information vs. knowledge, transitioned either in formal documents or informally in people's heads. In fact, it often appears that the speed with which projects move through the funnel is highly related to the flexibility with which people move throughout the enterprise.
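A back-of-envelope calculation, using only the 3,000-to-1 ratio quoted earlier and the six decision points between the seven stages listed above, suggests how steep each stage of the funnel must be if attrition were uniform; the uniformity assumption is purely illustrative.

```python
# If 3,000 ideas yield one market success across six stage transitions,
# uniform attrition implies each transition passes about 1 in 3000**(1/6).
ideas_per_success = 3000
transitions = 6
attrition = ideas_per_success ** (1 / transitions)
print(f"implied attrition per transition: {attrition:.1f} to 1")
# -> roughly 3.8 to 1, i.e., only about one project in four survives each decision
```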
Discounted Cash Flow

The decision process depicted in Figure 2 usually occurs over several years - sometimes many years. This process is also laced with uncertainty regarding both technical and market success. The traditional approach to assessing the impact of uncertainty is the use of measures of discounted cash flow.

The time value of money is the central concept in the discounted cash flow approach. Resources invested now are worth more than the same amounts gained later. This is due to the costs of the investment capital that must be paid, or foregone, while waiting for subsequent returns on the investment. The time value of money is represented by discounting the cash flows produced by the investment to reflect the interest that would, in effect at least, have to be paid on the capital borrowed to finance the investment.

The equations below summarize the basic calculations of the discounted cash flow model. Given projections of costs, c_i, i = 0, 1, ..., N,
and returns, r_i, i = 0, 1, ..., N, the calculations of Net Present Value (NPV) and Internal Rate of Return (IRR) are quite straightforward elements of financial management (Brigham & Gapenski, 1988). The only subtlety is choosing a discount rate, DR, to reflect the current value of future returns decreasing as the time until those returns will be realized increases.
$$ \text{NPV} = \sum_{i=0}^{N} \frac{r_i - c_i}{(1 + \text{DR})^i} $$

$$ \text{IRR} = \text{DR} \ \text{such that} \ \sum_{i=0}^{N} \frac{r_i - c_i}{(1 + \text{DR})^i} = 0 $$
It is quite possible for DR to change with time, possibly reflecting expected increases in interest rates in the future. These equations must be modified appropriately for time-varying discount rates. The above metrics are interpreted as follows:

• NPV reflects the amount one should be willing to pay now for benefits received in the future. These future benefits are discounted by the interest paid now to receive these later benefits.

• IRR, in contrast, is the value of DR if NPV is zero. This metric enables comparing alternative investments by forcing the NPV of each investment to zero. Note that this assumes a fixed interest rate and reinvestment of intermediate returns at the internal rate of return.
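The sketch below implements the NPV and IRR calculations just defined; the cash flows are illustrative, and IRR is found by simple bisection on the discount rate.

```python
# NPV and IRR per the equations above; cash flows are illustrative.

def npv(returns, costs, dr):
    """Net Present Value of net cash flows (r_i - c_i) discounted at rate dr."""
    return sum((r - c) / (1 + dr) ** i
               for i, (r, c) in enumerate(zip(returns, costs)))

def irr(returns, costs, lo=-0.99, hi=10.0):
    """Discount rate at which NPV is zero, found by bisection;
    assumes NPV changes sign exactly once on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(returns, costs, lo) * npv(returns, costs, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Example: invest 100 now, receive 30 per year for five years.
returns = [0, 30, 30, 30, 30, 30]
costs = [100, 0, 0, 0, 0, 0]
print(round(npv(returns, costs, 0.10), 1))  # ~13.7 at a 10% discount rate
print(round(irr(returns, costs), 3))        # ~0.152, i.e., about a 15% IRR
```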
NPV and IRR do not address uncertainty if one limits DR to reflecting only the cost of capital to enable the incurring of costs until returns are realized. In my experience, the private sector tends to use 10-12% as the cost of capital while the public sector uses around 5-6%. However, especially in the private sector, DR is often set much higher. This is due to the fact that the returns are uncertain. I have seen corporate analyses with DR equaling 20% and venture capital assessments using DR of 50%. These discount rates reflect a lack of belief in the projected returns. Those seeking investments have been known to present "hockey stick" projections - little if any return for several years and then explosive growth in the out years. Managers do not believe these projections so they increase the discount rate, which tends to result in advocates presenting
steeper hockey sticks. It becomes a battle of bean counters vs. wild-eyed inventors. The result is predictable.

Bill Beckenbaugh, a former Vice President at Motorola with whom we worked extensively, commented that if NPV was the only criterion, Motorola would not invest in R&D at all. Yet, they do invest heavily. When I was last involved, they were spending roughly 12% of sales on R&D and approximately 1% on science. One of the primary problems with NPV as a criterion is due to people using the discount rate to hedge uncertainty.
Technology Options

Ken Boff and I first addressed this deficiency in the context of his role as Chief Scientist of the Human Effectiveness Directorate at the Air Force Research Laboratory. The question he raised was how best to justify long-term R&D investments. "Long" can mean 10-20 years in his world. With the large investments often required for Air Force R&D, it would be difficult to argue positive NPVs.

In our early efforts in this area (Rouse, Boff & Thomas, 1997; Rouse, Thomas & Boff, 1998; Rouse & Boff, 1998, 1999, 2000), we focused on two issues. First of all, discounted cash flow reflects the economics of investments. There are also non-economic criteria. Thus, cost-benefit analyses (see Chapter 6) should reflect both types of criteria. In other words, NPV is just one attribute. We also realized that the decision process in Figure 2 was the primary means for controlling downside risks and that NPV did not need to be inflated via DR to hedge risks. The problem is that the NPV calculations shown above do not include the possibility of curtailing investments prior to the end of the process. The calculation assumes that you will proceed as originally planned.

We were having these discussions just as a flurry of papers appeared on "real" options. These options are real in the sense that they are not financial options. The assets underlying real options tend to be tangible - manufacturing capacity, product technologies, etc. - rather than limited to cash flow due to the ownership of a financial asset like a bond or a share of stock. The work of Fischer Black and Myron Scholes (1973), as well as Robert Merton (1973), provided a mathematical solution to the problem of valuing options (more on this below) and led to Merton and Scholes being
awarded the Nobel Prize in Economics in 1997. (Fischer Black had died in the interim between publication of this work and the award.)

Ken and I argued for and illustrated how real options would provide a better way to represent the economic impact of R&D. Put simply, we argued that the purpose of R&D is to provide the enterprise with "technology options." It is the responsibility of the business units within the enterprise to decide whether or not to exercise these options. In the Air Force, the relevant business units are the operating commands and the acquisition organization. In the private sector, these units would be product line organizations.

Due to a paucity of data with which to operationalize our options framework, our arguments remained conceptual. Fortunately, the conceptual argument had noticeable impact. Nevertheless, our first full-blown options analysis for the military was a few years later for the Ministry of Defense of Singapore. This example is discussed later.

Serendipity intervened at this point in the form of Motorola. Jim Prendergast, another Vice President at Motorola whom I had met through Bill Beckenbaugh, asked me to help him think through an issue his group at Motorola Labs had been wrestling with. Succinctly, how much should Motorola spend on R&D? As I noted earlier, they were at the time spending roughly 12% of sales on R&D and approximately 1% on science. Jim asked whether these percentages should be higher or lower.

My first question was, "Why do you bother to do R&D at all?" A couple of days of discussion ensued. We focused on the needs of business units for technology advantages. This led to discussion of the idea of technology options, how they are created, how they are exercised, and the process whereby they create value for the enterprise in terms of free cash flow. At the end of the two days, we had a plan for how to apply this thinking to a major semiconductor research project.

Soon after, a meeting was scheduled with Dennis Roberson, a Senior Vice President and Chief Technology Officer of Motorola. I had first met Dennis when he held a similar position at NCR. It turned out that Dennis had been paying attention to the same flurry of real options publications. He was enthusiastic about our proposal and we were off and running.

The convergence of our agenda with that of Motorola had a tremendous impact on our research. Beyond being talented and enthusiastic, they had the resources to support us and, of great importance, they had the data we needed to operationalize our approach to technology options. To date, we have conducted more than 20 case studies with many companies and agencies. Several of these are summarized in Table 1.
Table 1. Example Option-Based Valuations of Technology Investments. [The original table lists, for each technology investment, the enterprise type (private or public), the option purchase investment and its nature (R&D, running the business, or investing in capacity), the duration in years, the option exercise action and investment amount ($M), the NPV profit ($M), and the Net Option Value ($M). The technologies covered are aircraft (manufacturing), aircraft (unmanned), auto radar, batteries (lithium ion), batteries (lithium polymer), fuel cell components, micro-satellites, optical multiplexers, optical switches, security software, semiconductors (amplifiers), semiconductors (graphics), semiconductors (memory), and wireless LAN. The individual cell values are not reliably recoverable from the scan.]
OPTION PRICING MODELS

This section addresses the models that enabled the valuations shown in Table 1. These option-based estimates were the results of a systematic process of framing downstream options, estimating input data needed, calculating option values, and performing sensitivity analyses to assess impacts of modeling and input uncertainties.
Framing Options

The modeling process begins with consideration of the effects sought by the enterprise and the capabilities needed to provide these effects. In the private sector, desired effects are usually profits, perhaps expressed as earnings per share, and needed capabilities are typically competitive market offerings. Options can relate to which technologies are deployed and/or which market segments are targeted. Purchasing options may involve R&D investments, alliances, mergers, acquisitions, etc. Exercising options involves deciding which technologies will be deployed in which markets and investing accordingly.

In the public sector, effects are usually couched in terms of provision of some public good such as defense. More specific effects might be expressed in terms of measures of surveillance and reconnaissance coverage, for instance. Capabilities can then be defined as alternative means for providing the desired effects. Options in this example relate to technologies that can enable the capabilities for providing these effects. Attractive options are those that can provide the effects at lower costs of development, acquisition, and/or operations.
Estimating Input Data

Option-based valuations are economic valuations. Various financial projections are needed as input to option calculations. Projections needed include:

• Investment to "purchase" the option, including timing

• Investment to "exercise" the option, including timing
• Free cash flow - profits and/or cost savings - resulting from exercise

• Volatility of cash flow, typically expressed as a percentage

The analyses needed to create these projections are often substantial. For situations where cash flows are solely cost savings, it is particularly important to define credible baselines against which savings are estimated. Such baselines should be choices that would actually be made were the options of interest not available.
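One simple way to structure these projections for the calculations that follow is a small record type; the field names here are illustrative, not those of any particular tool.

```python
# Illustrative container for the projections listed above.
from dataclasses import dataclass

@dataclass
class OptionInputs:
    purchase_investment: float  # cost to "purchase" the option, e.g., R&D spend ($M)
    purchase_year: int          # when the purchase investment is made
    exercise_investment: float  # contingent investment to "exercise" ($M)
    exercise_year: int          # when the exercise decision occurs
    cash_flow_pv: float         # present value of profits and/or cost savings ($M)
    volatility: float           # cash-flow volatility, e.g., 0.4 for 40%
```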
Calculating Option Values

As indicated earlier, the models employed for option-based valuations were initially developed for valuation of financial instruments. For example, an option might provide the right to buy shares of stock at a predetermined price some time in the future. Valuation concerns what such an option is worth. This depends, obviously, on the likelihood that the stock price will be greater than the predetermined price associated with the option. More specifically, the value of the option equals the discounted expected value of the stock at maturity, conditional on the stock price at maturity exceeding the exercise price, minus the discounted exercise price, all times the probability that, at maturity, the stock price is greater than the exercise price (Smithson, 1998). Net Option Value equals the option value calculated in this manner minus the cost of purchasing the option.

Thus, there are Net Present Values embedded in the determination of Net Option Values. However, in addition, there is explicit representation of the fact that one will not exercise an option at maturity if the current market share price is less than or equal to the exercise price. Sources such as Amram and Kulatilaka (1999), Boer (1998, 1999), Luehrman (1998), Luenberger (1997), and Smithson (1998) provide a wealth of illustrations of how option values are calculated for a range of models.

As indicated earlier, the options addressed here are usually termed real options in the sense that the investments associated with these options are usually intended to create tangible assets rather than purely financial assets. Application of financially derived models to non-financial investments often raises the issue of the extent to which assumptions from financial markets are valid in the domains of non-financial investments. This concern is addressed later.
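The verbal definition above corresponds to the Black-Scholes value of a European call option. A minimal sketch follows, treating the present value of projected free cash flow as the underlying asset and the exercise investment as the strike price; the risk-free rate and all inputs are assumptions supplied by the analyst, and this is not the actual code of any tool discussed here.

```python
# Minimal Black-Scholes sketch of the definition above: discounted expected
# asset value given exercise, minus discounted exercise price, times the
# probability of exercise.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value(asset, strike, years, rate, volatility):
    """European call: asset = PV of projected free cash flow, strike =
    investment required to exercise, years = time until exercise."""
    d1 = ((log(asset / strike) + (rate + 0.5 * volatility**2) * years)
          / (volatility * sqrt(years)))
    d2 = d1 - volatility * sqrt(years)
    return asset * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

def net_option_value(purchase_cost, asset, strike, years, rate, volatility):
    """Net Option Value: option value minus the cost of purchasing the option."""
    return call_value(asset, strike, years, rate, volatility) - purchase_cost
```

The option is attractive when the Net Option Value is positive, that is, when the option is worth more than was paid to obtain it.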
Performing Sensitivity Analyses

The assumptions underlying the option-pricing model and the estimates used as input data for the model are usually subject to much uncertainty. This uncertainty should be reflected in the option valuations calculated. Therefore, what is needed is a probability distribution of valuations rather than solely a point estimate. This probability distribution can be generated using Monte Carlo simulation to systematically vary model and input variables using assumed distributions of parameter/data variations. The software tool employed for the analyses summarized in Table 1 - Technology Investment Advisor (ESS, 2000; Rouse, et al., 2000) - supported these types of sensitivity analyses.

These analyses enable consideration of options in terms of both returns and risks. Interesting "What if?" scenarios can be explored. A question that we have frequently encountered when performing these analyses is, "How bad can it get and have this decision still make sense?" This question reflects a desire to thoroughly understand the decision being entertained, not just get better numbers.
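A Monte Carlo sensitivity analysis of this kind can be sketched by sampling the uncertain inputs from assumed distributions and accumulating a distribution of Net Option Values. The snippet below reuses the call_value function sketched earlier; the distributional choices are illustrative assumptions, not those of the Technology Investment Advisor.

```python
# Monte Carlo sketch: a distribution of Net Option Values under assumed
# input distributions, supporting "How bad can it get?" questions.
import random

def nov_distribution(purchase_cost, asset_mu, asset_sd, strike, years, rate,
                     vol_lo, vol_hi, samples=10_000):
    novs = []
    for _ in range(samples):
        asset = max(1e-6, random.gauss(asset_mu, asset_sd))  # uncertain cash-flow PV
        vol = random.uniform(vol_lo, vol_hi)                 # uncertain volatility
        novs.append(call_value(asset, strike, years, rate, vol) - purchase_cost)
    novs.sort()
    # 5th percentile, median, and 95th percentile of Net Option Value
    return novs[int(0.05 * samples)], novs[samples // 2], novs[int(0.95 * samples)]
```

"How bad can it get and have this decision still make sense?" then amounts to inspecting the low percentiles of the resulting distribution.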
Example Calculations

Consider the example of semiconductor memory in the second row (from the bottom) of Table 1. For $109M of R&D, this company "purchased" an option to deploy this technology in its markets four years later for an expected investment of approximately $1.7B. The expected profit was roughly $3.5B. The Net Option Value of more than $0.5B reflects the fact that they bought this option for much less than it was worth.

In the second row (from the top) of Table 1, a government agency invested $420M in R&D to "purchase" an option on unmanned air vehicle technology that, when deployed 10 years later for $72M, would yield roughly $750M of operating savings when compared to manned aircraft providing the same mission effects. The Net Option Value of $137M represents the value of this option in excess of what they invested.

It is instructive to compare these two examples intuitively. For the semiconductor memory investment, the option value of more than $600M (i.e., the R&D investment plus the NOV) represents roughly one third of the net present difference between the expected profit from exercising the option and the investment required to exercise it. This is due to
considerable uncertainties in the 10+ year time period when most of the profits would accrue. In contrast, for the unmanned air vehicle technology investment, the option value of roughly $560M represents more than two thirds of the net present difference between the expected cost savings from exercising the option and the investment required to exercise it, despite the returns occurring in a similar 10+ year time frame.

This may seem counterintuitive. However, the quotient of expected profit (or cost savings) divided by the investment required to exercise the option is quite different for these two examples. This quotient is roughly 2.0 for the semiconductor memory option and 10.0 for the unmanned air vehicle technology investment. Thus, the likelihood of the option being "in the money" is significantly higher for the latter. This is why the option value is one third of the net present difference for semiconductor memory and two thirds for unmanned air vehicle technology.
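A quick check of the two quotients, using the figures quoted above:

```python
# "Moneyness" quotients from the two examples above.
semiconductor_memory = 3.5e9 / 1.7e9  # expected profit / exercise investment ~ 2.0
unmanned_air_vehicle = 750e6 / 72e6   # expected savings / exercise investment ~ 10.4
```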
Technology Investment Advisor

As noted earlier, the option pricing models that enabled the results in Table 1 are embodied in the Technology Investment Advisor (TIA), whose functionality is depicted in Figure 3. As indicated in this figure, TIA is often used in conjunction with the Product Planning Advisor discussed in Chapter 8.

The option pricing model needs as inputs the R&D investment over time and the subsequent, contingent investment, again over time, to launch the technology in market offerings. The former investment represents purchasing an option while the latter investment represents exercising the option. Also needed are the projected free cash flows resulting from both investments, which are usually minimal for the R&D investment and hopefully substantial for the market investment. The option pricing model can then compute the value of the options, which is often significantly higher than the necessary R&D investment, hence making the option quite attractive.

The other elements of Figure 3 represent models used to project the inputs for the option pricing model:

• S-curve models are used to project the maturity of market offerings and consequently revenues and, if appropriate, units sold (Roussel, 1984; Foster, 1986; Young, 1993; Meyer, 1994).
• Production learning models are used to project decreasing unit costs as the cumulative number of units produced increases (Lee, 1987; Hancock & Bayha, 1992; Heim & Compton, 1992).

• Competitive scenario models are used to guide projections of prices and margins depending on market timing and relative technology and strategy advantages.

• Functions/features models are used to represent and evaluate the specific nature of each market launch, the timing of which is determined by the S-curve models.

• Market/stakeholder models are used to assess the relative market utilities of market offerings across stakeholders and competitors, thereby influencing competitive scenarios.
Figure 3. Technology Investment Advisor & Product Planning Advisor
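The kinds of projections the S-curve and production learning models produce can be sketched as follows; the logistic and power-law forms are standard for these models, but the specific parameter values here are assumptions for illustration only.

```python
# Illustrative S-curve (logistic) revenue maturity and production learning
# projections of the kind that feed the option pricing model.
import math

def s_curve_revenue(year, peak_revenue, midpoint_year, steepness=1.0):
    """Logistic S-curve: revenue ramps toward peak_revenue as the offering matures."""
    return peak_revenue / (1 + math.exp(-steepness * (year - midpoint_year)))

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.8):
    """Production learning: unit cost falls to learning_rate of its prior
    value with each doubling of cumulative volume."""
    b = math.log(learning_rate, 2)  # e.g., an 80% curve gives b of about -0.32
    return first_unit_cost * cumulative_units ** b

# Example: five years of revenue ($M) for a launch maturing around year 3.
for year in range(1, 6):
    print(year, round(s_curve_revenue(year, peak_revenue=200.0, midpoint_year=3), 1))
```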
Thus, the combination of the Technology Investment Advisor and Product Planning Advisor includes models for option pricing, S-curve maturity, production learning, Quality Function Deployment, and multiattribute utility. Users can choose to employ any or all of these models for analysis of their investment portfolios. To the extent that useful data are already available - and, therefore, need not be projected - these data can be entered into the analysis spreadsheets directly.

Figure 4 shows example projections from S-curve models of a product line (upper left) involving an initial product launch and subsequent derivative launches. These projections "feed" the option price valuation (lower right). Extensive capabilities for sensitivity analysis and Monte Carlo analysis support exploration of such models and their interactions, enabling users to answer questions such as, "How wrong can I be and still have this investment make sense?"
Figure 4. Example Projections and Option Price Valuation
Case Stories

The Technology Investment Advisor, often but not always used in conjunction with the Product Planning Advisor, has been employed in conjunction with a large number of major investment decisions. The results of some of these decisions are indicated in Table 1. These engagements were with companies and agencies in the aerospace, communications, computing, defense, electronics, energy, materials, pharmaceuticals, retail, and semiconductor industries. All involved "big bets" in the sense that typically $10-50 million was required to "buy" the option of interest, and $100-500 million would be required to exercise the option. Potential profits ranged from $100 million to $2 billion.

Invest in R&D. In many cases the option was bought by investing in R&D. The R&D was intended to create proprietary technology that would enable new product lines or new generations of existing product lines. The notion of product line generations is depicted in Figure 4 by the series of launches in the upper left. The decision makers involved in these analyses were very concerned about the uncertainties associated with future markets. We were able to vary the dynamics of market penetration and the cost savings due to production learning to yield probability distributions of Net Option Value (NOV) that enabled seeing both returns and risks.

Run the Business. There were also several cases where the option was bought by continuing to operate an existing line of business. Thus, the option was, in effect, a bonus for sustaining the business. These cases were typically situations where the Net Present Value (NPV) of the current product line was low enough for the company to consider selling or closing the business. Yet, they felt that the technologies underlying the product line had potential in adjacent markets. In these models we cast continuing to run the existing business as retaining the option to enter the adjacent markets. We were usually able to show that the NOV of the potential new markets significantly exceeded the NPV of the existing business.

Acquire Capacity. A few cases involved the possibility of acquiring capacity to support market growth that was uncertain in terms of timing, magnitude, and potential profitability. In those cases where the acquisition
of capacity was modeled as the purchase of an option, it was necessary for this purchase to be significantly below market value. If option exercise requires less money than option purchase, then NPV is often close to NOV because most of the investment occurs up front. In one interesting case, the NOV was very large because the capacity was a depressed asset. Thus, it provided an attractive option despite strong chances of weak future markets and substantial walk-away penalties. Despite the very attractive NOV, the company's CFO did not take this investment opportunity to the Board because he felt that they would shy away from acquiring capacity with a significant probability that it would never be used.

Acquire Competitor. One case yielded an unintended and perhaps counterintuitive result. The company was considering investment in R&D to enter a new market. A much smaller company had patents on technologies believed to be key to success in this market. There were considerable risks that our client would not be able to "work around" these patents. We included these uncertainties in the model. While the investment was attractive, our analyses showed that it was very risky. Despite the fact that the senior executive for this unit had eliminated acquisitions from the market entry strategy, a "last minute" analysis found that acquiring the competitor provided the highest Net Option Value. The possibility of acquisitions was then seriously reconsidered.

Summary. We have been involved in well over 20 engagements such as those just discussed, involving quite large investments and potentially very large returns. Due to the very significant downstream uncertainties as well as the possibility of staged investments, these opportunities were much better cast as options and better assessed in terms of NOV rather than NPV. It should be kept in mind, however, that the resulting numbers only informed the decisions - they did not dictate them. In fact, as long as NOV was attractive, we found that decision makers were much more interested in the uncertainties and risks associated with an investment than the specific magnitude of the projected returns.

I hasten to note that not all decision makers have been comfortable with the notion that options hedge downside risks. We have found, for example, that defense companies and agencies find it difficult to agree with the notion that some options will inevitably be purchased and then not exercised. They often ask questions like, "Well, why did you purchase these options if you did not intend to exercise them?" My standard answer
is, “Do you regret that you did not use your life insurance last year?” This often yields a nervous laugh, but seldom changes people’s minds. The central overarching idea is that an enterprise needs a portfolio of options designed to address its uncertain, contingent future needs. It is not possible to eliminate the uncertainties and contingencies, although some types of enterprises seem to be adept at convincing themselves that this is possible. Later in this chapter, we discuss a principled way to adopt this overarching idea.
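To make the contrast between NPV and NOV concrete, the following minimal sketch values a hypothetical two-stage “big bet” by Monte Carlo simulation. All figures are invented for illustration, and the model glosses over discounting and risk-neutrality subtleties that the methods cited below treat more carefully; it is not the Technology Investment Advisor implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-stage "big bet" (all figures hypothetical): pay
# `purchase` now for the right, after `t` years, to pay `exercise`
# and receive an uncertain deployed value.
purchase, exercise, t, rate = 30e6, 300e6, 3.0, 0.10
mean_value, sigma = 400e6, 0.6   # lognormal deployed value
disc = np.exp(-rate * t)

n = 100_000
deployed = mean_value * rng.lognormal(-0.5 * sigma**2, sigma, n)

# NPV view: committed to exercising regardless of how markets turn out.
npv = -purchase + disc * (deployed - exercise)

# NOV view: exercise only if the deployed value exceeds the exercise
# cost, so the downside is capped at the purchase price of the option.
nov = -purchase + disc * np.maximum(deployed - exercise, 0.0)

for name, x in (("NPV", npv), ("NOV", nov)):
    print(f"{name}: mean {x.mean()/1e6:7.1f} M, "
          f"5th percentile {np.percentile(x, 5)/1e6:7.1f} M")
```

Run with these numbers, the two views have similar means, but the 5th percentile of NPV is a loss of well over $100 million while the NOV loss is bounded by the $30 million purchase price - the life insurance logic in quantitative form.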
Limitations and Extensions

As indicated earlier, Black and Scholes (1973) along with Merton (1973, 1977) pioneered analysis of European call and put options, with the assumption that market risk is represented as a random walk process for asset value over time (i.e., volatility). A broader introduction to real options and their application to investment analysis can be found in Dixit and Pindyck’s Investment Under Uncertainty (1994) and Trigeorgis’s Real Options (1996). Smit and Trigeorgis (2004) expand upon this topic, examining options analysis under competition, since competitors may have their own options.

In the studies reviewed above, we assumed that investment in the next stage of the R&D process represented the purchase of an option on the remaining stages, including deployment of the technology in product lines. Black, Scholes, and Merton’s methods have been extended to the notion of compound options (i.e., an option on an option), which can be used to analyze multi-stage decision processes for R&D such as portrayed in Figure 2 (Geske, 1979; Carr, 1988; Cassimon, et al., 2004). A discrete-time version for compound options assumes a binomial probability of asset price increase or decrease each period, as opposed to the continuous random walk process (Cox, Ross, & Rubinstein, 1979). Real options include not only the ability to exercise a future purchase, but also the ability to defer, to expand or contract, to sell, or to switch inputs (e.g., convert to a new technology). Many of these efforts use valuation models derived from the fundamental analytic Black-Scholes and binomial methods. Others adapt numerical techniques such as Monte Carlo simulation (Boyle, 1977; Boyle, Broadie, & Glasserman, 1997).
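The discrete-time binomial formulation just mentioned is straightforward to implement. The sketch below values a European call on the Cox-Ross-Rubinstein lattice; the parameter values are hypothetical, chosen only to show the mechanics, and the lattice value converges toward the Black-Scholes value as the number of steps grows.

```python
import math

def crr_call(s0, k, r, sigma, t, steps=200):
    """European call via the Cox-Ross-Rubinstein binomial lattice.

    Each period the asset value moves up by u or down by d = 1/u,
    as opposed to the continuous random walk of Black-Scholes.
    """
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs at each lattice node, then roll back to time zero.
    values = [max(s0 * u**j * d**(steps - j) - k, 0.0)
              for j in range(steps + 1)]
    for step in range(steps, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

# Hypothetical parameters, for illustration only.
print(crr_call(s0=100, k=110, r=0.05, sigma=0.4, t=2.0))
```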
It is important to recognize, however, that the analogy with financial options is not precise. Lander and Pinches (1998) discuss the limitations of this analogy, which include the following.

• Future asset values and other parameters may not be known (or knowable). This can be addressed with sensitivity analysis, Bayesian analysis, or via specialized volatility estimation methods.

• Real options may not satisfy the assumptions on the underlying asset used for valuing financial options (e.g., complete markets and no arbitrage). This can be addressed by tying the non-traded asset to a tradable twin security, i.e., cash flow (Trigeorgis, 1993). Typically, this results in a shortfall in return expected from the asset as compared to the twin security (Trigeorgis, 1996). Sensitivity analysis can be used to assess the effect of this shortfall.

• A random walk process may not be reasonable for modeling asset value variability. This can be addressed by formulations involving stochastic jump processes (Merton, 1976; Pennings & Lint, 1997; Martzoukos & Trigeorgis, 2002).

• Analysis may focus only on one real option, when others are present and have an interaction effect with the first. Trigeorgis (1993, 1996) presents a set of examples that illustrate how to address this limitation.

• Exercise may not be instantaneous (e.g., time may be needed to build a plant). Majd and Pindyck (1987) examine investment decisions when it takes time to build and there is a maximum rate of construction.

Michael Pennock’s research has explored many of the above variations and their applicability to the types of problems discussed in Chapters 9-11 of this book (Pennock, 2007; Pennock, Rouse & Kollar, 2007). In particular, his option-based model for economic valuation of enterprise transformation is discussed in Chapter 11.

It is important to keep in perspective these endeavors to remedy shortfalls of the classical Black-Scholes formulation relative to particular investment problems, for instance, those with several stages and, hence, options on options. In all of our applications, NPV is much more conservative than NOV. A two-stage NOV is a bit more conservative than a three- or four-stage model. However, the two-stage model often addresses the lion’s share of the over-conservatism of NPV.

Our experience is that decision makers are not expecting exact answers. If they know that your valuation tends to be somewhat conservative, that is usually acceptable - and much better than being the other way around. Of particular importance, they are not going to use the valuations as the sole means of informing their decisions. As one decision maker told me, “I want to count the numbers right, but the numbers are not all that counts.”

Decision makers want insights. They want to better understand what strongly influences value and which uncertainties and risks can most undermine value. Tools such as PPA and TIA enable exploration of these issues. Decision makers pay attention to the numbers that result. However, they pay much more attention to the insights that they and their management teams gain from the modeling process.
VALUE STREAMS AND NETWORKS

The option-based models discussed thus far in this chapter enable valuation of R&D projects. Thus, we can attach economic value to investments in inventions with the intention of these investments leading to market innovations. If we are concerned with more than economic value, we employ the cost-benefit methods from Chapter 6 with NOV and/or NPV as attributes.

In this section, we address how value flows through an organization. This will enable assessing how organizational processes and practices affect value flow and, ultimately, the abilities of an organization to provide value to its stakeholders. Central concepts in this assessment are value streams and value networks.

Value streams are composed of process stages where value is added. Each stage can be represented as shown in Figure 5. Stages receive inputs from upstream users and provide outputs to downstream users. Adding value consumes resources and is influenced by various controls. Lack of resources and controls tends to undermine value. Stages can be connected in series and/or parallel to depict overall value streams as illustrated in Figure 6. Upstream users may provide direct inputs to downstream users - for instance, in terms of technology options - or possibly provide controls or resources as shown in Figure 6. For example, a procurement planning activity occurring in parallel with R&D may enable downstream transition of the outputs of R&D to fielded solutions.
Figure 5. Representation of a Stage of a Value Stream. [The stage receives inputs from the last user and provides outputs to the next user; controls include requests, approvals, and events; resources include human, financial, physical, knowledge, and time.]

Figure 6. Example of a Value Stream. [Stages connected in series and parallel, including contractors.]
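The stage construct of Figure 5 can be captured in a simple data structure. The sketch below is one possible rendering; the field names mirror the figure, and the example stage and resource figures are hypothetical rather than drawn from any of the cases discussed.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One stage of a value stream, per Figure 5."""
    name: str
    inputs: list[str] = field(default_factory=list)    # from upstream (last) users
    outputs: list[str] = field(default_factory=list)   # to downstream (next) users
    controls: list[str] = field(default_factory=list)  # requests, approvals, events
    resources: dict[str, float] = field(default_factory=dict)  # human, financial, ...

def missing_resources(stage: Stage, required: dict[str, float]) -> dict[str, float]:
    """Resources whose availability falls short of what the stage needs;
    per the text, such shortfalls tend to undermine value."""
    return {kind: need - stage.resources.get(kind, 0.0)
            for kind, need in required.items()
            if stage.resources.get(kind, 0.0) < need}

# Hypothetical example: an applied research stage feeding downstream users.
rnd = Stage("Applied research",
            inputs=["concept papers"], outputs=["technology options"],
            controls=["annual approval"],
            resources={"financial": 2.0, "human": 5})
print(missing_resources(rnd, {"financial": 3.0, "human": 5, "knowledge": 1}))
```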
End users, and other intermediate users, influence stages via controls by requesting and/or approving the activities associated with a stage or, at least, the products expected of a stage. Events such as unexpected breakthroughs or demands can also exert control. Such breakthroughs may, for example, provide new benchmarks for innovation, effectively redefining that dimension of value.

Stages consume resources including the people associated with the stage, budgets, and facilities. Stages also consume knowledge, which may or may not be available, and may or may not come from other users in the value stream. Finally, stages consume time, both for performing the activities associated with a stage and while waiting for resources (e.g., budget or knowledge) and controls (e.g., approvals).

Value streams can be evaluated in terms of quality, productivity, and innovation (Rouse & Boff, 2001). An overall R&D value stream - or any stage of a value stream - can yield less than maximum value by, for instance, creating poorly documented research results (poor quality), requiring investments for which the returns are inadequate (poor productivity), and/or producing the “same old stuff,” which downstream users find decreasingly useful (lack of innovation).

Value streams can be dysfunctional for structural reasons also. In particular, controls and resources can be “miswired,” creating delays and inefficient uses of resources. For example, follow-on procurements may be delayed. As another example, human resources may not be available when needed. Our case study of R&D in intelligent tutoring systems (ITS) exhibits aspects of these problems (Rouse & Boff, 2003).

Beyond stage and structural inadequacies, value streams may lack adequate feedback control systems for adapting the flow of value to changing environmental conditions. For example, end users and/or their priorities may change. Technology generations may change or breakthroughs occur. The feedback system should be able to adjust controls and thereby allocations of resources in response to such changes.

Thus, we can assess the processes and structures underlying value streams in terms of:
• Quality, productivity, and innovation of stages, individually and collectively

• Existence and timing of structural relationships necessary to enable stages to deliver quality, productivity, and innovation

• Existence, flexibility, and adaptivity of feedback control systems in response to environmental changes
Inadequacies in any of these areas can lead to low yield, late delivery, and excessive investments, any of which can culminate in decreased economic and/or strategic value. At the very least, redesign of value streams, and perhaps the overall organization, should enable the same yield, faster and cheaper.

Multiple value streams can result in value networks, as illustrated in Figure 7. This example, drawn from our case study of ITS research, found that the “kernel” shown in Figure 6 was repeated many times in the four decades of research in this area supported by several government agencies. In fact, examination of this value network helped us to identify the many years lost due to delays waiting for approval of the next phases of these initiatives and delays waiting for funding of the next phases. These delays cost more than the elapsed time itself, as they often resulted in dispersion of the research teams to other initiatives.

Figure 7. Example of a Value Network
It is important to note that the value stream construct just outlined is not dissimilar from related systems engineering concepts. Indeed, the idea of a value stream is borrowed directly from work in lean thinking (Womack & Jones, 1996; Kessler, 1999). Life cycle systems engineering models (Sage, 1995; Patterson, 1999) and IDEF models (Sage, 1995; Kusiak & Larson, 1999) can also be used to depict value streams. In fact, there is a rich heritage of systems-oriented representations of relationships, feedback, uncertainty, and so on that Figures 5-7 draw upon (Sage & Rouse, 1999).
R&D World

The value flow of primary interest in this chapter is through the multi-stage investment decision making process depicted in Figure 2. Doug Bodner and I studied this value stream via an organizational simulation called R&D World (Bodner & Rouse, 2007). This simulation models the submission of proposals, their acceptance or rejection, and project execution at each stage from concept paper to deployment. Thus, R&D World models the innovation funnel in Figure 1.

There are two central choices in this simulation:

• Decision criterion for valuation of proposals, that is, NPV vs. NOV

• Allocation of budget across stages, for example, line balancing

Line balancing involves weighting budget allocations by the probabilities of success across stages to assure that budget is available to support the likely flow of successful projects. Other allocation schemes considered include providing more funds upstream to create a larger flow of projects, or providing more funds downstream to enable larger investments in the few projects that make it through all the stages.

Environmental parameters include the general outlook for all projects in terms of probability of negative NPV, and the volatility of projected free cash flows upon deployment, with the standard deviation expressed as a percentage of the nominal value. Outcome measures of interest are total deployed value (TDV) and yield, which is the total deployed value divided by the R&D budget expended.
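The line-balancing idea can be illustrated with a short sketch. The function below weights each stage’s budget share by the expected number of projects reaching that stage; it is a simplified reading of the scheme described above, not the R&D World implementation, and the stage costs and success probabilities are hypothetical.

```python
import numpy as np

def line_balanced_allocation(budget, stage_costs, p_success):
    """Weight stage budgets by the expected flow of surviving projects,
    so funds are available where successful projects are likely to land.
    """
    p_success = np.asarray(p_success, dtype=float)
    stage_costs = np.asarray(stage_costs, dtype=float)
    # Probability that a project entering stage 1 reaches stage i.
    reach = np.concatenate(([1.0], np.cumprod(p_success[:-1])))
    weights = reach * stage_costs   # expected spend per entering project
    return budget * weights / weights.sum()

# Six stages (cf. the forest products case below), with hypothetical
# per-project costs ($M) and stage success probabilities.
alloc = line_balanced_allocation(
    budget=100.0,
    stage_costs=[0.1, 0.5, 2.0, 5.0, 10.0, 20.0],
    p_success=[0.5, 0.5, 0.6, 0.7, 0.8, 0.9],
)
print(np.round(alloc, 1))
```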
Parametric studies of R&D performance using R&D World resulted in the following conclusions:

• In general, using NOV as a valuation criterion outperforms using NPV. This effect is pronounced when NPV has a high probability of being negative. There is some evidence that using NOV increasingly outperforms NPV as volatility increases.

• Shifting funds upstream leads to improved total value deployed over the line-balancing approach with high levels of volatility. While upstream allocation outperforms line-balancing in general, this effect is more pronounced when initial probability of negative NPV is low.

• If the upstream allocation is used, an R&D system with high volatility will outperform one with low volatility, with this effect being more pronounced when initial probability of negative NPV is low. The opposite is true for the line-balancing allocation.
R&D World was adapted to represent the new product R&D operations of a large forest products corporation. Forest products include a variety of consumer and industrial artifacts such as paper, packaging, and building materials. This particular company has six stages of R&D. The model was based on summary data and was not an in-depth case study. Nevertheless, it proved useful in illustrating the application of R&D World.

Table 2 shows the results of this study. The first row of Table 2 is the baseline case, which was validated to some extent by comparing its outputs (number of projects deployed and free cash flow) to historical data collected from the company. The results for alternative management policies (rows 2-16) show wide variance in outcomes, based on the combinations of factors selected. Profit ranges from a $254 million loss to a $1,732 million profit. This serves as a powerful demonstration of the importance of selecting not only the right parameters to run the research enterprise, but also the right combination of parameters.

The R&D managers who participated in this demonstration - and suggested the alternative management policies - found this insight quite compelling. Many said that they did not realize how much the valuation and allocation decisions mattered. They thought they just had to choose “reasonable” valuation criteria and allocation methods. However, all of the
choices in Table 2 are reasonable. Unfortunately, several of them lead to inferior results. The demonstration of these inferior results via R&D World was much more convincing than simply making an abstract argument about this phenomenon.
Table 2. Results in R&D World for Forest Products Industry

Note: NOV = Net Option Value, NPV = Net Present Value, SG = Company’s Stage Gate Criteria
These two studies with R&D World resulted in two overarching conclusions. First, real options provide a desirable alternative to traditional discounted cash flow methods for determining the value of projects when the objective is to maximize value creation with a given R&D budget. On the other hand, traditional methods are superior when the objective is to maximize the value created per R&D investment. In fact, use of NPV usually results in lower overall R&D expenditures (RDE in Table 2). Thus, use NOV when you want to maximize the value provided to the enterprise; use NPV when you want to conserve the R&D budget.

Second, budget allocation across stages is another key value lever for R&D. In high volatility environments, it makes sense to invest funds in upstream stages so as to provide options for downstream selection, when the objective is to maximize value from a given budget. On the other hand, when focused on conserving the R&D budget, a line-balancing allocation makes sense.
VALUE-CENTERED R&D

The discussion thus far has emphasized the technical side of value-based technology strategies. We have looked at how to define value, assess value, and model the flow of value (Rouse & Boff, 2001, 2003). The next question is how best to get an organization to operate this way. To address this question, we have to consider the nature of the organizational changes required and how human-centered design can enable these changes.

Table 3 summarizes the results of thinking about how best to “package” our research results for R&D managers, as well as the senior executives from whom they seek investments (Rouse & Boff, 2004). These principles are partitioned into three groups associated with characterizing, assessing, and managing value. We will consider the first and second groups briefly and then focus on the third group.
Characterizing Value

Principle No. 1: Value is created in R&D organizations by providing “technology options” for meeting contingent needs of the enterprise.

R&D is almost always about the future. R&D organizations may occasionally become involved in today’s problems, but involvement in
such situations is usually focused on gaining an appreciation for the context in which the future is likely to emerge. To the extent that R&D is focused on solving today’s problems, it is performing an engineering function rather than R&D.
1. Characterizing Value: Value is created in R&D organizations by providing “technology options” for meeting contingent needs of the enterprise.

2. Characterizing Value: R&D organizations provide a primary means for enterprises to manage uncertainty by generating options for addressing contingent needs.

3. Characterizing Value: A central challenge for R&D organizations is to create a portfolio of viable options; whether or not options are exercised is an enterprise challenge.

4. Assessing Value: Value streams, or value networks, provide a means for representing value flow and assessing the value of options created.

5. Assessing Value: Valuation of R&D investments can be addressed by assessing the value of the options created in the value network.

6. Managing Value: Decision making processes -- governance -- are central in managing the flow of value.

7. Managing Value: Organizational structure affects value flow, with significant differences between hierarchical vs. heterarchical structures.

8. Managing Value: Individual and team affiliations and identities affect value flow; dovetailing processes with disciplines is essential.

9. Managing Value: Champions play important, yet subtle, roles in value flow; supporting champions is necessary but not sufficient for success.

10. Managing Value: Incentives and rewards affect value flow; aligning these systems with value maximization is critical.

Table 3. Ten Principles for Characterizing, Assessing and Managing Value
Even when R&D is kept firmly focused on the future, there can be difficulties assessing success. It is often perceived that success is proportional to the fraction of investments that yield technologies that transition to deployment. The goal tends to become one of transitioning every idea and result to providing value in the enterprise’s marketplace or equivalent.

However, from an options perspective, the R&D “scorecard” should not be dominated by the percentage of technologies transitioned, that is, technology options exercised. Instead, one should also count viable technology options created, some of which get exercised and some of which do not. The key point is that the enterprise needs the right portfolio of options for meeting future needs. R&D should be scored on both its demonstrated ability to provide these options and, obviously, the actual creation of the options.

Principle No. 2: R&D organizations provide a primary means for enterprises to manage uncertainty by generating options for addressing contingent needs.

R&D is a means of managing and often reducing uncertainty. Providing options for addressing contingent needs involves addressing various types of uncertainties beyond the uncertainty underlying the need for contingencies. One may be uncertain about whether or not something is possible, how best to do it, and what one can expect in terms of performance and cost. One also may be uncertain about what functionality will be needed, what levels of performance and cost will be required, and what competitors are likely to do.

To the extent that futures are uncertain with many possibilities, it is better to have options on alternative futures than attempt to invest in all possibilities. These investments should cover the full range of uncertainties just noted. The purpose of these investments is not to eliminate uncertainty, but to have the right portfolio of options.

As discussed earlier, the value of an option increases with the magnitude of the consequences of exercising the option. This value also increases with the uncertainty associated with these consequences. Finally, value increases with time into the future when the option can be exercised. Thus, the value of an option increases with the magnitude and uncertainty of consequences, and time until these consequences can be obtained.
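These monotonicities can be seen directly in the classical Black-Scholes formula. The short sketch below, with purely illustrative parameters, shows the value of a call option rising with both volatility (uncertainty of consequences) and time to exercise.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes value of a European call option."""
    n = NormalDist().cdf
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s0 * n(d1) - k * exp(-r * t) * n(d2)

# Option value grows with the uncertainty of consequences (sigma)
# and with the time until the option can be exercised (t).
for sigma in (0.2, 0.4, 0.6):
    print([round(bs_call(100, 120, 0.05, sigma, t), 1) for t in (1, 3, 5)])
```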
Strategic advantage can be gained and sustained by understanding uncertainty better than one’s competitors and creating a portfolio of options that provides high-value hedges against these uncertainties. Once contingencies emerge, one can exercise those elements of the portfolio that provide the greatest competitive advantage. Organizational abilities to address uncertainty this way - enabled by R&D - make uncertainty a factor to be leveraged rather than eliminated.

Principle No. 3: A central challenge for R&D organizations is to create a portfolio of viable options; whether or not options are exercised is an enterprise challenge.

R&D organizations can assure that options are viable - that they are exercisable. This means that knowledge is vetted, codified, and supported with “how to” models, methods, and tools. It also means that people are up to date technically and available if needed. It is also important to have a good understanding of the resources needed for these people to employ the requisite knowledge.

Whether or not viable options are exercised depends on a range of factors beyond the purview and control of a typical R&D organization. Market conditions may not be right. Resources may not be available. Of particular importance, downstream decision makers may choose to exercise other options, perhaps for technologies that now appear to provide greater competitive advantage.

The portfolio of technology options can be portrayed as shown in Figure 8. Return is expressed in terms of NPV (net present value) or NOV (net option value). The former is used for those investments where the lion’s share of the commitment occurs upstream and subsequent downstream “exercise” decisions involve small amounts compared to the upstream investments. NPV calculations are close enough in those cases. Risk (or confidence) is expressed as the probability that returns are below (risk) or above (confidence) some desired level - zero being the common choice. Assessment of these metrics requires estimation of the probability distribution of returns, not just expected values. In some situations, this distribution can be derived analytically, but more often Monte Carlo analysis or equivalent is used to generate the needed measures.

The line connecting several of the projects (PA, PB, PH, and PZ) in Figure 8 is termed the “efficient frontier.” Each project on the efficient
frontier is such that no other project dominates it in terms of both return and confidence. In contrast, projects interior (below and/or left) to the efficient frontier are all dominated by other projects in terms of both metrics. Ideally, from an economic perspective at least, the R&D projects in which one chooses to invest - purchase options - should lie on the efficient frontier. Choices from the interior are usually justified by other, typically non-economic attributes. A primary purpose of a portfolio is risk diversification. Some investments will likely yield returns below their expected values, but it is very unlikely that all of them will - unless, of course, the underlying risks are correlated. For example, if the success of all the projects depends on a common scientific breakthrough, then despite a large number of project investments, risk has not been diversified. Thus, one usually designs investment portfolios to avoid correlated risks.
Figure 8. Technology Strategy Portfolio. [Typical technology, product, and process improvement projects plotted against Return (expected net present/option value) and Confidence (probability that net value > 0), with the efficient frontier connecting the non-dominated projects.]
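The dominance test that defines the frontier is simple to state in code. In the sketch below, the project names echo Figure 8 but the (confidence, return) pairs are invented for illustration.

```python
# Hypothetical projects: (P(net value > 0), expected NOV in $M).
projects = {
    "P_A": (0.95, 120), "P_B": (0.85, 260),
    "P_C": (0.80, 150), "P_H": (0.60, 540),
    "P_Q": (0.55, 300), "P_Z": (0.30, 900),
}

def efficient_frontier(projects):
    """A project is on the frontier if no other project dominates it
    on both return and confidence."""
    frontier = []
    for name, (conf, ret) in projects.items():
        dominated = any(c >= conf and r >= ret and (c, r) != (conf, ret)
                        for other, (c, r) in projects.items()
                        if other != name)
        if not dominated:
            frontier.append(name)
    return frontier

# Interior projects such as P_C and P_Q drop out as dominated.
print(efficient_frontier(projects))
```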
While avoiding correlated risks makes sense, it is not always feasible - or desirable - for R&D investments. Often multiple investments are made because of potential synergies among these investments in terms of technologies, markets, people, etc. Such synergies can be quite beneficial, but must be balanced against the likely correlated risks.

Further, it is essential to recognize that options are not like certificates that are issued upon purchase. Considerable work is needed once “purchase” decisions are made. Options often emerge piecemeal and with varying grain size. Significant integration of the pieces may be needed before the value upon which the investment decisions were based is actually available and viable.
Assessing Value

Principle No. 4: Value streams, or value networks, provide a means for representing value flow and assessing the value of options created.

As discussed earlier, it is useful to think of value in terms of both how technology options are created and how they are consumed. On the creation side, the focus is on R&D processes that yield viable options. On the consumption end, the concern is with how technologies make possible functionality that is embodied in products, systems, and services that provide capabilities that enable achieving the effects of interest to the marketplace or other constituencies.

R&D organizations should attempt to maximize the yield and minimize the time to create the portfolio of technology options needed by the enterprise. This affects the “purchase price” of the options. The “exercise price” and the resulting cash flows are strongly affected by how the technologies underlying the options flow to market impact. R&D often cannot directly impact the elements of the value stream subsequent to option execution. However, R&D may be able to deliver more “exercisable” options if the nature of downstream processes is understood.
Principle No. 5: Valuation of R&D investments can be addressed by assessing the value of the options created in the value network.

In order to appropriately allocate resources, one needs to attach value to the flows in the value streams or value network. From an options-based perspective, a central issue concerns assessing the economic value of a
network of contingent decisions over time, laced with substantial uncertainties. In the context of R&D, these decisions primarily involve purchasing options that, when exercised, yield other options. Cash flow, in terms of profit and/or cost savings, does not come until technology is deployed in products, systems, and services.

It is important to note that options-based thinking has considerable value beyond the calculations outlined here. Elaboration of contingencies and associated uncertainties greatly improves the framing of investment decision problems. When valuation is approached in this manner, R&D organizations are often quite nimble in identifying and formalizing downstream contingencies much more broadly than typically motivated by traditional valuation methods. These conclusions are based on observations by many of the executives and senior managers involved with the assessments summarized in Table 1.

Managing Value
The first set of five principles summarizes the foundation for adoption of an options-based view of value developed earlier in this chapter. These five principles are not sufficient for success. Beyond impressive models and methods, an enterprise has to execute successfully. A second set of five principles, elaborated in this section, provides the underpinnings for successful execution.

Principle No. 6: Decision making processes -- governance -- are central in managing the flow of value.

Processes for investment decision making and investment management can have a significant impact on value created and subsequent benefits to the enterprise. Good decision processes result in better decisions for two reasons. First, the right attributes and tradeoffs are considered at the right time. Second, all stakeholders understand how decisions are made, how to influence decisions, and how final decisions emerge. This greatly improves buy-in.

Inadequate or inappropriate decision processes can undermine value creation by allowing poor decisions, resulting in ineffective allocations of investment resources. Misunderstood decision processes can result in a lack of individual and organizational commitment to decisions. At an extreme, a variety of dysfunctional myths can emerge (Rouse, 1998). For
example, management may feel that they have consensus, the right processes, and “just” have to execute when, in fact, throughout the organization people do not know how to influence decisions, are carrying out processes that are not value-centered, and face considerable execution hurdles.

It is common for organizations to have difficulty agreeing to decision making processes, employing these processes with a degree of integrity, and communicating decisions in the context of these processes. It is also quite natural for stakeholders to attempt to work around decision processes to assure their interests are supported. To the extent that this succeeds, it undermines disciplined processes and precipitates cynicism among stakeholders. Such difficulties are common in R&D organizations where traditions of authority and value-centered decision making are often in conflict.

Strong leadership can be of great help in adopting and succeeding with a value-centered approach. However, to the extent that such leadership preempts value-based governance processes, the approach will be undermined and its benefits diminished.

If the creation of options is a desired outcome at each stage of the process in Figure 2, then arguments for continued investment should be couched in terms of net option values. If this is expected from proponents of investments, then methods and tools such as discussed earlier should be provided. Finally, investment decisions should be communicated in the context of options-oriented metrics.

Principle No. 7: Organizational structure affects value flow, with significant differences between hierarchical vs. heterarchical structures.

Value is maximized, both in magnitude and time, when it flows through efficient organizational processes. Such processes minimize the number of steps between upstream and downstream next users, and eventually end users. It is desirable that steps with little or no value added be eliminated. However, the nature of organizations can make this difficult.

Organizations can be viewed in several ways. Organizations receive inputs and produce outputs. For R&D organizations, these inputs and outputs often are in the form of information. Structural relationships within and across organizations define the extent and content of information flows. Such flows influence the extent to which R&D organizations can deeply understand future enterprise aspirations, as well as communicate and support the options created by R&D processes.
Organizational structure also affects decision making. Hierarchical structures are useful for leadership and goal setting. Such structures, however, can impede value flow to the extent that higher levels are designed to make decisions in lower level processes. Hierarchical requests for approvals and resources add time and uncertainty while also consuming resources in themselves. In this way, the magnitude and timing of value are decreased and delayed, respectively (Rouse & Boff, 2003).

Heterarchical structures, in contrast, enable efficient horizontal flow of value. Authority for approvals and resource allocations resides at the level of the value streams. Higher levels communicate the vision and elaborate the goals but do not specify how goals are to be achieved. Value flow is monitored, but intervention is rare, as authority for corrective actions also resides at the level of the value streams.

Leaders in such organizations have more influence than power. They also must carefully consider and articulate the vision and goals. Design and communication of incentives and rewards are also key leadership roles. These types of leadership roles are difficult to perform well, and are not natural traits for those steeped in more authoritative models of leadership.

Organizational structure also affects the control of resources -- human, financial, and physical. This is useful in that resources usually need “homes” to be stewarded appropriately. However, it can also lead to “silos,” associated with functions, disciplines, or regions. This limits the flow of resources to where value can best be added.

These tendencies can be countered by matrixing resources across organizational boundaries. The decision processes discussed above can be used to reallocate resources periodically. Such reallocations can be driven by where the greatest option values are likely to be created. This approach typically results in those who seek resources making their arguments in terms of options and their value. This is, of course, exactly what one would like them to do.

These implications of organizational structure can also be expressed in terms of who can direct what initiatives, who gets to review proposals and progress, and who determines rewards. Organizational structure and the allocation of authority should assure that execution of these managerial responsibilities is aligned with creating options for achieving enterprise aspirations.

For R&D organizations, the above considerations are manifested in terms of how information and knowledge flow, funding decisions are made, resources are allocated, and research outcomes are assessed (Rouse & Boff, 1998). There is no best organizational structure for these activities.
Nevertheless, structure should be derived from strategy. To this end, organizational structure needs to be designed to support the way in which options are best created and nurtured in the enterprise environment of interest. Both descriptive and prescriptive approaches to organizational structure are needed. One must be able to understand the way things work now in order to define the gaps between “as is” and “to be,” as we discuss later. It is also important to recognize that the “best” structure may not be simply a variation within the reigning organizational paradigm. Thus, an improved hierarchy may be inferior compared to a more heterarchical structure, for example.

Principle No. 8: Individual and team affiliations and identities affect value flow; dovetailing processes with disciplines is essential.

R&D is often pursued by people with similar disciplinary backgrounds, for example, scientists and engineers from particular disciplines. People in finance or marketing often work together in other aspects of enterprise value streams. The professional affiliation and identity that this encourages can be very important for professional development and knowledge sharing. However, this affiliation and identity can also limit people’s abilities to fully understand next users’ and end users’ perspectives and needs.

For this reason, it is useful to also encourage affiliation with overall value streams and associated processes. This can be fostered by providing education and training focused on enterprise value streams, as well as creating opportunities for value stream participants to meet and get to know each other.

Dovetailing processes with disciplines is important, but it can be very difficult. University education typically does a poor job at supporting cross-disciplinary perspectives. Academic faculty members are often among the most discipline-bound professionals. The best-performing students have often fully assimilated this trait.

Consequently, it is essential to be very intentional in providing value-centered education and training. This should include material on the nature and functioning of enterprise processes. Next users and end users of processes should be explicated, including the options they need and those they create. Supporting information flows should also be outlined and explained.
In addressing this principle, a balance must be managed between understanding and identification with enterprise aspirations, and affiliation and interaction with sources of disciplinary knowledge and best practices. Overemphasis on the former typically results in less than fully competent, but nevertheless enthusiastic, researchers. Overemphasis on the latter tends to foster first-rate researchers, although they may at times have a somewhat cynical view of the organization.

Creating this balance is not a problem to be “solved.” One should maintain awareness of how the underlying tension is evolving. If the situation evolves to either extreme and persists, it can be very difficult to reestablish a more productive balance. In this sense, perhaps unfortunately, the extremes tend to be fairly stable situations while balance takes continual effort.

Principle No. 9: Champions play important, yet subtle, roles in value flow; supporting champions is necessary but not sufficient for success.

Well-designed organizational processes can often sustain incremental value improvements. However, quantum and often disruptive improvements are frequently facilitated by champions who pursue “the cause” regardless of organizational hindrances (Christensen, 1997). Champions are noted for formulating innovation strategies, finding resources, and sustaining commitment through implementation (Rouse & Boff, 1994). On the other hand, champions cannot convert bad ideas to good ones, and seldom succeed without recruiting others to share responsibility and communicate the benefits of ideas more broadly. Thus, while champions may be necessary for disruptive change, they are seldom sufficient. They can be essential catalysts but rarely the sole cause of success (Markham & Griffin, 1998).

Managing value requires that champions be encouraged and supported. An explicit mentoring process can help foster champions. Recognition of champions and their contributions can also help. Once champions emerge for particular initiatives, it is also important to provide ongoing encouragement and support, ranging from visibility with leadership to additional resources.

Nevertheless, it is important that champions not be viewed as the only essential ingredient in success. Organizational processes, especially decision processes, should be designed to empower and support champions. Such processes should also be capable of sustaining initiatives
when, for instance, champions depart. There are elements of succession planning that are relevant here.

Organizations seem to have natural tendencies to let champions work things out, often resulting in their having to work around current organizational processes. This is certainly better than watching initiatives fail. However, a more proactive and eventually more successful approach is to redesign and support processes based on lessons learned by champions. This will also encourage people to become champions.

More broadly, one needs to create an environment that encourages champions, while also attracting and hiring the right kinds of people (Jain & Triandis, 1990). There needs to be the right mix of unencumbered visionaries, respected thought leaders, and competent value managers to play the roles of idea generators, gatekeepers, and coaches, as well as champions. The overall climate should encourage creative imagining and framing of options, including how they can be realized. This all must be designed in the context of typical R&D professionals and organizational cultures, including typical driving forces (Miller, 1986).

Principle No. 10: Incentives and rewards affect value flow; aligning these systems with value maximization is critical.

People respond to incentives. When incentives are aligned with maximizing value flow, people pay much more attention to providing their next users with viable options. In contrast, when incentives and rewards are not aligned - or are not realigned - with value streams, people “march to old drummers” in order to garner rewards.

It is important to balance recognition and rewards for individuals and teams associated with value processes. Individual excellence is, of course, important, but excessive stress on individual disciplinary accomplishments can undermine an organization’s value orientation. In an R&D organization, this could mean giving all authors full credit for a jointly authored research paper rather than trying to assess who did what.

The key is to develop metrics that are both individually and organizationally oriented. Balanced scorecards (Kaplan & Norton, 1996), or equivalent, can be developed for both overall value processes and individual contributions to these processes. Incentives and rewards can be linked to some combination of these two types of metrics.

Whatever is measured, recognized, and rewarded will get attention. Careful design of value streams will not yield desired results without developing and implementing a measurement system that links individual
and organizational performance to these value streams. A key is to relate recognition and rewards to value outcomes, for example, options created, rather than just well-intended activity. More specifically, measures should be carefully chosen to reflect value goals and strategies, as well as the consequent nature of value streams. From this perspective, an R&D value scorecard is quite different from what one might devise for manufacturing or customer service (Rouse & Boff, 2001). It is also important to assure that personnel are educated with regard to such measurement mechanisms and trained in their use.

One particularly difficult aspect of implementing this principle involves getting seasoned middle managers to adopt new approaches. Such people are often quite skilled at succeeding in terms of the old metrics. A useful tactic is to recruit thought leaders from this population to participate in the team(s) defining new measures and scorecards. This enables early understanding of objections and use of these thought leaders to help devise countermeasures.

Enterprise strategies fall victim to two primary failures (Rouse, 2001). The first is a failure to execute - the strategy is all talk and no walk. The second is a lack of alignment between what the enterprise wants to become and how it incentivizes and rewards stakeholders. This tenth and last principle, therefore, is critical to avoiding value strategies being just a concept rather than a real way forward.
Organizing for Value

Given the ten principles just outlined and summarized in Table 3, how should one go about creating a value-centered R&D organization? More specifically, how can one design or redesign such an R&D organization in a particular enterprise? This section outlines an overall design process. Some of the difficulties encountered in pursuing this process are then discussed, including a variety of best practices for addressing these difficulties.

Overall Design Process

The design of a value-centered R&D organization can be pursued using the following general steps:
• Define desired enterprise outcomes
• Design processes for achieving these outcomes

• Design measurement system for processes

• Design structure for managing processes

• Design incentives and rewards to maximize value

These design tasks should be performed using the ten principles for generating alternatives and addressing tradeoffs. This approach seems quite reasonable, especially if one were designing an R&D organization “from scratch.” However, most of the applications of the principles outlined here have involved existing R&D organizations that were aspiring to create greater value for their stakeholders. In these situations, it is usually very difficult to start from scratch.

As shown in Table 4, the value principles can be applied in these cases by first assessing the “as is” organization from the perspective of these principles. The “as is” organization’s strengths and weaknesses can be characterized in terms of deficiencies in satisfying principles. It is important to determine the specific nature of deficiencies rather than just their existence.

The next step is to define the “to be” organization in terms of deficiencies remedied. This should include specific programs of action to yield significantly greater conformance with the value principles. It is also important to define a time frame for accomplishing these changes and measures of success.

Defining “As Is” and “To Be”

At least conceptually, the principles as described earlier are fairly straightforward. More concretely, however, it can be difficult to map these general principles to a particular R&D organization. As surprising as it may seem, it can be somewhat difficult to determine how an R&D organization currently provides value.
[Table 4 pairs each of the ten principles with a blank “As Is” column for recording the assessment of the current organization:]

1. Value is created in R&D organizations by providing “technology options” for meeting contingent needs of the enterprise.

2. R&D organizations provide a primary means for enterprises to manage uncertainty by generating options for addressing contingent needs.

3. A central challenge for R&D organizations is to create a portfolio of viable options; whether or not options are exercised is an enterprise challenge.

4. Value streams, or value networks, provide a means for representing value flow and assessing the value of options created.

5. Valuation of R&D investments can be addressed by assessing the value of the options created in the value network.

6. Decision making processes -- governance -- are central in managing the flow of value.

7. Organizational structure affects value flow, with significant differences between hierarchical vs. heterarchical structures.

8. Individual and team affiliations and identities affect value flow; dovetailing processes with disciplines is essential.

9. Champions play important, yet subtle, roles in value flow; supporting champions is necessary but not sufficient for success.

10. Incentives and rewards affect value flow; aligning these systems with value maximization is critical.

Table 4. Template for Supporting Organizational Design or Redesign
This determination begins with identification and characterization of the organization’s current activities, the inputs to and outputs from these activities, and how the value of these inputs and outputs is assessed. This can require significant effort, as people, especially researchers, do not necessarily think about their work this way. They just do what they do. Nevertheless, once one has created this characterization, one next needs to assess the extent to which the organization operates according to the ten principles for value-centered R&D organizations outlined in this chapter. Such an assessment will inevitably be fairly qualitative. This will likely be sufficient, as the types of changes that are typically entertained tend to be compelling without complete quantification.

Designing Action Plans

The process just outlined usually results in identification of a variety of deficiencies. Stated as observations, examples include:

• We are so focused on helping business or operational units now, we don’t know whether we are doing the right things for their futures.

• Assessment of value is difficult because our “next users” have no data or projections of the impacts of what they ask us to provide.

• There are many activities dictated by the broader enterprise for which we can find no value added relative to our role in the enterprise.

• Our incentive and reward systems are not aligned with how we can best provide value, and we may be unable to unilaterally change these systems.

Such observations, as well as a typical variety of more mundane conclusions, provide a rich basis for developing action plans. The overall process outlined earlier focuses on filling gaps to remediate deficiencies. Some changes are likely to be straightforward. However, some of the types of change illustrated in the above list cannot be initiated unilaterally. These changes require that broader stakeholders embrace an options-based view of the R&D organization.

This suggests that action plans include both overt and covert elements. Some things one can make happen immediately, for instance, require that
all proposals include an options argument. There may be other changes, for example, ceasing non-value-added activities, for which it may be much more difficult to gain approval. Nevertheless, changes that require external approval may be essential to becoming a value-centered R&D organization.

These observations raise the question of who is leading the transformation to becoming a value-centered enterprise. If R&D is the driver, there are more subtleties to negotiate. If top management is the driver, the whole process can be pursued much more directly and aggressively. This suggests, obviously, that R&D executives should focus on selling the CEO, or equivalent, rather than covertly fostering such changes despite the chief executive.

Executing Action Plans

Action plans only deliver value when plans are executed, results are measured, and remedial adaptations are made. This obvious statement conflicts with the organizational reality of business as usual. Articulating project proposals and outcomes in terms of option values may be difficult for audiences accustomed to hearing of budgets and milestones. Initially at least, it may be necessary to tell the story both ways.

It is important to keep the momentum by constantly articulating the value story, explaining and advocating an options-based view. This can be facilitated by illustrating specific outcomes and the option values attached to these outcomes. It is important to keep in mind that options-based thinking is not necessarily natural. For example, the idea that options have value even when not exercised can take people some time to digest.

It is essential to value-centered planning that potential outcomes be cast in terms of possible value provided. The expected outcomes of action plans need to be monetized in terms of Net Option Values. Measures of risk can be derived from probability distributions of Net Option Value. Taken together, these two metrics enable portfolio plots such as Figure 8.

Such plots will, with time, become a central element of the organization’s strategic dialog. Indeed, we have found that this impact on the dialog is more important than the numbers. Returns and risks are, of course, good topics for strategic discussions. Just as important, however, are debates about alternative futures, the options needed to realize these futures, and how
these options can be created. Being a primary provider of these options, R&D inherently plays a central role in such debates. Implementing and managing change is a challenging undertaking in itself. There are numerous difficulties associated with gaining and maintaining momentum. Delusions that execution will be straightforward are common, as are delusions of having already changed (Rouse, 1998). For R&D organizations, recognition of having succumbed to various delusions often comes far too late to be able to remediate these problems and react to an already-changed environment.
Summary

Successful adoption of value-centered R&D requires much more than learning how to formulate option-based models and perform the associated calculations. A value orientation requires a different perspective on the role of R&D and what success means in terms of delivering value downstream to the rest of the enterprise. It also requires understanding and addressing the nature of the organizational change associated with adopting this philosophy.
TECHNOLOGY ADOPTION

This chapter has addressed the issue of deciding about R&D investments that, hopefully, will lead to technologies that will enable innovative new products, systems, and services. Another approach is to adopt technologies developed elsewhere and integrate these technologies into your products and processes. This approach raises issues that are more fully addressed in Chapter 11. Specifically, the overall concern is with the future capabilities needed by the enterprise, the alternative technologies for enabling these capabilities, and the strategy and policy implications of adopting these alternatives (Rouse & Acevedo, 2004).

Even if one can disregard the uncertainties associated with whether particular technologies will actually be available, one still must address the uncertainties surrounding technology adoption and the consequent value realized - or not realized. A good example involves information and communications technologies intended to enable enterprise mobility (Basole, 2006). Access to all enterprise information and knowledge assets anywhere and at any time
sounds appealing. Of course, a business case would likely be needed prior to investing in these technologies. However, a basic issue remains. Is the enterprise ready to succeed in becoming mobile? The answer depends on having the potential to benefit, being prepared to proceed, and being willing to act. Without this readiness, an investment that might seem reasonable in principle could become a failed investment.

Another variation involves collaboration, where multiple enterprises work together to yield a shared intellectual product, for instance, a technology or patent. Such collaborations can be quite difficult to orchestrate. Our study of the biotechnology, pharmaceutical, and medical device industries provided a variety of insights into the difficulties that companies and universities, for example, encounter when trying to align interests while also preserving proprietary competitive advantages (Farley & Rouse, 2000).

There is, of course, always the possibility that technology transitions will fail. A good example is our attempt to transition intelligent interface technology and simulation-based training to manufacturing applications (Rouse & Hunt, 1991). As discussed in Chapter 6, the key stakeholders associated with these applications were not willing to take the risks they perceived to be associated with these technologies. Thus, transitioning invention to innovation is, by no means, straightforward.
CONCLUSIONS

People invest in invention (R&D) in hopes that the resulting inventions will enable market innovations, providing value to customers, users and, of course, investors. This chapter has elaborated an options-based approach to characterizing, assessing, and managing value so as to enhance the chances that such investments will provide attractive returns. Beyond the principles whereby this can be accomplished, I have discussed extensive practice in applying these principles.

We have considered invention and innovation in science and technology as they relate to business success. While it is beyond the scope of this chapter, it is important to note that the phenomena discussed in this chapter can be generalized far beyond this context. We have studied how different disciplines think about the nature of their domains and there are many common threads (Rouse, 1982). For example, our research in the arts has shown that the behavioral and social phenomena underlying invention and innovation are very similar in technology and art (Rouse,
304 People & Organizations
2003). Further, constructs such as teamwork and leadership have strong parallels across these domains (Rouse & Rouse, 2004). From a human-centered design perspective, this chapter has considered a wide range of stakeholders including customers, who may or may not be end users, researchers, designers, managers, and investors. Each of these stakeholders has interests somewhat different from each other. They all contribute to enabling and/or creating value that flows in the value streams and networks from invention to innovation. Value-centered R&D - as well as value-centered processes in general provides a framework for considering and balancing the roles and interests of the various stakeholders in the success of invention and innovation. This framework prompts explicit consideration of how value is understood, created, communication, and delivered.
REFERENCES

Amram, M., & Kulatilaka, N. (1999). Real options: Managing strategic investment in an uncertain world. Boston: Harvard Business School Press.

Ballhaus, W., Jr. (Ed.). (2000). Science and technology and the Air Force vision. Washington, DC: U.S. Air Force Scientific Advisory Board.

Basole, R.C. (2006). Modeling and analysis of complex technology adoption decisions: An investigation in the domain of mobile ICT. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.

Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637-654.

Bodner, D.A., & Rouse, W.B. (2007). Understanding R&D value creation with organizational simulation. Systems Engineering, 10(1), 64-82.

Boer, F.P. (1998, Sept-Oct). Traps, pitfalls, and snares in the valuation of technology. Research Technology Management, 41(5), 45-54.

Boer, F.P. (1999). The valuation of technology: Business and financial issues in R&D. New York: Wiley.

Boyle, P.P. (1977). Options: A Monte Carlo approach. Journal of Financial Economics, 4(3), 323-338.
Boyle, P., Broadie, M., & Glasserman, P. (1997). Monte Carlo methods for security pricing. Journal of Economic Dynamics & Control, 21, 1267-1321.

Brigham, E.F., & Gapenski, L.C. (1988). Financial management: Theory and practice. Chicago, IL: Dryden.

Burke, J. (1996). The pinball effect: How Renaissance water gardens made the carburetor possible and other journeys through knowledge. Boston: Little, Brown.

Carr, P. (1988). The valuation of sequential exchange opportunities. Journal of Finance, 43(5), 1235-1256.

Cassimon, D., Engelen, P.J., Thomassen, L., & Van Wouwe, M. (2004). The valuation of a NDA using a 6-fold compound option. Research Policy, 33(1), 41-51.

Christensen, C.M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.

Cooper, R.G. (1998). Product leadership: Creating and launching superior new products. Reading, MA: Perseus Books.

Cooper, R., & Kaplan, R.S. (1988, Sept-Oct). Measure costs right: Make the right decisions. Harvard Business Review, 96-103.

Cox, J.C., Ross, S.A., & Rubinstein, M. (1979). Option pricing: A simplified approach. Journal of Financial Economics, 7(3), 229-263.

Dixit, A.K., & Pindyck, R.S. (1994). Investment under uncertainty. Princeton, NJ: Princeton University Press.

ESS (2000). Technology Investment Advisor: http://www.ess-advisors.com/software.htm. Atlanta, GA: Enterprise Support Systems.

Farley, M., & Rouse, W.B. (2000). Technology challenges & opportunities in the biotechnology, pharmaceutical & medical device industries. Information Knowledge Systems Management, 2(2), 133-141.

Foster, R. (1986). Innovation: The attacker's advantage. New York: Summit Books.

Geske, R. (1979). The valuation of compound options. Journal of Financial Economics, 7(1), 63-81.
Hancock, W.M., & Bayha, F.H. (1992). The learning curve. In G. Salvendy, Ed., Handbook of Industrial Engineering (Chapter 61). New York: Wiley.

Heim, J.A., & Compton, W.D. (Eds.). (1992). Manufacturing Systems: Foundations of World-Class Practice. Washington, DC: National Academy Press.

Jain, R.K., & Triandis, H.C. (1990). Management of research and development organizations: Managing the unmanageable. New York: Wiley.

Kaplan, R.S., & Norton, D.P. (1996, Jan-Feb). Using the balanced scorecard as a strategic management system. Harvard Business Review, 75-85.

Kessler, W.C. (1999). Implementing lean thinking. Information Knowledge Systems Management, 1(2), 99-103.
Kusiak, A., & Larson, N. (1999). Concurrent engineering. In A.P. Sage & W.B. Rouse, Eds., Handbook of systems engineering and management (Chap. 9). New York: Wiley.

Lander, D.M., & Pinches, G.E. (1998). Challenges to the practical implementation of modeling and valuing real options. Quarterly Review of Economics & Finance, 38, 537-567.

Lee, I. (1987). Design-to-cost. In J.A. White, Ed., Production Handbook (Chapter 3.3). New York: Wiley.

Luehrman, T.A. (1998, July-August). Investment opportunities as real options. Harvard Business Review, 51-67.

Luenberger, D.G. (1997). Investment science. Oxford, UK: Oxford University Press.

Majd, S., & Pindyck, R.S. (1987). Time to build, option value, and investment decisions. Journal of Financial Economics, 18, 7-27.

Markham, S.K., & Griffin, A. (1998). The breakfast of champions: Associations between champions and product development environments, practices, and performance. Journal of Product Innovation Management, 15, 436-454.

Martzoukos, S.H., & Trigeorgis, L. (2002). Real (investment) options with multiple sources of rare events. European Journal of Operational Research, 136(3), 696-706.
Merton, R.C. (1973). Theory of rational option pricing. Bell Journal of Economics and Management Science, 4(1), 141-183.

Merton, R.C. (1976). Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3, 125-144.

Merton, R.C. (1977). On the pricing of contingent claims and the Modigliani-Miller theorem. Journal of Financial Economics, 5, 241-249.

Meyer, P.S. (1994). Bi-logistic growth. Technological Forecasting and Social Change, 47, 89-102.

Miller, D.B. (1986). Managing professionals in research and development. San Francisco: Jossey-Bass.

Mokyr, J. (1990). The lever of riches: Technological creativity and economic progress. New York: Oxford University Press.

Nichols, N.A. (1994, Jan-Feb). Scientific management at Merck: An interview with CFO Judy Lewent. Harvard Business Review, 88-99.

Patterson, F.G., Jr. (1999). Systems engineering life cycles. In A.P. Sage & W.B. Rouse, Eds., Handbook of systems engineering and management (Chap. 1). New York: Wiley.

Pennings, E., & Lint, O. (1997). The option value of advanced R&D. European Journal of Operational Research, 103, 83-94.

Pennock, M.J. (2007). The economics of enterprise transformation. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.

Pennock, M.J., Rouse, W.B., & Kollar, D.L. (2007). Transforming the acquisition enterprise: A framework for analysis and a case study of ship acquisition. Systems Engineering, 10(2).

Rouse, W.B. (1982). On models and modelers: N cultures. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(5), 605-610.

Rouse, W.B. (1985). On better mousetraps and basic research: Getting the applied world to the laboratory door. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15(1), 2-8.

Rouse, W.B. (1992). Strategies for innovation: Creating successful products, systems, and organizations. New York: Wiley.
Rouse, W.B. (1993). Managing innovation. In S. Parker (Ed.), McGraw-Hill Yearbook of Science and Technology: 1994 (pp. 404-405). New York: McGraw-Hill.

Rouse, W.B. (1998). Don't jump to solutions: Thirteen delusions that undermine strategic thinking. San Francisco: Jossey-Bass.

Rouse, W.B. (2001). Essential challenges of strategic management. New York: Wiley.

Rouse, W.B. (2003). Invention and innovation in technology and art. In B.B. Borys & C. Wittenberg (Eds.), From Muscles to Music: A Festschrift to Celebrate the 60th Birthday of Gunnar Johannsen (pp. 140-151). Kassel, Germany: University of Kassel Press.

Rouse, W.B., & Acevedo, R. (2004). Anticipating policy implications of emerging information technologies. Information Knowledge Systems Management, 4(2), 77-93.

Rouse, W.B., & Boff, K.R. (1994). Technology transfer from R&D to applications. Wright-Patterson Air Force Base: Armstrong Research Laboratory.

Rouse, W.B., & Boff, K.R. (1998). R&D/technology management: A framework for putting technology to work. IEEE Transactions on Systems, Man, and Cybernetics -- Part C, 28(4), 501-515.

Rouse, W.B., & Boff, K.R. (1999). Making the case for investments in human effectiveness. Information Knowledge Systems Management, 1(3), 225-247.

Rouse, W.B., & Boff, K.R. (2000). Cost/benefit challenges: How to make the case for investments in the human side of technological innovation (Chapter 13). In R. Reichwald & M. Lang, Eds., User-Friendly Communications Systems. Heidelberg: Huethig Verlag.

Rouse, W.B., & Boff, K.R. (2001). Strategies for value: Quality, productivity, and innovation in R&D/technology organizations. Systems Engineering, 4(2), 87-106.

Rouse, W.B., & Boff, K.R. (2003). Value streams in science & technology: A case study of value creation and Intelligent Tutoring Systems. Systems Engineering, 6(2), 76-91.
Rouse, W.B., & Boff, K.R. (2004). Value-centered R&D organizations: Ten principles for characterizing, assessing & managing value. Systems Engineering, 7(2), 167-185.

Rouse, W.B., Boff, K.R., & Thomas, B.G.S. (1997). Assessing cost/benefits of R&D investments. IEEE Transactions on Systems, Man, and Cybernetics -- Part A, 27(4), 389-401.

Rouse, W.B., Howard, C.W., Carns, W.E., & Prendergast, E.J. (2000). Technology investment advisor: An options-based approach to technology strategy. Information Knowledge Systems Management, 2(1), 63-81.

Rouse, W.B., & Hunt, R.M. (1991). Transitioning advanced interface technology from aerospace to manufacturing applications. International Journal of Industrial Ergonomics, 7(3), 187-195.

Transforming the acquisition enterprise. Proceedings of the Fourth Annual Conference on Acquisition Research. Monterey, CA: Naval Postgraduate School.

Rouse, W.B., Thomas, B.G.S., & Boff, K.R. (1998). Knowledge maps for knowledge mining: Application to R&D/technology management. IEEE Transactions on Systems, Man, and Cybernetics -- Part C, 28(3), 309-317.

Rouse, W.B., & Rogers, D.M. (1990). Technological innovation: What's wrong? What's right? What's next? Industrial Engineering, (4), 43-50.

Rouse, W.B., & Rouse, R.K. (2004). Teamwork in the performing arts. Proceedings of the IEEE, 92(4), 606-615.

Roussel, P. (1984, January-February). Technological maturity proves a valid and important concept. Research Management, 27(1).

Rycroft, R.W., & Kash, D.E. (1999, Fall). Innovation policy for complex technologies. Issues in Science and Technology, 16(1).

Sage, A.P. (1995). Systems management for information technology and software engineering. New York: Wiley.

Sage, A.P., & Rouse, W.B. (Eds.). (1999). Handbook of systems engineering and management. New York: Wiley.

Smit, H.T.J., & Trigeorgis, L. (2004). Strategic investment: Real options and games. Princeton, NJ: Princeton University Press.
Smithson, C.W. (1998). Managing financial risk: A guide to derivative products, financial engineering, and value maximization. New York: McGraw-Hill.

Stevens, G.A., & Burley, J. (1997, May-June). 3,000 raw ideas = 1 commercial success! Research Technology Management, 40(3), 16-27.

Trigeorgis, L. (1993). Real options and interactions with financial flexibility. Financial Management, 22(3), 202-224.

Trigeorgis, L. (1996). Real options: Managerial flexibility and strategy in resource allocation. Cambridge, MA: MIT Press.

Womack, J.P., & Jones, D.T. (1996). Lean thinking: Banish waste and create wealth in your corporation. New York: Simon & Schuster.

Young, P. (1993). Technological growth curves: A competition of forecasting models. Technological Forecasting and Social Change, 44, 375.
Chapter 10
CHALLENGES, SITUATIONS, AND CHANGE
INTRODUCTION

Our efforts planning new products, systems, and services (Chapter 8) and R&D/technology strategies (Chapter 9) led to frequent consideration of overall business strategies. As I have mentioned several times in earlier chapters, our clients frequently involved us in business strategy issues, indicating that our product planning and technology strategy methods and tools would only be fully successful when integrated with their overall approach to business strategy. Not surprisingly, this led us to think about human-centered strategic planning.

We pursued numerous strategy initiatives for a range of clients, all in technology-based companies and agencies. In parallel, I took several intense short courses on strategic planning. These experiences led me to conclude that the standard, off-the-shelf approach to strategic planning, while certainly useful, lacked sufficient attention to invention and innovation as discussed in Chapter 9. This led me to research and write Strategies for Innovation (Rouse, 1992), a sequel to Design for Success.

We also created the Business Planning Advisor (ESS, 2000a) as a companion tool for the Product Planning Advisor. Another tool -- Setup for Business Planning Advisor -- enabled creating BPA templates that were tailored to particular industries and specific companies or agencies. It was clear that our company, Enterprise Support Systems, needed a strategic planning tool in our suite of strategy tools. However, this tool, despite reasonable sales, was never sufficiently powerful to distance itself from the many competing tools.

My sense was -- and is -- that the strategic planning business suffers from a very low cost of entry. Any retired senior executive or business school faculty member can hang out his or her shingle.
Further, anyone can take the numerous short courses available that I mentioned above. Thus, almost anyone can find enough content to structure a generic business planning tool. Not everyone can add embedded expertise as we and a few others did. However, this does not sufficiently differentiate a software tool in a crowded market.

Given the competitive nature of the business planning tool market, we decided to offer a low-end version for $99.95. With considerable investment in marketing and direct mail, we managed to generate a reasonable volume of revenue. However, our costs consumed all this revenue -- and more. Beyond the costs of packaging and mailing, and the fact that direct mail yielded only a modest hit rate, our customer support costs were too high. I was very surprised at the number of senior managers who would call asking for, in effect, free consulting in creating their business plan -- all for $99.95.

We continued to engage in strategic planning initiatives. I am still involved in several of these engagements every year. My experience was that strategic planning experience and expertise were needed to get in the game. However, this did not differentiate us from the competition in the same ways as enabled by the Product Planning Advisor and Technology Investment Advisor. Consequently, despite the fact that many of the experiences related in this chapter were gained in the context of strategic planning engagements, the discussion in this chapter focuses solely on areas where we created more unique and competitive approaches to strategy issues.

In this chapter, we transition from focusing on researchers and designers to addressing the abilities, limitations, and preferences of executives and managers as "end users" in themselves. The phenomena of interest include the essential challenges faced by all enterprises, typical business situations from which these challenges emerge, and the difficulties people encounter when addressing the organizational changes needed to succeed.
ESSENTIAL CHALLENGES

What do enterprises do? There are lots of meetings, much typing and filing, and many things are lifted and stacked. There are innumerable tasks and activities. It is important that this work be productive, safe, and rewarding. However, we cannot approach strategic management of an enterprise at this level.
We need to begin with the work of the enterprise as a system, rather than the jobs, tasks, and activities of the many people that work in the enterprise. To an extent, we need to conduct a work domain analysis of an enterprise (Rasmussen, 1986, 1994; Vicente, 1999). This analysis should begin with consideration of the goals and objectives of the work of enterprises.

Goals and objectives might be considered in terms of revenues, profits, market share, etc. for the private sector, and budgets, constituencies served, and so on for the public sector. However, this level of analysis tends to be idiosyncratic. Instead, we should begin with the recognition that all enterprises face similar strategic challenges, shown below, that must be appropriately understood and addressed for enterprises to succeed (Rouse, 1999a, 2001).

• Growth: Increasing Impact, Perhaps in Saturated/Declining "Markets"
• Value: Enhancing Relationships of Processes to Benefits and Costs
• Focus: Pursuing Opportunities and Avoiding Diversions
• Change: Competing Creatively While Maintaining Continuity
• Future: Investing in Inherently Unpredictable Outcomes
• Knowledge: Transforming Information to Insights to Programs
• Time: Carefully Allocating the Organization's Scarcest Resource

There is a variety of ways of approaching these challenges (Collins & Porras, 1994; Collins, 2001; Rouse, 2001, 2006). Despite the pronouncements of a plethora of management gurus, there is no "silver bullet" that handles all of these challenges. Strategic management involves understanding which challenges are central and adopting a reasonable approach among the many possibilities.

As shown in Figure 1, growth has to be the goal. Growth can be cast in terms of economic, behavioral, and/or social impacts, or possibly in terms of improved quality, service, and responsiveness. The key point is that growth is a must -- the only alternative is decline. Enterprise stasis is not a stable state. Hence, growth must be pursued or decline is assured.
Figure 1. Relationships Among Essential Challenges
It should be emphasized that share price, earnings per share, revenues, market share, and so on reflect just one perspective on growth. Impact can be measured in many ways. Enterprises can improve the quality of their offerings, the benefits of their services for their constituencies, and/or the influence of their activities and communications without necessarily growing financially or in terms of staff and facilities. Indeed, in some situations, growth of impacts may have to be pursued while such human, financial, and physical resources are declining.

There are, admittedly, situations where graceful decline may be the appropriate goal. In such cases, the transformation of interest might be from providing value to providing nothing, perhaps in the sense of doing no harm in the process. Ideally, one might like to assure a "soft landing" for the enterprise's stakeholders. This unusual, though not improbable, case involves many concerns beyond pursuit of negative growth -- for example, liability and tax implications of ceasing operations -- which are beyond the scope of this book.

Value provides the foundation for growth. Understanding the nature of value, its evolution or migration, and the consequent growth opportunities are critical elements of this challenge (Slywotsky, 1996, 1997). One then, of course, must devise a value proposition and associated business processes to secure this growth. As elaborated in Chapter 9, understanding and enhancing the value streams and networks that provide value to constituencies are keys to successful growth (Womack & Jones, 1996).

Focus provides the path to growth. Pursuit of opportunities and avoidance of diversions can be quite difficult (Rouse, 1998), particularly in the presence of significant organizational learning disabilities (Senge, 1990), or when the organization is trapped in single-loop learning (Argyris & Schon, 1978). Equally difficult is change in terms of designing the enterprise to pursue this path (Rouse, 1993). Both focus and change can create enormous organizational and cultural change problems (Collins & Porras, 1994; Collins, 2001). Strong leadership is crucial during such transitions (Charan & Colvin, 1999; Bennis & O'Toole, 2000; Rouse, 2001; George, 2003).

The nature of the future, especially the long-term future, exacerbates the difficulties of focus and change. Not only are the magnitudes and timing of investment returns uncertain -- the very nature of the returns is uncertain (Burke, 1996). Further, most large enterprises have difficulty taking advantage of new ideas, even when these ideas are due to their own original investments (Christensen, 1997). The uncertainties and risks associated with an enterprise's view of the future create needs for hedges against downsides, while still being focused on the upsides. As discussed in Chapter 9, option-based thinking can provide the needed balance between these two perspectives. Options provide ways for addressing an enterprise's future, contingent opportunities and needs.

Knowledge is the means by which enterprises increasingly address these challenges. It can be quite difficult to transform data, information, and knowledge into programs of action and results (Whiting, 1999; Zack, 1999). As discussed in Chapter 7, this involves understanding the roles of information and knowledge in problem solving and decision making in different domains (Rouse, 2002), as well as the ways in which archival knowledge and people with knowledge can meet these needs (Cook & Brown, 1999; Brown & Duguid, 2000).

Time is an overarching challenge for leaders of enterprises. To a great extent, leaders define themselves by how they spend their time (Rouse, 1994, 2001). Transformational leadership involves devoting personal time to those things that will create lasting value (Kouzes & Posner, 1987; George, 2003).
Time is the scarcest of leaders' resources, much more than financial and physical resources. Nevertheless, leaders often report being trapped by urgent but unimportant demands for their time (Covey, 1989; Miller & Morris, 1999). This is a classic challenge for senior management (Oncken & Wass, 1974; Mintzberg, 1975).

Considering the nature of the above challenges, what do executives or teams of executives do? One might imagine that they spend time creating models, analyzing tradeoffs, and attempting to optimize allocations of resources. However, the fact is that executives and managers spend their time reacting to their environments, negotiating compromises, and "satisficing" much more than optimizing (Mintzberg, 1975; Simon, 1957, 1969). In general, they have to consider and balance the perceptions, concerns, and desires of the many stakeholders in their enterprises.

To illustrate this point, I recall a discussion with a Vice President for Strategic Planning at a major corporation. I had called him to follow up on an ongoing initiative where we were supporting him. After a review of this initiative, he commented, "I really like it when you call. It's just about the only time I get to talk about strategic planning." I responded, "But, isn't leading strategic planning your job?" He replied, "Bill, you don't understand. I am so busy underperforming that I don't have time to get good at this."

To understand and support executives in these types of situations, we need to adopt the human-centered philosophy discussed throughout this book. Understanding and supporting the interests of an enterprise's diverse stakeholders -- and finding the "sweet spot" among the many competing and possibly conflicting interests -- is a central aspect of discerning the work of the enterprise and creating mechanisms to enhance this work.
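The option-based thinking noted above can be illustrated with a back-of-the-envelope calculation that treats an R&D investment as a call option on a future commercialization decision. The sketch below uses the standard Black-Scholes formula (Black & Scholes, 1973); the input numbers are hypothetical illustrations, and real R&D valuations typically require richer models, such as the compound-option and Monte Carlo methods cited in Chapter 9's references.

```python
# Value an R&D program as a European call option (Black-Scholes).
# All inputs are hypothetical illustrations, not data from the chapter.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value(s, k, r, sigma, t):
    """s: present value of the commercialization payoff; k: launch cost (strike);
    r: risk-free rate; sigma: payoff volatility; t: years until the launch decision."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# Payoff worth $80M today versus a $100M launch cost five years out: the
# now-or-never NPV of 80 - 100 = -20 looks unattractive, yet the option
# to wait and decide later is worth roughly $28M with these inputs.
print(round(call_value(80, 100, 0.05, 0.4, 5.0), 1))
```

This is why an R&D investment can be attractive even when the corresponding now-or-never project is not: the downside is capped at the investment, while the upside remains open.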
SITUATIONS AND STORIES

A core issue in developing successful business strategies concerns understanding your current and emerging relationships with your markets. In Start Where You Are (Rouse, 1996), I discuss and illustrate an approach to performing this type of situation assessment that involves a set of ten common business situations and five archetypal business stories. The overall premise is that knowing where you are is an essential first step to getting where you want to be.

The situation assessment methodology -- and a tool that is discussed later in this chapter -- is based on an extensive study of three industries over the past 200 years: transportation, computers, and defense. This study involved fascinating examinations of steamboats, railroads, automobiles, aircraft, calculators, typewriters, cash registers, tabulators, computers, and so on. Similar patterns emerged in all of these industries. A new technology matured to the point of practical application. Fairly quickly, hundreds and often thousands of businesses were formed to exploit this opportunity, and eventually consolidation resulted in a handful of remaining competitors.

I recall, in particular, reading about competition in the steamboat industry in the early and mid 1800s. Steamers with a new hull design or a new engine design might enable cutting one hour off the trip from New York to Boston. Passengers would flock to this new type of boat. However, the originators of this innovation might within a year bemoan the fact that yet a newer hull or engine had cut further time off the trip and captured all the passengers. Last year's innovation was relegated to carrying low-margin freight. Commentaries from this period wonder at the pace of technological change.

Similar stories are quite common from other periods in the 200-year span studied. This year's innovations displace last year's innovations and, much less often, last year's dominant competitors are displaced by new players. At the very least, however, the originators of the latest innovations are making much higher profit margins than those providing last year's offerings.

On the surface, this pattern fits quite well a classical cyclical theory of business growth. However, this description is not sufficiently rich to enable the insights needed to make strategic management decisions. We need a level of description that portrays the experiences and possibilities for a particular company in a broader industry context. In other words, we need to describe patterns of growth at a level that is compatible with the strategic alternatives typically available to managers.
Common Business Situations

Table 1 summarizes the ten common business situations identified in this study. The overarching situation assessment question is, "Where am I now and where am I headed?" To support addressing this question, a set of 41 current and 41 leading indicators -- thus, 82 in all -- were gleaned from the business histories studied. Situation assessment involves estimating the levels of each of these indicators for your enterprise. The Situation Assessment Advisor -- described later in this chapter -- then provides a knowledge-based assessment of your current situation and your most likely transitions to future situations. Numerous historical examples are provided of how other companies dealt with the situations one is currently facing and likely to face. More general advice for dealing with the assessed situations is also provided.

It should not be surprising that there are several dominant patterns of transitions among the ten common situations in Table 1. As shown in Figure 2, transitions from steady growth to consolidation are fairly common, while transitions from vision quest to consolidation are quite rare. The predominant transition paths among the ten business situations can provide important insights. Anticipating, recognizing, and responding to these transitions are important elements of strategic thinking and should strongly influence strategic planning.
Vision Quest -- A situation where you are trying to create a relationship with the marketplace, usually for products and services that are ahead of the market's expressed needs and wants.

Evolution -- A situation where development of your relationship with the marketplace takes substantial time as your technologies, processes, and overall market proposition mature.

Crossover -- A situation where either your success depends on importing key technologies and processes from other domains, or your success depends on exporting your technologies or processes to other markets.

Crossing the Chasm -- A situation where you must transition from selling to innovators and early adopters to a more pragmatic relationship with the early majority in the broader marketplace; originated by Geoffrey Moore in his book Crossing the Chasm (1991).

Steady Growth -- A situation where sales and profits repeatedly increase as your relationship with the market becomes strongly established; quite often, market share increases in an overall market that is also increasing.

Consolidation -- A situation where the number of competitors and the fierceness of the competition increase to the point that price cutting and increased costs result in many mergers, acquisitions, and business failures.

Silent War -- A situation where you do not recognize competing companies or technologies, or perhaps recognize and discount them; they thereby become strong competitors while you offer little if any resistance; originated by Ira Magaziner and Mark Patinkin in their book The Silent War (1989).

Paradigm Lost -- A situation where your technologies, processes, market propositions, etc. become obsolete, often suddenly, due to new approaches and competitors, which results in damage to your relationship with the marketplace; phrase originated by John Casti in his book Paradigms Lost (1989).

Commodity Trap -- A situation where most or all competitors are selling the same products or services due to de facto or actual standards, with the result that you must focus on quality, service, and price as the dominant competitive attributes.

Process -- A situation where improvements of processes, rather than new product innovations, are the central competitive issue; substantial investments are usually required if you are to beat your competitors' quality, service, and prices.

Table 1. Ten Common Business Situations
Figure 2. Typical Transitions Among Business Situations
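The flavor of the knowledge-based assessment behind Figures 2 and 3 can be suggested with a small sketch that scores situations from indicator estimates and then reads off likely transitions. The situation names come from Table 1, and the steady growth to consolidation transition is the one the text notes as common; the indicator names, weights, and scoring scheme below are invented for illustration and are not the Situation Assessment Advisor's actual knowledge base.

```python
# Illustrative (not SAA's actual) mapping from indicator estimates to situations.
# Indicator names, weights, and the 0-10 scale are assumptions for illustration.
INDICATORS = {"sales_growth": 9, "number_of_competitors": 5, "price_pressure": 3}

SITUATION_PROFILES = {  # weight: how strongly a high estimate suggests each situation
    "Steady Growth": {"sales_growth": 1.0, "number_of_competitors": 0.2, "price_pressure": 0.1},
    "Consolidation": {"sales_growth": 0.2, "number_of_competitors": 1.0, "price_pressure": 0.9},
    "Commodity Trap": {"sales_growth": 0.1, "number_of_competitors": 0.6, "price_pressure": 1.0},
}

LIKELY_TRANSITIONS = {  # illustrative subset of the Figure 2 transition paths
    "Steady Growth": ["Consolidation"],
    "Consolidation": ["Commodity Trap"],
}

def assess(indicators):
    """Score each situation, pick the best fit, and list likely next situations."""
    scores = {name: sum(w * indicators[i] for i, w in profile.items())
              for name, profile in SITUATION_PROFILES.items()}
    current = max(scores, key=scores.get)
    return current, scores, LIKELY_TRANSITIONS.get(current, [])

current, scores, next_likely = assess(INDICATORS)
print(current, next_likely)  # Steady Growth ['Consolidation']
```

In the actual tool, the 82 indicators, the situation knowledge base, and the transition patterns were derived from the historical study, and the value lies as much in the team discussion the estimates provoke as in the classification itself.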
The pattern of transitions experienced by a specific company constitutes the story of this business. There are five common stories, which I call classic life cycle, false start, death spiral, reinventing the company, and branching and pruning. These stories illustrate patterns that are both desirable and undesirable.
Classic Life Cycle: This is the classic story reflected in the more traditional theories of growth. Typically, this story involves steps with names such as birth, growth, maturation, and decline. Using the ten common situations, this pattern can be described as follows:

Vision Quest -- Evolution -- Crossover -- Crossing the Chasm -- Steady Growth -- Consolidation -- Silent War -- Paradigm Lost -- Commodity Trap -- Process

This story can be interpreted in terms of broad relationships with markets. The vision quest, evolution, crossover, and crossing the chasm situations are elements of being ahead of the market. The crossing the chasm, steady growth, and consolidation situations relate to the market catching up. Finally, the silent war and paradigm lost situations, and potentially the commodity trap and process situations, can reflect the market passing by.

The business life cycle reflected in this story is often considered to be inevitable. However, this is far from true. The comprehensive and compelling analyses provided by Collins in Built to Last (1994) and Good to Great (2001) make this very clear. They show how companies such as Hewlett-Packard and Motorola have continued to renew themselves.
False Start: This story is quite common among new ventures. It also is common for new product lines within existing companies. There are two versions of this story:

Vision Quest -- Evolution -- Crossover -- Evolution

or

Vision Quest -- Evolution -- Crossing the Chasm -- Evolution

This story is one of continually trying to gain momentum, and then losing it. I have encountered this story frequently among university spin-offs. Small, new ventures are formed on the basis of a specific technical idea. In the U.S., they may seed this company by winning a Small Business Innovation Research (SBIR) contract with the federal government. In principle, this seed money, and perhaps a second round of such funding, should result in commercialization of the founding technical idea.

What frequently happens, however, is that these new ventures repeatedly apply for SBIR contracts. I have known a couple of companies that have won more than 100 of these contracts without ever getting a product to market. People who manage small business incubators, which are prevalent at U.S. universities, have told me of many instances of companies who settle in and solely focus on SBIR contracts. I have seen the same behaviors in small companies supported by similar mechanisms in other countries.

I have also experienced this story in much larger companies who attempt to launch new product lines that are rather different than their existing product lines. A good recent example in the U.S. is the many companies who have entertained pursuing the health care market. I have worked with four defense electronics companies who considered applying their technologies and skills to either medical electronics or health information systems. In one company, they actually made a large-scale investment, but the customer went bankrupt. Two other companies have transitioned from evolution to crossover to evolution at least once. The fourth company quickly developed a business plan, evaluated its chances of success relative to other opportunities, and decided to terminate this effort.

This last case is the only success story among the four. You may wonder why I label such a decision a success. This company quickly developed and evaluated an alternative, and then stopped all further investments in this alternative once they determined that it was not their best choice. This is good planning. Successfully considering an alternative does not mean that you have to decide to pursue this alternative. In fact, if you are only willing to consider alternatives that you are sure will subsequently be pursued, you are being much too conservative. A key to being able to avoid undesirable situations is the ability -- and willingness -- to quickly consider and evaluate a variety of ways of proceeding.
Death Spiral: You want to avoid false starts or, hopefully, exit them quickly. You also want to avoid death spirals or, again, find your way out quickly. This story is probably quite familiar:

Steady Growth -- Consolidation -- Commodity Trap -- Consolidation

Note that I have not indicated how steady growth was achieved as this is not central to this story. We saw this story repeatedly in the aforementioned analysis of companies in the transportation, computer, and defense industries since 1800. In 1850, there were 2,500 railroad companies; in 1900 there were 30. We identified comparable ratios for steamboats, automobiles, aircraft, calculators, cash registers, typewriters, and tabulators.

A good, more recent example is the defense industry. In the 1990s:

• Lockheed acquired General Dynamics' aircraft division
• Martin Marietta acquired GE's military electronics division
• Northrop acquired Grumman
• Raytheon acquired E-Systems
• Lockheed and Martin merged
• Raytheon acquired Hughes' military electronics business
• Boeing acquired McDonnell-Douglas

In addition, numerous small defense contractors merged or left the market. Many companies, or business units within companies, were caught in death spirals.

In this story, you either want to be one of the surviving companies or you want to be acquired with attractive terms. Simply closing your doors and selling your hard assets is not a desirable result, although many companies, especially lower-tiered subcontractors, have little choice.

For would-be acquirers -- with discretionary resources available -- prevalent death spiral stories can be wonderful. I have been involved with companies that have acquired other companies for no cash and the assumption of the acquired company's debt, which the acquirer promptly renegotiated with the financial institutions involved. In one case, the acquirer made back their investment in six months.

If you do not have the resources -- or energy -- to acquire other players, you should make yourself as attractive as possible. This means maximizing short-term sales and profits, avoiding investments, and making your financial statements look as rosy as possible. Alternatively, you can try to break out of the death spiral by redeploying your resources in other markets where there are fewer players or less capable players. The following two stories illustrate how this might be accomplished.
Reinventing the Company: One approach to avoiding classic life cycles and death spirals is to reinvent your company or, in effect, create a new company. There are two common stories in this pattern:

Steady Growth -- Consolidation -- Paradigm Lost -- Crossover -- Steady Growth

or

Steady Growth -- Consolidation -- Paradigm Lost -- Vision Quest -- Evolution -- Crossing the Chasm -- Steady Growth

As with the death spiral, I have not indicated how steady growth was initially achieved.
These two versions of this story involve either encountering a paradigm lost situation or precipitating this situation. In the former case, you have little choice but to react. In the latter, you may be choosing to act much earlier than necessary, but while you have the resources to act. In the first pattern, you transition to a crossover situation by either acquiring technology and people or moving your technology and people to new markets. In the second pattern, you create the technology and grow the people needed for competitive advantage.

If you are skilled at accomplishing these transitions, you may be able to keep your company continually growing by harvesting resources from declining stories and investing these resources in potential growth stories. A good example of a company that is highly skilled in this manner is Motorola. I have had the good fortune to work with several Motorola business units and a couple hundred managers and executives. As noted earlier, they started in battery eliminators, moved to radios and televisions, and then moved again to semiconductors, pagers, cellular phones, and other products. This process continues, with the latest quest being the "quadruple play" of convergence in the telecom industry.

A particularly important aspect of how Motorola and other companies reinvent themselves involves entertaining and investing in multiple stories. Some of these stories turn out to be false starts and investments are stopped -- that is, options are not exercised. Some become modest successes and may or may not be continued. A few -- and only a few are needed -- become significant successes. These receive substantial investments until paradigm lost situations emerge or are precipitated.
Branching and Pruning: Branching and pruning provides another way to avoid classic life cycle and death spiral stories. Three common versions of this story are as follows:

Steady Growth -- Consolidation -- Paradigm Lost -- Commodity Trap -- Process -- Steady Growth

or

Steady Growth -- Consolidation -- Paradigm Lost -- Crossover -- Steady Growth

or

Steady Growth -- Consolidation -- Paradigm Lost -- Vision Quest -- Evolution -- Crossing the Chasm -- Steady Growth
As earlier, how steady growth is initially achieved is not central to this story. This story is similar to reinventing the company, with a few important exceptions. First, you do not necessarily transition away from mature markets. Second, you actively encourage the pursuit of a large number of versions of these stories -- in other words, much branching. Third, you communicate very clear criteria for continued investment in a story -- thus, decisive pruning is expected by key stakeholders.

Built to Last (Collins & Porras, 1994), from which I borrowed the name of this story, provides a detailed discussion of an exemplary player of branching and pruning stories, namely 3M -- Minnesota Mining and Manufacturing Company. 3M is well known for its many divisions and hundreds of product lines. The company is constantly branching and pruning. In my work with 3M, I have been impressed with the autonomy of its divisions. They clearly all understand the branching and pruning process, which they usually have replicated locally. 3M's continued success provides substantial evidence of the power of understanding your relationships with your markets, their likely changes, and how to quickly respond and remain innovative in these markets.

The five typical business stories portray predominant patterns among the ten common situations or relationships with markets. These stories illustrate how particular patterns of situations portend specific consequences for companies who play out these stories. Growth, lack of growth, or decline underlies all of these stories:
• Classic Life Cycle → Growth is achieved, but slips away into decline
• False Start → Growth never emerges
• Death Spiral → Growth transitions to steep decline
• Reinventing the Company → Old growth is replaced with new growth
• Branching and Pruning → Many growth paths are tried; a few flourish

Thus, the challenge of growth is pervasive and must be addressed, one way or another, by all enterprises.
Situation Assessment Advisor

The process of gaining an understanding of where you are and where you are headed can be greatly enhanced by useful and usable data-driven methods and tools. It certainly is possible, and quite often the practice, to just "wing it." However, I have found that most managers would rather have well-founded strategies and plans. At the same time, most of these people are not at all patient with abstract, time-consuming processes that provide only general, rather than specific, value added to the process of strategic thinking and planning.

Figure 3 shows an example situation assessment resulting from use of the Situation Assessment Advisor (ESS, 2000b), or the "light" version of this tool available in Strategic Thinking (Rouse, 1999c). These tools embody the situation assessment methodology discussed earlier in this chapter. They also include a hypertext version of Start Where You Are (Rouse, 1996), and extensive online advice.

The screen in Figure 3 shows the results of having estimated the aforementioned 82 current and leading indicators of the relationships of a hypothetical software company to its markets. The knowledge base of the advisor has enabled translation of these estimates to assessments of both current and likely future situations. Note that this company is now in steady growth, but close to consolidation, and consolidation is looming strongly in the future.

This tool is typically used by management teams, working as a group, to address and resolve several issues:

• What do the current and leading indicators mean in the context of our enterprise and what are our estimates of these indicators?
• To what extent do we agree with the tool's assessment of our current and future situations, or how do we differ?
• Considering the results of sensitivity analyses, do we need to reconsider our estimates of any of the indicators?
• Do any of the examples of how other companies have addressed our current and future situations provide relevant guidance to us?
• In light of all of the above, what assessment do we agree on, and what are the strategic implications of this assessment?
and services, with commensurate price increases. Everyone agreed that consumers are quite willing to pay more if they perceive good value.

This example illustrates a very important aspect of using computer-based methods and tools to support strategic thinking. To a great extent, the overriding purpose of such tools is to get the right people to have the right discussions on the right topics at the right time, and support this process with information, computation, and advice. The real creativity and all the decisions come from the users of such tools, not from the tools themselves.

The Situation Assessment Advisor (SAA) was well received by management teams. They liked how it represented situation assessment, how it supported the process, and the expert advice and case histories. Unfortunately, this did not lead to impressive sales volumes for this tool. The basic limitation was that a company would buy one copy and use it once per year. We had targeted an important task performed infrequently by very few people. Our product development process was sufficiently efficient that we made money on SAA, but not much.

I like to use this example in my graduate course on decision theory and decision support systems. By the time I present the underlying theory and show the tool with a friendly and clean interface, the students are usually feeling quite good about what they are learning. Then, I tell them that SAA was a business disappointment because there were too few users and the task was infrequently performed. This was a lesson they did not expect. It usually results in considerable discussion.
UNDERSTANDING CHANGE

I think it was Peter Drucker who said, "Every enterprise is optimally designed for the results it is achieving right now." Thus, if you do not like where you are and/or where you are headed, then something has to change.

As noted earlier, change is one of the essential challenges of strategic management. Change brings new opportunities for growth and rewards. However, it also challenges the status quo and established ways of doing business, making a living, and gaining recognition. Hard-won and long-honed skills can become less and less central to success. New skills easily learned by new market entrants can be difficult to gain for established players.

For managers of established businesses, this situation presents a fundamental challenge. You must keep the company you have running well -- because that's where the cash flow is coming from -- at the same time that you try to create the company you aspire to be, the company with new competencies and new growth opportunities. How much do you invest in maintaining the old business relative to creating the new business?

There are very strong pressures to keep the old business functioning, pressures that often consume all available resources. This is due to the simple fact that almost everyone in the current business is part of the status quo, hoping to continue prosperity and needing resources to pursue this goal. As the old business paradigm continues to falter, the status quo requires increasing attention. While you should be focused on the future business, you are trapped by the current business.

What aspects of change are difficult and why are they difficult? To answer these questions, I considered several new business paradigms -- Total Quality Management, Business Process Reengineering, Lean Thinking, and Open Book Management -- that companies have tried to adopt in recent years (Rouse, 2001). The successes and failures of these attempts provided the following insights:
• It can be quite difficult for management to understand the need for and value of new approaches.
• Once a new approach is embraced, it can be very difficult to manage expectations due to natural desires for panaceas.
• The levels of investment needed to fully succeed with new approaches are often substantially underestimated.
Thus, change can be difficult to initiate, once initiated often results in inflated expectations, and is usually thought to be much easier than will be the actual experience. These phenomena often lead to indecision. I have had numerous experiences of helping executives come to the conclusion that a particular change was undoubtedly needed -- for example, launching new products that would cannibalize sales of current products. Using PPA, for instance, we may have determined that no imaginable new information would change the relative attractiveness of the alternative futures. The choice was vividly clear.

Nevertheless, in several instances, the senior manager involved suggested that we collect more information, think about the decision longer, or just wait some arbitrary period of time, say a month, before deciding. In a few instances, I challenged this indecision, indicating that after the proposed activity (or inactivity) they would be back to the same decision, no wiser. A couple of times, this resulted in the decision right then and there.

My sense is that some managers only like to make decisions when they have a clear consensus. However, when significant change is needed, there are always stakeholders asserting that change will be a mistake. Thus, consensus may not be possible. This leads to indecision, perhaps in hope that the dissenters will "come around" later on.
Organizational Delusions

In Don't Jump to Solutions (Rouse, 1998), I discuss at length common individual and organizational delusions that hinder change. These persistent false beliefs, shown in Table 2, keep managers from recognizing their true situations and dealing with them appropriately. Consequently, as Day (1999) indicates, it often requires major crises for the undermining effects of delusions to be circumvented.

Delusions 1-3 relate to incorrect commonly held assumptions. Well-established, formerly successful enterprises are particularly prone to these delusions. They lead to, for example, putting Cadillac badges on Chevrolets and expecting customers to willingly pay many thousands of dollars for the honor of driving such cars.

Delusions 4-8 concern choices of goals. De facto, and usually unstated, goals often become preservation of the status quo and associated near-term objectives. Stated goals, in contrast, may herald change, new paradigms, etc. However, the ways people spend their time will reflect what really matters.

Delusions 9-11 are associated with implementation of plans. New plans are often greeted with great fanfares. The fact is, however, that it is much easier to devise compelling plans than it is to implement them successfully. The inability to implement plans is a hallmark of many former CEOs (Charan & Colvin, 1999).

Delusions 12-13 relate to the reality of how plans succeed -- seldom according to plan! Intense focus on plans succeeding exactly as originally devised is a sure recipe for missing opportunities to succeed in ways that far surpass original aspirations. These delusions can lead to excellent implementation of no-longer-viable plans.
1. We Have a Great Plan
2. We Are Number One
3. We Own the Market
4. We Have Already Changed
5. We Know the Right Way
6. We Just Need One Big Win
7. We Have Consensus
8. We Have to Make the Numbers
9. We Have the Ducks Lined Up
10. We Have the Necessary Processes
11. We Just Have to Execute
12. We Found It Was Easy
13. We Succeeded as We Planned

Table 2. Delusions That Undermine Abilities to Change
These 13 delusions can completely undermine change efforts in particular and plans in general. At the very least, essential time is wasted as organizations wait for the evidence to become overwhelming before they decide to act. A common role of consultants is to detect and diagnose the presence and impact of organizational delusions.

The electronic version of Don't Jump to Solutions (Rouse, 1998, 1999c) provides a means for assessing the risks of these delusions. Use of this tool involves answering 70 key questions about one's organization and its relationships with markets or, in general, constituencies. Figure 4 shows the screen where this knowledge-based tool presents the assessments based on answers to the key questions.
Note that "Just Having to Execute" - delusion no. 1 1 - is portrayed as having the greatest risk, followed closely by "Having to Make the Numbers" - delusion no. 8. The subset of the 70 questions related to delusion no. 11 includes:
How frequently does the "ball get dropped" in your organization? 0
Do you have the people and financial resources to execute your plans successfully?
0
Do near-term problems and opportunities frequently preempt long-term plans and undermine progress? Who are the champions for your organization's long-term plans and are they focused on executing these plans? Have you done the hard work of answering the who, how, what, when, and why questions of plan execution?
Choices among standardized answers to these questions resulted in the risk assessment shown in Figure 4.
Figure 4. Risks of Organizational Delusions
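The mechanics of such a knowledge-based assessment can be suggested with a small sketch in which each question maps to a delusion and a delusion's risk is the normalized average of its answer scores. Two of the delusion no. 11 questions quoted above are reused; the third question, the 0-4 scoring scale, and the aggregation rule are hypothetical illustrations, not the tool's actual knowledge base.

```python
# Illustrative (not the actual tool's) delusion risk scoring.
# The 0-4 answer scale and question-to-delusion mapping are assumptions.
QUESTION_TO_DELUSION = {
    "How frequently does the 'ball get dropped' in your organization?": 11,
    "Do you have the people and financial resources to execute your plans successfully?": 11,
    "Do you believe one big win will fix everything?": 6,  # hypothetical wording
}

answers = {  # 0 = no risk ... 4 = high risk
    "How frequently does the 'ball get dropped' in your organization?": 3,
    "Do you have the people and financial resources to execute your plans successfully?": 4,
    "Do you believe one big win will fix everything?": 1,
}

def delusion_risks(answers, mapping, max_score=4):
    """Normalized (0-1) average answer score per delusion."""
    totals, counts = {}, {}
    for question, score in answers.items():
        delusion = mapping[question]
        totals[delusion] = totals.get(delusion, 0) + score
        counts[delusion] = counts.get(delusion, 0) + 1
    return {d: totals[d] / (counts[d] * max_score) for d in totals}

print(delusion_risks(answers, QUESTION_TO_DELUSION))
# {11: 0.875, 6: 0.25} -- "We Just Have to Execute" flagged as highest risk
```

Whatever the scoring details, the point made in the text stands: the numbers mainly serve to externalize the issues so the team argues with the tool rather than with each other.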
Management teams using this tool will typically quickly answer all 70 questions, or perhaps only the questions related to the few delusions of interest. They then view the assessments in Figure 4. For high-risk delusions, they may revisit the questions to see if they are still comfortable with their answers. Attention will then shift to what they can do about their apparent delusions. One source of help is a set of principles associated with each delusion. For delusion no. 11 -- "Just Having to Execute" -- these principles are:

• Sustained and committed execution of plans can be very difficult to achieve
• A common problem is overcommitment, which results in a lack of people and financial resources for execution
• Individual and organizational priorities dominated by near-term issues often preempt execution
• Lack of individual and organizational commitments to plans often hinders execution
• Without a champion -- someone committed to a plan's success -- plans almost always wither on the vine
• Dealing, in detail, with the who, how, what, when, and why questions of plan execution is the key to avoiding the delusion of "just" having to execute
• Hard work is the only viable means of successfully linking plans to execution

To explore these principles in more depth, users can link to a hypertext version of Don't Jump to Solutions that illustrates delusions, explains principles, and provides examples of their implementation. They might also, for example, link to the electronic version of the book via the types of examples of interest (e.g., computer industry) rather than by delusions.

This assessment tool enables management teams to have discussions they might otherwise find difficult. The tool, with its outputs projected on large-screen displays, externalizes potentially threatening issues for the group to address as a team. Rather than argue with each other, they in effect argue with the tool. In the process, many perceptions emerge to justify answers to questions and provide interpretations of principles. Beyond providing opportunities to calibrate perceptions, this process often exposes underlying needs and beliefs that can be understood and addressed with the strategies discussed earlier.
Needs-Beliefs-Perceptions

The above delusions provide a bit deeper explanation of the difficulties underlying change. To move deeper yet, we need to consider why these delusions emerge and persist. The Needs-Beliefs-Perceptions Model, shown in Figure 5, provides useful insights into the sources of the delusions. This model is elaborated in Catalysts for Change (Rouse, 1993), the sequel to Design for Success and Strategies for Innovation. This book focuses on the ways in which people's perceptions can thwart change, as well as the basis for these perceptions in needs and beliefs.

People's perceptions are usually viewed as being primarily influenced by their knowledge and information presented to them. With this view, knowledge gained via education and experience combines with the information available to yield perceptions. This perspective can lead you to explain others' misperceptions by their lack of understanding and requisite information. However, as indicated in Figure 5, misperceptions are likely to have other causes.
Figure 5. Needs-Beliefs-Perceptions Model
People's needs and beliefs affect what knowledge is gained, what facts are sought, and how both are interpreted. Thus, for example, people may need to feel that their competencies are important and valued. This leads them to believe that these competencies are critical elements of the company's competitive position. This prompts them to seek confirming information and perceive that the resulting "data" support their beliefs. Consequently, they advocate decisions to continue investments in competencies linked to fading markets. Relationships among needs, beliefs, and perceptions can explain many of the difficulties associated with change. People believe that reigning paradigms are still valid and vital because they need to feel that they are still integral to the enterprise's future. They also may need to feel that their future is secure. Put simply, regardless of how valid the case for change is, the change itself may be significantly deficient relative to meeting people's needs. Consequently, they will believe that change is ill-advised and advocate sticking to the knitting. Beyond providing a deeper explanation for resistance to change, the Needs-Beliefs-Perceptions Model also provides clear approaches to catalyzing change. We have used a tool associated with this model to diagnose conflicts in areas such as environmental management, quality management, and defense conversion. In these examples, we found that the conflicts surrounding strategy decisions were, at a deeper level, really conflicts in beliefs about the impacts of industrial practices on the environment, the importance of quality to consumers' behavior, and the need to change markets given intense consolidation. We had to address the conflicting beliefs before we could reach consensus on strategy.
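The confirmation-seeking dynamic described above can be made concrete with a small illustration. The following Python sketch is purely speculative: the evidence values and discounting scheme are invented, and it simply shows how belief-driven weighting of evidence yields "data" that appear to support the belief.

```python
# Speculative sketch of the Needs-Beliefs-Perceptions dynamic: beliefs bias
# which evidence is weighted heavily. Confirming items get full weight;
# disconfirming items are discounted by `bias` (an invented parameter).

def perceived_support(evidence: list[float], belief: float, bias: float = 0.7) -> float:
    """Weight signed evidence for (+) or against (-) a belief."""
    weights = [1.0 if e * belief > 0 else (1.0 - bias) for e in evidence]
    total = sum(w * e for w, e in zip(weights, evidence))
    return total / sum(weights)

# Mixed evidence on whether a legacy competency still wins business:
evidence = [+0.8, -0.6, -0.7, +0.5]
print(perceived_support(evidence, belief=+1.0))  # biased reading: clearly positive
print(sum(evidence) / len(evidence))             # unbiased reading: near zero
```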
Summary

Beyond the above underlying difficulties of change - which are inherent to much of life - this challenge is particularly problematic for companies because it concerns competing creatively while also maintaining continuity. Specifically, responding to changing market opportunities and threats often requires creative and possibly disruptive changes. However, successfully instituting these changes requires some degree of continuity to take advantage of the company you have in order to become the company you aspire to be. Striking a balance between these forces involves changing the organization to enhance competitive advantages while also building on existing competencies and inclinations. At one extreme you
can create chaos; at the other, you can be stymied by inertia (Rouse, 1999b). A good example of where this challenge is likely to emerge in full force involves repositioning a company to address both current markets (e.g., defense electronics) and new markets (e.g., industrial electronics). This requires maintaining existing infrastructure and processes to satisfy current customers - and keep the cash flowing - while also creating new infrastructure and processes to enable satisfying new customers. This becomes particularly problematic when the two sets of infrastructure and processes are incompatible. Worse yet are situations where the same personnel have to work for both types of customers. A related but somewhat less extreme example is reengineering processes to create a lean, agile organization. It is difficult to simultaneously foster new skills, retain critical existing skills, and discard obsolete existing skills. This is especially true when people with the soon-to-be obsolete skills are still needed to serve customers who, in the near term at least, will continue to provide significant cash flow. In situations like this, people almost always become very confused and uncertain as the process proceeds. Another difficult aspect of the challenge of change is the need to engender and reward new competencies while old competencies continue to command center stage. For example, an engineering-oriented company that needs to become substantially more market-oriented may find that the engineering function has difficulty sharing the limelight. This is particularly problematic when most of senior management is steeped in the old competencies, with many long-standing loyalties to the functional groups where these competencies are housed. Yet another difficulty concerns incentives and rewards (Flannery, Hofrichter & Platten, 1996; Weiss & Hartle, 1997). For substantial changes to have a chance of succeeding, it is often necessary to significantly modify the incentive and reward system. For example, it may be important to shift to a performance-based bonus system with base compensation that increases very slowly, if at all. If this is significantly different from the existing incentive and reward system, people may have difficulty adjusting - for instance, to the fact that a significant portion of their compensation is at risk. Thus, change is a challenge in part because of inherent underlying behavioral and social difficulties associated with changes, and in part because of the ways that changes tend to be instituted in companies. The
need to maintain both the old and the new simultaneously creates numerous problems beyond people's inherent reluctance to change. Despite available wisdom, there are innumerable ways to pursue change poorly. People become confused and disheartened. Performance suffers as discussions of ambiguity and disillusionment take center stage. To mitigate these substantial risks, it is important to understand the needs and beliefs that underlie people's behaviors. Change should be planned and implemented in ways that support needs and do not conflict with beliefs - unless, of course, you are prepared to invest in changing beliefs and thereby needs. Change often involves redesigning the essence of an organization. This should be done carefully to assure that needed current competencies are retained, competencies no longer relevant are discarded, and required new competencies are gained. These changes need to be accomplished while keeping cash flowing to fuel the transition. We pursue this need further when we consider large-scale change in Chapter 11.
ORGANIZATIONAL SIMULATION

Thus far in this chapter, we have seen that people have difficulty understanding their current and emerging business situations, tend to be plagued by organizational delusions, and have needs and beliefs that skew their seeking and interpretation of information and knowledge. From the perspective of human-centered design, these seem like significant human limitations. However, think back to Chapter 5 where we discussed failure detection and diagnosis. At that point, we differentiated tasks that were familiar vs. unfamiliar, and frequent vs. infrequent. The task of assessing whether an enterprise is on course relative to its goals, strategies, and plans is a familiar but infrequent task. In contrast, the task of assessing how and why an enterprise is off course is an unfamiliar and infrequent task. I agree with Gary Klein (1998, 2002) and Malcolm Gladwell (2005) that one can often rely on pattern-driven intuition for making most decisions. However, one should not rely solely on this method for unfamiliar and infrequent tasks. Thus, we should expect human limitations to emerge and affect decisions when humans address tasks they have never seen before. A few people, but not many, have multiple experiences of successfully leading an enterprise through major change.
We addressed similar task situations in Chapters 5 and 6. The support needs that emerged in these situations were often met, at least in part, by simulation-based training and aiding. In the context of organizational challenges, situations, and change, organizational simulation (OrgSim) can be an important tool (Rouse & Boff, 2005). Ideally, we would like to support senior managers and executives by providing them the ability to experience the future before they have to commit to investing in creating this future. By devising various "What if..?" experiments and experiencing their consequences, they could convert the assessment task from unfamiliar and infrequent to familiar and frequent. Consequently, their intuitions would be better informed and more apt to be valid. The resulting simulation - perhaps called Your Enterprise World - would provide a design studio for crafting the new enterprise, the changed enterprise of the future. This simulation would also be invaluable for introducing the future to the organization as a whole. People tend to be less resistant to change when they understand it, experience their roles in the changed enterprise, and can provide comments and suggestions for improving the future.
Architecture of Organizational Simulation

The architecture of the organizational simulation shown in Figure 6 embodies key elements needed to enable fully functional capabilities. The representation in this figure is not intended to suggest a software architecture, per se. Instead, it portrays a conceptual model of relationships among several layers of functionality of an OrgSim. It is useful to begin at the top of Figure 6. Users interact with the immersive environment by viewing large-screen displays (e.g., whole walls and rooms), listening and speaking to other real and synthetic characters, and gesturing (e.g., walking and pointing). In light of the likely complexity of this environment, users are assisted via training, aiding, and guidance in general - perhaps an "Obi Wan" for their world. Otherwise, they might get lost in the simulation. Users' interactions with the simulation happen in a context provided by the organizational story that is playing out. This story might focus on the enterprise's new integrated supply chain management system, or it might emphasize the new mobile and distributed enterprise with employees collaborating remotely and asynchronously. To an extent, the context is
represented in the “script,” except that characters - real and synthetic - are free to say and do what they want in this context. Characters, for instance, executives, employees, news reporters, and bystanders, populate the organizational story. Some of these characters, particularly the users of the simulation, are usually real humans. However, most of the characters are typically synthetic, perhaps created via intelligent agent technology. The organizational story plays out and the characters interact in a dynamic world, for example, a market or city, where actions affect the states of the world, and these states, of course, influence characters’ actions. The dynamics of the world, as well as external forces and unseen actors, also affect the states of the world. Typically, the world continues to evolve whether or not users do anything. Often, users’ actions are intended to bring about desirable world states.
Figure 6. Architecture of Organizational Simulation. (Layers, top to bottom: Facilitation, e.g., Training, Advising, Guiding; User Interface, e.g., Large Screens, Voice, Gestures; Organizational Story, e.g., Consolidating Market; Characters, e.g., Managers, Employees, Vendors; World Model, e.g., Enterprise, Market, Economy; Distributed Simulation Software; Hardware, e.g., Computers, Networks.)
All of the layers discussed thus far are software intensive. Certainly, there is hardware such as visual and audio displays, control devices and, of course, computers, routers, and wires, which enable creating the “virtual reality” of an organizational simulation (Durlach & Mavor, 1995). However, the story, characters, and world are primarily manifested in software, which may be distributed among many computers that are potentially networked across many locations.
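To make the layered conceptual model of Figure 6 more tangible, here is a minimal Python sketch of how the world model, characters, and simulation loop might relate. All class, attribute, and method names are illustrative assumptions; as noted above, the figure is a conceptual model rather than a software architecture, so this is only one possible rendering.

```python
# Hypothetical sketch of the OrgSim layers: a dynamic world that evolves
# whether or not users act, populated by real and synthetic characters
# whose actions affect world states. Names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Dynamic world, e.g., an enterprise in its market."""
    state: dict = field(default_factory=lambda: {"demand": 100.0, "share": 0.2})

    def step(self, actions: list[str]) -> None:
        self.state["demand"] *= 1.01          # external forces evolve the world
        if "launch_product" in actions:       # characters' actions nudge it
            self.state["share"] += 0.01

@dataclass
class Character:
    """Real or synthetic actor embedded in the organizational story."""
    name: str
    synthetic: bool = True

    def act(self, world: WorldModel) -> str:
        # A synthetic character might follow simple rules; real users choose.
        return "launch_product" if world.state["share"] < 0.3 else "wait"

@dataclass
class OrgSim:
    world: WorldModel
    characters: list[Character]

    def tick(self) -> None:
        actions = [c.act(self.world) for c in self.characters]
        self.world.step(actions)

sim = OrgSim(WorldModel(), [Character("manager"), Character("vendor")])
for _ in range(3):
    sim.tick()
print(sim.world.state)
```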
Implications for Strategic Management

Organizational simulation has the potential to support strategic thinking about challenges, situations, and change in new ways. Traditional analytical strategic analyses can be supplemented with experiential perceptions gained by acting in the future and assessing solution characteristics before investing in their actual development and deployment. Analyses of alternative courses of action can be very important, especially when "big bets" are involved. However, such analyses are inevitably limiting. Once the spreadsheets show that "the numbers work," decision makers' overarching concern is the uncertainties and risks associated with various elements of the analysis. Of course, analytical models can also incorporate these uncertainties. More often, however, I have found that hesitancy is due to needs to experience a course of action before committing to it. Executives and senior managers - in both private and public sectors - would like to know how a course of action will "feel" before finally committing. Put simply, they would like to test drive the future before buying it. A central obstacle to meeting this need is the difficulty of efficiently designing, evaluating, and deploying possible futures. Truly experiencing these futures is likely to require immersive, interactive portrayals. This situation can be further complicated by needs to do all this rapidly, e.g., before deploying a disaster relief force within 24 hours of recognizing the need for this force. Organizational simulation can provide a means for experiencing the future, acting in this future, and having the future react to you. How could OrgSim enable this? As might be expected, the answer depends almost totally on the strategy questions being asked. Questions relevant to this chapter include:
• How can a new strategy best be deployed?
• What are the organizational implications of a new strategy?
• How will novel situations be addressed with this strategy?
• What are the design implications of this strategy?
• What are the work implications of a new organization?
• How well will the organization perform in the environment?
Note that these questions are more concerned with evaluating and deploying a new strategy than with developing the strategy. However, using OrgSim for evaluation and deployment of strategies is likely to result in significant changes to these strategies. Elsewhere, I discuss in detail the ways in which OrgSim capabilities can support and enhance strategic thinking (Rouse, 2005). The central idea concerns the value of being able to experience solutions before committing to them, whether these solutions are new market strategies, reengineered supply chains, or enterprise information systems. As noted above, these types of solutions tend to involve substantial investments, and planners and decision makers would like both analytical and experiential insights before committing the enormous resources typically required. My analysis began by outlining the types of questions asked by planners and decision makers. I then used three scenarios from private and public sector enterprises to assess how OrgSim could support answering the types of questions listed above. This assessment provided insights into the functionality needed for OrgSim to support answering these strategic questions. This functional analysis then mapped to consideration of technology needs (Rouse, 2005). I found that there is much we need to know to realize the functionality outlined. However, as the chapters in Organizational Simulation (Rouse & Boff, 2005) ably demonstrate, much is known already, across many key disciplines ranging from behavioral and social science to computing and engineering. In addition, developments in online synthetic worlds are rather compelling (Castronova, 2005). Thus, while we do not know everything, we know a lot.
The greatest difficulty involves translating scientific knowledge into computational representations for specific domains. Such representations are needed to support creation of the experiential environments that embody OrgSim capabilities. We particularly need tools to create these environments, especially tools that enable doing this relatively quickly. A focus on tools would also help to drive both the knowledge sought and the form that knowledge should take. This means that we need to move beyond research whose sole purpose is tabulation of what affects what, for instance, incentives affect motivation, and seek specific relationships that are computationally useful. An example of a more specific representation of knowledge is:

• Knowledge of the bases of new product plans
• Increases organizational commitment to execution of these plans and
• Leads to broader sharing of plans that
• Tends to result in external knowledge of plans and
• A changed competitive environment
• More so in domains with relatively short product life cycles and
• Less so in domains with longer product life cycles
• Where research plans tend to be more open
• Due to sources of funding for long-term research.
The former simpler statement (i.e., X affects Y) can only be incorporated into a simulation heuristically. The latter nine-part statement provides clear guidance for a rule-based representation of the phenomena for one specific type of incentive, that is, knowledge sharing. Much more attention should be paid to creating forms of knowledge that can be more easily operationalized and provide the bases for OrgSim capabilities.
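To illustrate what "computationally useful" means here, the following Python fragment renders the nine-part statement as a simple rule. The attribute names and effect magnitudes are invented for illustration; only the qualitative structure comes from the statement above.

```python
# Hypothetical sketch of the nine-part statement as a rule in an OrgSim
# world model. Effect sizes are illustrative assumptions, not empirically
# derived relationships.
from dataclasses import dataclass

@dataclass
class Enterprise:
    knows_plan_bases: bool       # knowledge of the bases of new product plans
    product_life_cycle: float    # years
    commitment: float = 0.5      # organizational commitment to execution
    plan_sharing: float = 0.2    # breadth of plan sharing
    external_knowledge: float = 0.0

def apply_knowledge_sharing_rule(e: Enterprise) -> None:
    if not e.knows_plan_bases:
        return
    e.commitment += 0.2           # increases commitment to plan execution
    e.plan_sharing += 0.3         # leads to broader sharing of plans
    # Broader sharing leaks plans externally, more so when product life
    # cycles are short; long-cycle, research-driven domains tend to have
    # more open plans anyway, given long-term research funding sources.
    leak = 0.4 if e.product_life_cycle < 2 else 0.1
    e.external_knowledge += e.plan_sharing * leak

e = Enterprise(knows_plan_bases=True, product_life_cycle=1.5)
apply_knowledge_sharing_rule(e)
print(e.commitment, e.plan_sharing, e.external_knowledge)
```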
I began this discussion with a characterization of decision makers' needs to gain a "feel" for the impacts of major investments before committing to them. The executives and senior managers articulating these desires also indicated another desire. "Once we experience the future," one executive commented, "and decide we like it, it would be great to be able to flip a switch and immediately have it." This desire suggests an intriguing possibility:

• Why can't a compelling OrgSim environment become the actual world?
• Why can't synthetic characters become actual workers?
• Why can't the simulated organization become the actual organization?
Of course, this would mean that OrgSim would not only be a means for supporting strategic thinking, but also that strategic thinking about OrgSim would be a means to new types of organizations and enterprises. This possibility is indeed compelling.
Applications of Organizational Simulation

Our first bona fide organizational simulation was the Team Model Trainer, developed in the mid-1990s for combat crews of Aegis cruisers and discussed in Chapter 2. Much more recently, as described in Chapter 9, R&D World was developed to simulate project flow and decision making in R&D organizations. Currently, we are involved in developing two organizational simulations. Health Advisor is an online game where players manage clients through the healthcare delivery system, attempting to maximize clients' health state divided by the costs of delivering this health care. We are also early in the process of developing an online game to train people in city management. Our recent book, Organizational Simulation (Rouse & Boff, 2005), describes the development and impact of several organizational simulations and games. Thus, there are many examples to consider. However, it is also reasonable to conclude that the state of the art cannot deliver the types of organizational simulations discussed earlier as being of high potential value for supporting strategic management. Much remains to be done, particularly in the area of methods and tools, before OrgSims will be standard elements of human-centered design for senior managers and executives.
CONCLUSIONS

This chapter has considered senior managers and executives as they address essential challenges of strategic management in general, and situation assessment and organizational change in particular. People and organizations have significant limitations when they face unfamiliar and infrequent tasks. When the challenge of growth can only be successfully pursued by changing the nature of how value is provided, significant organizational change is often required. In these circumstances, organizational delusions and dysfunctional needs and beliefs can become enormous barriers to success. To overcome these human limitations, as well as foster human acceptance, we discussed a situation assessment methodology and tool; a risk assessment model for addressing delusions; and a diagnostic model of needs, beliefs, and perceptions. We also discussed how organizational simulation can provide a means of aiding - and training - decision makers faced with these circumstances, as well as the broader sets of stakeholders likely to be impacted by the needed changes. My quest to apply the tenets of human-centered design to supporting managers and executives has required first understanding their abilities, limitations, and preferences, and then developing and deploying approaches to aiding and training them. Some of these "solutions" have been successful (e.g., PPA and TIA), while others have been less successful (e.g., BPA and SAA) or remain to be fully tested (e.g., OrgSim). Another means of supporting managers and executives is through books, journal articles, and magazine columns. From 1995 to 1999 I wrote a column for Competitive Edge!, a business magazine published in Atlanta. The column was one page and appeared as the last published page in the magazine. I worked hard to create a pithy but humorous one-page discourse on contemporary business issues, for example, markets and technologies. I received a significant amount of feedback from readers - at business meetings, in the grocery store, and via email. In all, eleven columns appeared, culminating in the column, "Do you eat snakes?" The introduction of this column was as follows (Rouse, 1999d):
I nestled into my business class seat on Delta, headed for London to change planes for the adventure of a sales trip to an Asian country that was new to me. I stowed my laptop and was browsing a magazine when a dapper, middle-aged gentleman
sat next to me. He introduced himself, speaking with the Queen's version of English. We talked quite a bit as the plane rolled back and taxied to the runway. We chatted about where we were headed, where we came from, our work, and even our children. I was both enjoying the conversation and, I felt, getting a bit of an education, as this gentleman was a native of my country of destination. Once we were at altitude, our flight attendant came around with menus. Thank heavens, no Sky Deli tonight! My new-found friend asked my opinion of a couple of menu items. We ordered drinks and continued chatting. Then, looking at me intently, he asked, "Do you like to eat snakes?"
It seemed like an unusual question, but I took it in stride, not wanting to appear less than cosmopolitan as we jetted to London.
"No. I can't say I do. Snakes aren't really on the top of my list."

"Oh, I'm surprised," he said. "I like to eat them a lot."
“Really,” I responded, not really sure of what to say next. “When I was a student, I couldn’t afford them very often. But, whenever I could, I would buy them at the farmer’s market. Now, we eat them at least once a week.”
"Do you buy them live?" I asked, a bit at a loss for something meaningful to say. Completely puzzled, he stared at me and responded, "Do you mean a whole cow?" Obviously, I had mistaken steaks for snakes. This confusion had led to a profusion of misunderstandings which, as the column subsequently elaborated, I took as indicative of how global misunderstandings could arise. The publisher of the magazine published the column with a longish footnote explaining the story. The exquisite moment of recognition intended by this column was preempted by a didactic explanation of the misunderstanding. I knew at that point that this was my last magazine
column. My style of communication was not sufficiently practical and immediate. Some of my books, but not all, have fared better. It can be difficult to achieve the needed level of discourse and the right packaging to appeal to extremely busy managers and executives. It helps if you can address a topic that is of urgent importance to leaders of enterprises. The next chapter represents our latest efforts to support managers and executives in a crucial arena.
REFERENCES

Argyris, C., & Schon, D.A. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Bennis, W., & O'Toole, J. (2000, May-June). Don't hire the wrong CEO. Harvard Business Review, 171-176.
Brown, J.S., & Duguid, P. (2000, May-June). Balancing act: How to capture knowledge without killing it. Harvard Business Review, 73-80.
Burke, J. (1996). The pinball effect: How Renaissance water gardens made the carburetor possible and other journeys through knowledge. Boston: Little, Brown.
Casti, J. (1989). Paradigms lost: Images of man in the mirror of science. New York: Morrow.
Castronova, E. (2005). Synthetic worlds: The business and culture of online games. Chicago: University of Chicago Press.
Charan, R., & Colvin, G. (1999, June 21). Why CEOs fail. Fortune, 68-78.
Christensen, C.M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.
Collins, J.C. (2001). Good to great: Why some companies make the leap and others don't. New York: Harper Business.
Collins, J.C., & Porras, J.I. (1994). Built to last: Successful habits of visionary companies. New York: Harper Business.
Cook, S.D.N., & Brown, J.S. (1999). Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science, 10(4), 381-400.
Covey, S.R. (1989). The seven habits of highly effective people. New York: Simon & Schuster.
Day, G.S. (1999, Fall). Creating a market-driven organization. Sloan Management Review, 41(1), 11-22.
Durlach, N.I., & Mavor, A.S. (Eds.). (1995). Virtual reality: Scientific and technological challenges. Washington, DC: National Academy Press.
ESS. (2000a). Strategic Planning Advisor: http://www.ess-advisors.com/software.htm. Atlanta, GA: Enterprise Support Systems.
ESS. (2000b). Situation Assessment Advisor: http://www.ess-advisors.com/software.htm. Atlanta, GA: Enterprise Support Systems.
Flannery, T.P., Hofrichter, D.A., & Platten, P.E. (1996). People, performance, and pay: Dynamic compensation for changing organizations. New York: Free Press.
George, B. (2003). Authentic leadership: Rediscovering the secrets to creating lasting value. San Francisco: Jossey-Bass.
Gladwell, M. (2005). Blink: The power of thinking without thinking. New York: Little, Brown.
Klein, G.A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G.A. (2002). Intuition at work: Why developing your gut instincts will make you better at what you do. New York: Currency.
Kouzes, J.M., & Posner, B.Z. (1987). The leadership challenge: How to get extraordinary things done in organizations. San Francisco: Jossey-Bass.
Magaziner, I., & Patinkin, M. (1989). The silent war. New York: Random House.
Miller, W.L., & Morris, L. (1999). Fourth generation R&D: Managing knowledge, technology, and innovation. New York: Wiley.
Mintzberg, H. (1975, July-August). The manager's job: Folklore and fact. Harvard Business Review, 49-61.
Moore, G.A. (1991). Crossing the chasm: Marketing and selling technology products to mainstream customers. New York: Harper Business.
Oncken, W. Jr., & Wass, D.L. (1974, Nov-Dec). Management time: Who's got the monkey? Harvard Business Review.
Rasmussen, J. (1986). Information processing and human-machine interaction. New York: Elsevier.
Rasmussen, J., Pejtersen, A.M., & Goodstein, L.P. (1994). Cognitive systems engineering. New York: Wiley.
Rouse, W.B. (1991). Design for success: A human-centered approach to designing successful products and systems. New York: Wiley.
Rouse, W.B. (1992). Strategies for innovation: Creating successful products, systems, and organizations. New York: Wiley.
Rouse, W.B. (1993a). Catalysts for change: Concepts and principles for enabling innovation. New York: Wiley.
Rouse, W.B. (1994). Best laid plans. New York: Prentice-Hall.
Rouse, W.B. (1996). Start where you are: Matching your strategy to your marketplace. San Francisco, CA: Jossey-Bass.
Rouse, W.B. (1998). Don't jump to solutions: Thirteen delusions that undermine strategic thinking. San Francisco, CA: Jossey-Bass.
Rouse, W.B. (1999a). Seven challenges: What keeps managers awake at night? Information Knowledge Systems Management, 1(1), 5-14.
Rouse, W.B. (1999b). Connectivity, creativity, and chaos: Challenges of loosely-structured organizations. Information Knowledge Systems Management, 1(2), 117-131.
Rouse, W.B. (1999c). Strategic thinking. Atlanta, GA: Enterprise Support Systems.
Rouse, W.B. (1999d). Do you eat snakes? Competitive Edge!, January-February, 80.
Rouse, W.B. (2001). Essential challenges of strategic management. New York: Wiley.
Rouse, W.B. (2002). Need to know: Information, knowledge and decision making. IEEE Transactions on Systems, Man, and Cybernetics - Part C, 32(4), 282-292.
Rouse, W.B. (2005). Strategic thinking via organizational simulation. In W.B. Rouse & K.R. Boff (Eds.), Organizational simulation: From modeling and simulation to games and entertainment. New York: Wiley.
Rouse, W.B. (2006). Enterprise transformation: Understanding and enabling fundamental change. New York: Wiley.
Rouse, W.B., & Boff, K.R. (Eds.). (2005). Organizational simulation: From modeling and simulation to games and entertainment. New York: Wiley.
Senge, P.M. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
Simon, H.A. (1957). Models of man: Social and rational. New York: Wiley.
Simon, H.A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Slywotsky, A.J. (1996). Value migration: How to think several moves ahead of the competition. Boston, MA: Harvard Business School Press.
Slywotsky, A.J., & Morrison, D.J. (1997). The profit zone: How strategic business design will lead you to tomorrow's profits. New York: Times Books.
Vicente, K.J. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Lawrence Erlbaum Associates.
Weiss, T.B., & Hartle, F. (1997). Reengineering performance management: Breakthroughs in achieving strategy through people. Boca Raton, FL: St. Lucie Press.
Whiting, R. (1999, November 22). Knowledge management: Myths and realities. Information Week, 42-54.
Womack, J.P., & Jones, D.T. (1996). Lean thinking: Banish waste and create wealth in your corporation. New York: Simon & Schuster.
Zack, M.H. (1999, Spring). Developing a knowledge strategy. California Management Review, 41(3), 125-145.
Chapter 11
VALUE, WORK, AND TRANSFORMATION
INTRODUCTION

I returned to Georgia Tech in 2001, following more than 12 years away from academia, founding and leading first Search Technology and then Enterprise Support Systems. During the years of leading these software companies, I stayed connected to the Georgia Tech community, but was not a regular faculty member. I returned as Chair of the School of Industrial and Systems Engineering, at the time the number one ranked program in the U.S. for more than 10 years - now the streak is up to 16 years. I felt at the time, and still do, that this field was entering a transition. My role, I believed, was to contribute by working with the faculty to create a vision for this transition, as well as a strategy and plan for realizing this future. My sense of the vision was grounded in 12 years of working intensely with thousands of senior managers and executives in the leading technology-based enterprises in the world - in both private and public sectors. This vision began to emerge in the early 1990s (Rouse, 1993; Rouse & Howard, 1993) when I realized that the system of interest was the whole enterprise. This realization led to the formation of Enterprise Support Systems in 1995. In the late 1990s, we were immersed in the study of enterprises composed of multiple independent organizations such as the broad healthcare enterprise (Rouse, 2000) and the overall defense enterprise (Rouse et al., 1982; Rouse & Boff, 2001). It was clear that such systems can be characterized as complex systems from a variety of perspectives (Rouse, 2003a). This has led to ongoing initiatives in the study of complex systems supported by the National Science Foundation (Rouse, 2007). As I reflected on these experiences and the future of industrial and systems engineering, it struck me that the enterprise - as a whole - was the
future of the field (Rouse, 2004). The discipline started by focusing on individual machines, then broadened to the shop floor, then to manufacturing and production processes, then to whole factories as systems, and most recently to supply chain design and management. The whole enterprise was the natural next step. Many colleagues at Georgia Tech and other universities liked this vision. Probably more felt it was either too ambitious or too much of a leap from the current emphasis in the discipline - much more applied math and much less engineering. However, senior executives in both private and public sectors were very enthusiastic and supportive of this vision - which I elaborate in considerable detail in this chapter. Much more quickly than I had expected, we secured almost $10 million in support for this vision, the lion’s share coming from Michael Tennenbaum, a Georgia Tech alum and highly successful private equity investor. In January 2004, the Tennenbaum Institute began operations. After four years as School Chair, I stepped down in July of 2005 to lead the Institute, which, as of this writing, includes almost 50 faculty, staff, and graduate students from across Georgia Tech’s six colleges and the Georgia Tech Research Institute. Much of the material in this chapter emerged from a rather straightforward question. A primary objective of the Institute is Ph.D. education. As our numerous Ph.D. students prepared and defended their dissertation proposals, they received the same comment and question from faculty across Georgia Tech’s colleges, “You are obviously working on an important problem. However, what is the theoretical basis of your research and, if successful, what do you expect will be your theoretical contribution?” This was an awfully big question for any one Ph.D. student. This realization led me to prepare two “foundational” papers that were published in 2005, one on enterprises as systems (Rouse, 2005a) and the other on a theory of enterprise transformation (Rouse, 2005b). These two papers were well received by the broad systems community, with the second paper receiving an award for the best paper published in the Journal of Systems Engineering that year. This chapter addresses enterprises as systems with emphasis on the essential phenomenon of fundamental change - transformation - of complex organizational systems. This includes consideration of the nature of transformation, the context within which it occurs, models of the relationships between enterprises and contexts, and an overarching theory
of transformation. The implications of the theory are discussed in terms of research, practice, and academic disciplines.
DEFINING TRANSFORMATION

Enterprise transformation concerns change, not just routine change but fundamental change that substantially alters an organization's relationships with one or more key constituencies, for example, customers, employees, suppliers, and investors (Rouse, 2006a, 2006b; Rouse & Baba, 2006). Transformation can involve new value propositions in terms of products and services, how these offerings are delivered and supported, and/or how the enterprise is organized to provide these offerings. Transformation can also involve old value propositions provided in fundamentally new ways. Transformation can be contrasted with business process improvement. Adoption of the principles of Total Quality Management (Deming, 1986) has resulted in many enterprises focusing on their business processes and devising means to continually improve these processes. The adoption of TQM may be transformative for an enterprise. However, as judged by the definition of transformation provided here, the ongoing use of TQM subsequent to implementation is not transformative. The whole point of TQM is to make continual change a routine undertaking. Business Process Reengineering (Hammer & Champy, 1993) can be much more transformative. Adoption of BPR has led to much fundamental redesign of business processes. This rethinking followed the guidance "don't automate; obliterate." In this way, both the adoption and implementation of BPR tend to be transformative, although success is, by no means, guaranteed. One can then apply the principles of TQM to continually improve the reengineered business processes. Rather than routine, transformation tends to be discontinuous, perhaps even abrupt. Change does not occur continually, yielding slow and steady improvements. Instead, substantial changes occur intermittently, hopefully yielding significantly increased returns to the enterprise. Transformation and routine change converge when, as with BPR and TQM, the transformation involves fundamentally new ways of pursuing routine change. This chapter outlines a theory of enterprise transformation. The theory focuses on why and how transformation happens, as well as ways in which transformation is addressed and pursued in terms of work processes and the architecture of these processes. As later discussion elaborates, the theory argues for the following definition:
Enterprise transformation is driven by experienced and/or anticipated value deficiencies that result in significantly redesigned and/or new work processes as determined by management's decision making abilities, limitations, and inclinations, all in the context of the social networks of management in particular and the enterprise in general. A variety of industry and corporate vignettes are used to illustrate the elements of this theory and definition. I discuss a portfolio of research initiatives in terms of how they can advance the proposed theory, while also enhancing practices of enterprise transformation. Several case stories are used to illustrate the theory.
ROLE OF THEORY

The study and pursuit of enterprise transformation is very much a transdisciplinary endeavor. The types of initiatives discussed later in this chapter involve disciplines ranging from art and architecture, to engineering of all types and economics, as well as management, public policy, and so on. As indicated earlier, the efforts of research teams pursuing these initiatives often begin with intense discussions of the fundamental basis for these pursuits. In essence, these discussions involve two questions. First, what is the theoretical basis for our research initiatives? Second, how do the emerging results of these efforts contribute to and advance theory? Given the range of disciplines just noted, it is important to understand what is meant by "theory" in the context of our investigations of enterprise transformation. Are we like Newton or Einstein, postulating an axiomatic basis for the universe and working to derive "laws" such as F = ma or E = mc²? Or are we more like Darwin, combing the South Seas for evidence of our origins? For the former, we would formulate mathematical models from which we could deduce system behaviors and then compare those behaviors with observations. Eventually, we would devise theorems and proofs regarding behavioral phenomena such as response, stability, observability, and controllability in our "model worlds" (Rouse, 1982, 2003a). For the latter, we would rely on statistical inference to gain an understanding of what affects what, and under what conditions. This choice reflects the complex nature of the world of interest, with a wide
range of players, forces, and factors interacting dynamically to slowly yield long-term changes. This complexity precludes creating a model world of sufficient validity to enable reaching defensible conclusions about the real world. Thus, we must experiment in the real world. The distinction just elaborated contrasts the role of theory in axiomatic and empirical traditions in science and engineering. However, the research initiatives of interest also include participants from art, literature, music, politics, law, and so on. This suggests that we might need to consider the role of theory in the arts and humanities vs. science and engineering (Snow, 1962; Rouse, 2003b), as well as the role of theory in legal, political, and social systems (Diesing, 1962). These elaborations might be overwhelming were it not for the fact that the theory we need is to drive our research rather than explain or motivate change in general, perhaps of artistic or social nature, for instance. The theory should drive our hypotheses, determine the variables of interest, and specify potentially relevant environmental factors. Research results should confirm or reject our hypotheses, support or refute the effects of variables, and assess the relevance of environmental factors. The rules of statistical inference will govern these evaluations. Therefore, we are very much like Darwin combing the enterprise seas to gain understanding of the origins and processes of transformation. The theory presented in this chapter was formulated to help determine where to look and what to look for. Specifically, the theory helps to recognize enterprises of potential interest and the variables of importance to identifying enterprises that have attempted transformation, how they have pursued it, and the consequences of these pursuits. Thus, our theory fits into the empirical tradition. The possibility of an axiomatic theory depends on the relationships and patterns that our empirical studies will unearth.
CONTEXT OF TRANSFORMATION

Enterprise transformation occurs in - and is at least partially driven by - the external context of the economy and markets. As shown in Figure 1, the economy affects markets that, in turn, affect enterprises. Of course, it is not quite as crisply hierarchical as indicated in that the economy can directly affect enterprises, for instance, via regulation and taxation. The key point is that the nature and extent of transformation are context dependent.
Figure 1. The Context of Enterprise Transformation. (The figure depicts a hierarchy: the economy provides laws, regulations, taxes, and incentives to markets, which return trade, jobs, and tax revenues; markets provide demand, competition, and revenues to the enterprise, which supplies products and services; the enterprise provides work assignments and resources to the intraprise, which returns work products.)
For public sector enterprises, the term "constituency" can replace the term "market." The financially oriented metrics shown in Figure 1 also have to be changed to reflect battles won, diseases cured, etc. This chapter occasionally draws parallels between private and public sector enterprises; however, full treatment of these parallels is beyond the scope of this book. There is also an internal context of transformation - the "intraprise" in Figure 1. Work assignments are pursued via work processes and yield work products, incurring costs. Values and culture (Davenport, 1999), reward and recognition systems (Flannery et al., 1996; Weiss & Hartle, 1997), individual and team competencies (Katzenbach & Smith, 1993), and leadership (Kouzes & Posner, 1987; George, 2003) are woven throughout the intraprise. As discussed in Chapter 10, these factors usually have strong impacts on an enterprise's inclinations and abilities to pursue transformation.
MODELING THE ENTERPRISE

Enterprise transformation occurs in the external context of Figure 1. The enterprise, with its internal strengths and weaknesses, and external opportunities and threats, operates within this broader external context. Possibilities for transformation are defined by the relationships between the enterprise and this context. The model of the enterprise as a system shown in Figure 2 provides a basis for understanding these possibilities. Relationships among the elements of the enterprise system are as follows. Inputs affect both work processes and enterprise state. For example, input resources (e.g., people, technology, and investment) affect both how work is done and how well it is done. As another example, input market conditions (e.g., demand and competition) affect quality and pricing of products and services.
Figure 2. A Model of the Enterprise. (The figure depicts inputs - demand, competition, laws, regulations, people, technology, investment, and revenues - flowing into both the enterprise state and its work processes, which together yield outputs: products, services, revenues, earnings, share price, market share, jobs, and innovation.)
The concept of "state" is central to the theory of enterprise transformation. The state of a system is the set of variables and their values that enable assessing where the system is and projecting where it is going. We tend to think that financial statements define the state of an enterprise as a system. However, financial variables are usually insufficient to project the future of an enterprise and a deeper characterization of state is needed (Rouse, 2001). The Balanced Scorecard (Kaplan & Norton, 1996) or, deeper yet, an enterprise-oriented version of the House of Quality (Hauser & Clausing, 1988) are two possibilities. Output is derived from the evolving state of the enterprise. For example, revenues can be determined from the numbers of units of products or services sold and the prices of these offerings. Determining profits requires also knowing the costs of providing offerings. Units sold relate, at least in part, to customer satisfaction as determined by product and service functionality, quality, and price, all relative to competing offerings. The construct of "value" is central to the arguments that follow. The value of the enterprise is traditionally viewed as its market capitalization, that is, share price times number of outstanding shares. Share price is traditionally conceptualized as the net present value of future enterprise free cash flows, that is, revenues minus costs. This view of value is often characterized as shareholder value. From this perspective, state variables such as revenues, costs, quality, and price determine value. These variables are themselves determined by both work processes and architectural relationships among processes. Inputs such as investments of resources affect work processes. Coming full circle, the value of projected outputs influences how input resources are attracted and allocated.
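This traditional conceptualization can be stated compactly. The following Python sketch computes shareholder value as the net present value of projected free cash flows; the cash flows, discount rate, and share count are invented for illustration.

```python
# A minimal sketch of the traditional shareholder-value construct described
# above: share price as the net present value (NPV) of projected free cash
# flows (revenues minus costs). All figures are illustrative assumptions.

def npv(free_cash_flows: list[float], discount_rate: float) -> float:
    """Discount projected free cash flows back to the present."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(free_cash_flows, start=1))

revenues = [120.0, 130.0, 145.0]   # projected revenues, $M per year
costs = [100.0, 105.0, 112.0]      # projected costs, $M per year
fcf = [r - c for r, c in zip(revenues, costs)]

value = npv(fcf, discount_rate=0.10)   # enterprise value, $M
shares_outstanding = 10.0              # millions of shares
print(f"Share price ~ ${value / shares_outstanding:.2f}")
```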
Table 1 summarizes several examples of enterprise domains, processes, states, work, and value. It is important to note that value, for example in terms of unit prices, will depend on the competing offerings from other enterprises. Similarly, the importance of any set of military objectives secured depends on the objectives secured by adversaries. Thus, as noted earlier, knowledge of context is essential to understanding enterprises as systems. The examples in Table 1 serve to illustrate the multi-faceted nature of value. It could be argued that all of the facets shown in the right column are simply intermediate surrogates for shareholder value; hence, shareholder value is the central construct. On the other hand, it is very difficult to argue that shareholder value, as traditionally defined, is the sole driver of enterprise transformation. For many types of enterprises, shareholder value is the ultimate measure of success, but other forces such as markets, technologies, and the economy often drive change. Examples discussed later illustrate these forces. Many fundamental changes address value from the perspective of customers and, to a much lesser extent, suppliers and employees. According to Peter Drucker (2001), "The purpose of a business is to create a customer." Thus, for example, while loss of market share and subsequent decreasing stock market valuation can be viewed as end effects in themselves, they also may be seen as symptoms of declining value of products and services as perceived by customers. Clearly, a broader view of value is needed (Slywotsky, 1996; Slywotsky & Morrison, 1997).
Domain        | Process    | State               | Work               | Value
--------------|------------|---------------------|--------------------|--------------------------
Manufacturing | Production | Work in Process     | Products           | Unit Price Minus Cost
Service       | Delivery   | People in Queues    | Transactions       | Customer Satisfaction
R&D           | Research   | Studies in Progress | Technology Options | Potential of Options
Military      | Operations | Positions of Forces | Objectives Secured | Importance of Objectives

Table 1. Example Domains, Processes, States, Work, and Value
A THEORY OF ENTERPRISE TRANSFORMATION

Succinctly, experienced or expected value deficiencies drive enterprise transformation initiatives. Deficiencies are defined relative to both current enterprise states and expected states. Expectations may be based on extrapolation of past enterprise states. They may also be based on perceived opportunities to pursue expanded markets, new constituencies, technologies, etc. Thus, deficiencies may be perceived for both reactive and proactive reasons. Transformation initiatives involve addressing what work is undertaken by the enterprise and how this work is accomplished. The work of the enterprise ultimately affects the state of the enterprise, which is reflected, in part, in the enterprise's financial statements, Balanced Scorecard assessment, or the equivalent. Other important elements of the enterprise state might include market advantage, brand image, employee and customer satisfaction, and so on. In general, the state of the enterprise does not include variables internal to work processes. This is due to the fact that we only need state estimates sufficient to enable explaining, predicting, and/or controlling future states of the system. To illustrate, the state of an aircraft is usually defined in terms of its location, speed, attitude, etc., but not the current RPM of its fuel pumps, air flow in the cabin, and electron charge of its LED displays. Similarly, the state of an enterprise does not include current locations of all salespeople, ambient temperatures in each of its factories, the water flow in the rest rooms, etc. Were we not able to define state at a higher level of aggregation and abstraction, the complexity of modeling airplanes or enterprises would be intractable.
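As a concrete, if simplified, illustration of state defined at this aggregate level, consider the following Python sketch. The particular variables are illustrative assumptions, chosen in the spirit of the Balanced Scorecard discussion above; the point is that they suffice to assess and project where the enterprise is going without process-internal detail.

```python
# Hedged illustration of aggregate enterprise "state": just enough
# variables to assess where the enterprise is and project where it is
# going, deliberately excluding process-internal details. Variable names
# and values are illustrative, not drawn from any actual enterprise.
from dataclasses import dataclass

@dataclass
class EnterpriseState:
    revenues: float               # $M, trailing year
    costs: float                  # $M, trailing year
    market_share: float           # fraction of served market
    customer_satisfaction: float  # 0..1 survey index
    employee_satisfaction: float  # 0..1 survey index
    brand_image: float            # 0..1 composite index

    @property
    def earnings(self) -> float:
        return self.revenues - self.costs

state = EnterpriseState(145.0, 112.0, 0.22, 0.81, 0.74, 0.66)
print(state.earnings)  # 33.0
```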
Value Deficiencies Drive Transformation

More specifically, enterprise transformation is driven by perceived value deficiencies relative to needs and/or expectations due to:

• Experienced or expected downside losses of value, for example, declining enterprise revenues and/or profits
• Experienced or expected failures to meet projected or promised upside gains of value, for instance, failures to achieve anticipated enterprise growth
• Desires to achieve new levels of value, for example, via exploitation of market and/or technological opportunities

In all of these cases, there are often beliefs that change will enable remediation of such value deficiencies. Change can range from business process improvement to more fundamental enterprise transformation.
Work Processes Enable Transformation

In general, there are three broad ways to approach value deficiencies, all of which involve consideration of the work of the enterprise:

• Improve how work is currently performed, for example, reduce variability
• Perform current work differently, for instance, web-enable customer service
• Perform different work, for example, outsource manufacturing and focus on service

The first choice is basically business process improvement. As discussed earlier, this choice is less likely to be transformative than the other two choices. The second choice often involves operational changes that can be transformative depending on the scope of changes. The third choice is most likely to result in transforming the enterprise. This depends, however, on how resources are redeployed. Liquidation, in itself, is not necessarily transformative. The need to focus on work processes is well recognized (e.g., Hammer & Champy, 1993; Womack & Jones, 1996; Kessler, 2002; Liker, 2004). Reengineered and lean processes have been goals in many transformative initiatives. Indeed, a focus on processes may, at least initially, require transformation of management's thinking about an enterprise. The extent to which this subsequently transforms the enterprise depends on the extent of changes and success in their implementation. Transformation can also involve relationships among processes, not just individual work processes in and of themselves. These relationships are often framed in terms of an "architecture." It is common to express architectures in terms of multiple "views." The operational view is a
description of the activities, operational elements, and information flows required to support enterprise operations. The technical view is a set of rules defining the interactions and interdependencies of system elements to assure compatibility and satisfaction of requirements. The system view describes the physical connections, locations, key nodes, etc., needed to support enterprise functions (Sage & Lynch, 1998). Recent work by Mark Mykityshyn argues for a strategic view to support formulation and communication of intent, goals, and strategies (Mykityshyn, 2007; Mykityshyn & Rouse, 2007). Transformation of work processes inherently must affect the operational view of the architecture. Changes of this view are likely to affect the technical and system views. In contrast, changes of system and/or technical views that do not change operational views do not, by definition, change work processes. Hence, these types of changes may improve processes but do not transform the enterprise. Bailey and Barley (2005) have argued for a renaissance in the study of work. They chronicle the substantial changes in work - from production workers to knowledge workers - while industrial engineering was abandoning the study of work practices and design. In the context of the theory outlined here, engineering will have to re-embrace work studies to play a central role in enterprise systems research (Rouse, 2004). Rasmussen and his colleagues (1986, 1994) have pioneered the use of work domain analysis to characterize human roles, jobs, and tasks in complex systems. Building on this foundation, we can characterize the work of the enterprise in terms of the hierarchy of purpose, objectives, functions, tasks, and activities. Transformation of work can be pursued at all levels of this hierarchy. Changing the tasks and activities of the enterprise, by themselves, relates to business process improvement. In contrast, changing the purpose, objectives, and/or functions of the enterprise is more likely to be transformational. Such changes may, of course, cause tasks and activities to then change. Thus, change at any level in the hierarchy is likely to cause changes at lower levels. It seems reasonable to hypothesize that the higher the level of transformation, the more difficult, costly, time consuming, and risky the changes will be. For instance, changing the purpose of the enterprise is likely to encounter considerable difficulties, particularly if the extent of the change is substantial. In many cases, for example, defense conversion, such change has only succeeded when almost all of the employees were replaced (Rouse, 1996).
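The hierarchy and its propagation property lend themselves to a simple illustration. In the Python sketch below, the level names come from the text, while the propagation and classification rules are merely plausible renderings of the hypotheses above, not validated relationships.

```python
# Speculative sketch of the work hierarchy: purpose, objectives, functions,
# tasks, and activities, with the heuristic that change at any level is
# likely to cause change at all lower levels. Rules are illustrative only.

LEVELS = ["purpose", "objectives", "functions", "tasks", "activities"]

def affected_levels(changed_level: str) -> list[str]:
    """Change at one level likely propagates to all lower levels."""
    return LEVELS[LEVELS.index(changed_level):]

def likely_transformational(changed_level: str) -> bool:
    # Changing tasks/activities alone relates to process improvement;
    # changing purpose, objectives, or functions is more likely
    # transformational (and, per the hypothesis, harder and riskier).
    return changed_level in ("purpose", "objectives", "functions")

print(affected_levels("objectives"))      # objectives .. activities
print(likely_transformational("tasks"))   # False
```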
Ultimately, one could liquidate the enterprise and redeploy its financial and perhaps physical assets in other ventures. However, as noted above, it is difficult to characterize this as transformation. Thus, there is a point at which the change is sufficiently substantial to conclude that the enterprise has been eliminated rather than transformed.
Allocation of Attention and Resources

Input is also central to the theory of enterprise transformation. As implied by Figure 2, input includes both external variables related to customers, competitors, demand, interest rates, and so on, as well as internal variables such as resources and their allocation among work processes. Transformation involves allocating attention and resources so as to:

• Anticipate and adapt to changes of external variables, that is, control the enterprise relative to the "road ahead" rather than the road behind
• Cultivate and allocate resources so as to yield future enterprise states with high projected value with acceptable uncertainties and risks

Thus, the ability of an enterprise to redeploy its human, financial, and physical resources is central to the nature and possibility of transformation.
Management Decision Making

Value deficiencies and work processes define the problem of enterprise transformation - one should recognize and/or anticipate deficiencies and then redesign work processes to remediate these deficiencies. To fully understand transformation, however, we need to understand both the problem and the problem solvers. Thus, a characterization of management decision making is central to our overall theory.

Nadler and Tushman (1989) summarize how managers address change, ranging from tuning, to adaptation, to reorientation, to re-creation. They focus on how management addresses the more complex and difficult changes of reorientation and re-creation in terms of diagnosing the problem, formulating a vision, creating a sense of urgency, linking change to core strategic issues, communicating and leading, and broadening the base of leadership. This all occurs in the context of a mixture of planning
and opportunism that includes redesign of key processes and nurturing of investments as returns emerge over time.

Hollnagel’s (1993) contextual control model of cognition has potential for describing how managers address the problems and decisions outlined by Nadler and Tushman. He outlines how the competence of decision makers and the characteristics of the situation (i.e., number of goals, available plans, mode of execution, and event horizon) combine to determine the chosen mode of control, ranging from scrambled, to opportunistic, to tactical, to strategic. The overarching premise is that strategic control is preferable to scrambled control.

However, as noted in Chapter 10, Mintzberg’s (1975) classic paper, as well as more recent works (Mintzberg, 1998, 1999), serves to shatter the myth of the manager as a coolly analytical strategist, completely focused on optimizing shareholder value using leading-edge methods and tools. Simon (1957, 1969) articulates the concept of “satisficing,” whereby managers find solutions that are “good enough” rather than optimal. Another important factor is the organizational environment, which can be rife with delusions that, as discussed in Chapter 10, can undermine strategic thinking (Rouse, 1998).

Thus, Nadler and Tushman describe the work of managers addressing transformation, and Hollnagel’s model suggests how managers respond to this work. Mintzberg’s and Simon’s insights provide realistic views of real humans doing this work, often in an organization that may be beleaguered by one or more of the organizational delusions that I have elaborated.

This somewhat skeptical view of management decision making ignores several important aspects of human decision making. Managers’ expertise and intuitions (Klein, 1998, 2002) and abilities to respond effectively in a blink (Gladwell, 2005) can be key to success, especially in situations where these abilities enable recognizing what is really happening in an enterprise. The effective use of analogical thinking can also be invaluable, although there is the risk of relying on poor analogies (Gavetti & Rivkin, 2005), which can lead to doing the wrong things very well.

Managers’ roles as leaders, rather than problem solvers and decision makers, are also central to transformation (George, 2003; Kouzes & Posner, 1987). The leadership styles of managers who are well attuned to business process improvement may prove to be poor matches for situations requiring reorientation and re-creation (Rooke & Torbert, 2005). Thus, the nature of the problem solver can have a substantial impact.
Social Networks

Beyond the individual skills and abilities of managers and management teams, the “social networks” both internal and external to the enterprise can have enormous impacts (Burt, 2000; Granovetter, 2005). An important distinction is between strongly and weakly connected networks. Strongly connected networks result in rapid and efficient information and knowledge sharing among members of these networks. Weakly connected networks have “holes,” in many cases between strongly connected subnetworks.

Several researchers (Granovetter, 2005; Mohrman, Tenkasi, & Mohrman, 2003; Tenkasi & Chesmore, 2003) have found that weakly connected networks are better sources of new information and novel ideas. The resulting “big picture” perspective may better inform the nature of transformations pursued. In contrast, strongly connected networks are better at implementing change, at least once sense has been made of the anticipated changes and new meaning has been attached to these changes.

Summarizing, the problem of transformation (i.e., value deficiencies prompting redesign of processes) combines with the nature of the problem solvers addressing transformation, as well as their organizations, to determine whether transformation is addressed, how it is addressed, and how well desired outcomes are achieved. Several theories of human problem solving and decision making, as well as theories of social phenomena, are relevant and useful for elaborating these aspects of the theory of enterprise transformation. The key point is that explanations of any particular instance of transformation will depend on the situation faced by the enterprise, the nature of the particular managers leading the enterprise, and the social structure of the enterprise.
Transformation Processes

How does transformation happen? Transformation processes could be external to the model in Figure 2. However, it would seem that higher levels of transformation expertise would involve incorporation of transformation processes into the work processes in Figure 2. This possibility has been characterized in terms of constructs such as double-loop learning and organizational learning (Argyris & Schön, 1978; Senge, 1990).
Thus, transformation might become integral to normal business practices, perhaps even routine. Of course, this raises the question of the extent to which routine fundamental changes should be considered transformative. It is quite possible that such an evolution of an enterprise would not render changes less fundamental, but would enable much easier implementation of changes.
Summary of Theory

Figure 3 summarizes the theory of transformation outlined in this chapter. Transformation is driven by value deficiencies and involves examining and changing work processes. This examination involves consideration of how changes are likely to affect future states of the enterprise. Potential impacts on enterprise states are assessed in terms of value consequences. Projected consequences can, and should, influence how investments of attention and resources are allocated. The problem solving and decision making abilities of management, as well as the social context, influence how and how well all of this happens.
[Figure 3 is rendered in the original as a block diagram. Its recoverable labels are: Potential Value; Potential Defined by Markets & Technologies; Projected State; Projected Deficiency; Work Processes; Projected Value; and the annotation “Work Process Redesign Driven by Value Deficiencies.”]
Figure 3. Summary of Theory of Enterprise Transformation
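One way to read Figure 3 is as a feedback control loop: project the enterprise’s future state, compare projected value with the potential defined by markets and technologies, and redesign work processes while the projected deficiency remains unacceptable. The sketch below is an assumed, minimal rendering of that loop; the scalar state and the update rule are inventions for illustration.

```python
# Purely illustrative reading of Figure 3 as a feedback loop.
def transformation_loop(projected_value, potential=1.0, acceptable_deficiency=0.1):
    """Redesign work processes while the projected deficiency is too large."""
    while potential - projected_value > acceptable_deficiency:
        # Each redesign closes part of the gap; attention and resources
        # limit the step size (0.5 is an arbitrary assumed gain).
        projected_value += 0.5 * (potential - projected_value)
    return projected_value

print(round(transformation_loop(0.3), 3))  # converges toward the potential
```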
ENDS, MEANS, AND SCOPE OF TRANSFORMATION

There is a wide range of ways to pursue transformation (Rouse, 2005a, 2005b, 2006a). Figure 4 summarizes conclusions drawn from numerous case studies. The ends of transformation can range from greater cost efficiencies, to enhanced market perceptions, to new product and service offerings, to fundamental changes of markets. The means can range from upgrading people’s skills, to redesigning business practices, to significant infusions of technology, to fundamental changes of strategy. The scope of transformation can range from work activities, to business functions, to overall organizations, to the enterprise as a whole.
Figure 4. Transformation Framework
The framework in Figure 4 has provided a useful categorization of a broad range of case studies of enterprise transformation. Considering transformation of markets, Amazon leveraged IT to redefine book buying, while Wal-Mart leveraged IT to redefine the retail industry. In these two instances at least, it can be argued that Amazon and Wal-Mart just grew; they did not transform. Nevertheless, their markets were transformed. The U.S. Department of Defense’s effort to move to capabilities-based acquisition (e.g., buying airlift rather than airplanes) has the potential to transform both DoD and its suppliers.

Illustrations of transformation of offerings include UPS moving from being a package delivery company to a global supply chain management provider, IBM’s transition from manufacturing to services, Motorola moving from battery eliminators to radios to cell phones, and CNN redefining news delivery. Examples of transformation of perceptions include Dell repositioning computer buying, Starbucks repositioning coffee purchases, and Victoria’s Secret repositioning lingerie buying. The many instances of transforming business operations include Lockheed Martin merging three aircraft companies, Newell Rubbermaid resuscitating numerous home products companies, and Interface adopting green business practices.

The costs and risks of transformation increase as the endeavor moves farther from the center in Figure 4. Initiatives focused on the center will typically involve well-known and mature methods and tools from industrial engineering and operations management. In contrast, initiatives towards the perimeter will often require substantial changes of products, services, channels, etc., as well as associated large investments.

It is important to note that successful transformations in the outer bands of Figure 4 are likely to require significant investments in the inner bands also. In general, any level of transformation requires consideration of all subordinate levels. Thus, for example, successfully changing the market’s perceptions of an enterprise’s offerings is likely to also require enhanced operational excellence to underpin the new image being sought. As another illustration, significant changes of strategies often require new processes for decision making, for instance, for R&D investments as discussed in Chapter 9.
Value Deficiencies Drive Transformation

Elaborating earlier value-centered arguments, there are basically four alternative perspectives that tend to drive needs for transformation:
Value Opportunities: The lure of greater success via market and/or technology opportunities prompts transformation initiatives

Value Threats: The danger of anticipated failure due to market and/or technology threats prompts transformation initiatives

Value Competition: Other players’ transformation initiatives prompt recognition that transformation is necessary for continued success

Value Crises: Steadily declining market performance, cash flow problems, etc., prompt recognition that transformation is necessary to survive

The perspectives driven by external opportunities and threats often allow pursuing transformation long before it is forced on management, increasing the chances of having resources to invest in these pursuits, leveraging internal strengths, and mitigating internal weaknesses. In contrast, the perspectives driven by external competitors’ initiatives and internally caused crises typically lead to the need for transformation being recognized much later and, consequently, often being forced on management by corporate parents, equity markets, or other investors. Such reactive perspectives on transformation often lead to failures.
Work Processes Enable Transformation

Transformation initiatives driven by external opportunities and threats tend to adopt strategy-oriented approaches such as:

Markets Targeted, for example, pursuing global markets such as emerging markets, or pursuing vertical markets such as aerospace and defense

Market Channels Employed, for instance, adding web-based sales of products and services such as automobiles, consumer electronics, and computers

Value Proposition, for example, moving from selling unbundled products and services to providing integrated solutions for information technology management
Offerings Provided, for instance, changing the products and services provided, perhaps by private labeling of outsourced products and focusing on support services

On the other hand, transformation initiatives driven by competitors’ initiatives and internal crises tend to adopt operations-oriented approaches including:

Supply Chain Restructuring, for example, simplifying supply chains, negotiating just-in-time relationships, developing collaborative information systems

Outsourcing & Offshoring, for instance, contracting out manufacturing and information technology support; employing low-wage, high-skill labor from other countries

Process Standardization, for example, enterprise-wide standardization of processes for product and process development, R&D, finance, personnel, etc.

Process Reengineering, for instance, identification, design, and deployment of value-driven processes; identification and elimination of non-value-creating activities

Web-Enabled Processes, for example, online, self-support systems for customer relationship management, inventory management, etc.

It is essential to note, however, that no significant transformation initiative can rely solely on either of these sets of approaches. Strategy-oriented initiatives must eventually pay serious attention to operations. Similarly, operations-oriented initiatives must at least validate existing strategies or run the risk of becoming very good at something they should not be doing at all.

The above approaches drive reconsideration of work processes. Processes are replaced or redesigned to align with strategy choices. Operational approaches enhance the effectiveness and efficiency of processes. Of course, the possibilities of changing work processes depend greatly on the internal context of transformation. Leadership is the key, but
rewards and recognition, competencies, and so on also have strong impacts on success. Social networks enormously affect implementation of change.

Work processes can be enhanced (by acceleration, task improvement, and output improvement); streamlined (by elimination of tasks); eliminated (by outsourcing); and invented (by creation of new processes). An example of acceleration is the use of workflow technology to automate information flow between process steps or tasks. An illustration of task improvement is the use of decision aiding technology to improve human performance on a given process task (e.g., enabling consideration of more options). Output improvement might involve, for example, decreasing process variability. Streamlining could involve transferring tasks to others (e.g., transferring customer service queries to other customers who have addressed similar questions). Elimination involves curtailing processes; for example, Amazon created online bookstores, thus eliminating the need for bookstore-related processes in its business. Invention involves creating new processes; for instance, Dell created innovative build-to-order processes.
ILLUSTRATIONS OF TRANSFORMATION

Enterprise transformation is by no means a new phenomenon. The longbow transformed war -- as weapon technology often has -- when the English decimated the French at Agincourt in 1415. The printing press in 1453 led to the “pamphlet wars” and Martin Luther’s complaints in 1517 that seeded the transformation known as the Protestant Reformation. History is laced with many stories like this.

In this section, I briefly review transformative developments and events in the transportation and computer industries, drawing on a longer work on these industries (Rouse, 1996). Attention then shifts to a range of contemporary stories of change in the telecommunications, retail, entertainment, information, and computing industries. These stories illustrate the range of ongoing transformation throughout the global economy.

Transportation

Before the early 1800s, the dominant forms of transportation -- horse, stagecoach, sailing ship, and so on -- had not changed substantially in
centuries. Then, within roughly 100 years, we had steamboats, railroads, automobiles, and aircraft. In the process of moving from stagecoaches and canal boats to jet planes, humankind changed the speed at which it traveled by a factor of 100. Trips that once took days now take minutes.

Robert Fulton is traditionally credited with the invention of the steamboat. He was fortunate, however, to be able to build on a variety of earlier efforts. For example, several steamboats were demonstrated following James Watt’s improvements of the steam engine in 1769. Nevertheless, with Fulton’s demonstration in 1807, the steamboat industry blossomed. By 1819, a steamboat had sailed from Savannah, Georgia, to Russia. The first all-steam crossing, without the use of supporting sails, occurred in 1827. By the mid 1800s, transatlantic steamboat lines were competing.

The first reported self-propelled steam land vehicle was in the late 1600s and, by the late 1700s, a French-built steam car had been demonstrated in Paris. Soon after, an English-built car was demonstrated. John Blenkinsop built the first practical and successful locomotive in Britain in 1812. The beginning of the railway industry is usually reported as starting with George Stephenson, who created the Stockton and Darlington Railway in Britain that opened in September 1825. Soon after, it is argued, the railway era really began with the opening of the Liverpool and Manchester Railway in Britain in September 1830. By the 1850s, the railroad’s effects on the American economy were pervasive. Uniform methods of construction, grading, and bridging emerged. Much of the design of rails, locomotives, coaches, and freight cars was close to what we have today, at least in terms of appearance.

Frenchman Nicolas-Joseph Cugnot designed the first true automobile in 1769. This automobile was a steam-powered tricycle capable of 2.25 mph for 20 minutes. Germans Carl Benz and Gottlieb Daimler are credited with the first gasoline-engine automobile in 1885. In the U.S., George Selden filed a patent for the automobile in 1879. Charles and Frank Duryea created an American gas-powered automobile in 1892-93. By 1898, there were 50 automobile companies. Between 1904 and 1908, 241 automobile companies went into business. Interestingly, steam propulsion retained a dominant position for quite some time - at the turn of the century, 40% of U.S. automobiles were powered by steam, 38% by electricity, and 22% by gasoline.

Serious speculation about flight occupied such thinkers as Roger Bacon in the 13th century and Leonardo da Vinci in the 15th century. After a wealth of attempts over several centuries, Orville Wright, in 1903,
flew for 12 seconds and landed without damage. In 1914, the Census Bureau listed 16 firms as aircraft manufacturers with a combined total output for the year of 49 planes. By 1918, the American aircraft industry was delivering 14,000 aircraft with 175,000 employees. However, after the signing of the World War I armistice, production dropped to 263 in 1922.

Commercial aviation eventually diminished the dominance of military customers in the aircraft market. Until the late 1950s, over half of the commercial aircraft in the world were built by Douglas Aircraft, which had continually built upon the success of the DC-3. However, Boeing quickly moved into jet aircraft, mostly due to military contracts. Using the military KC-135 as a starting point, Boeing introduced the 707 commercial transport in 1958. Douglas was much slower to shift paradigms. Boeing’s “bet” on jet aircraft provided the basis for its strong position in commercial aviation today.

The patterns of transformation just outlined for steamboats, trains, automobiles, and airplanes are closely linked to propulsion - steam, internal combustion, and jet engines. Combined with inventions in mechanical systems, aeronautics, and manufacturing - including many, many inventions that never gained broad acceptance - these patterns moved us faster and higher, both literally and economically. In the process, many enterprises were formed, and a few transformed successfully to create the companies we know today.
Computing

The evolution of computer technology and the computer industry took hundreds of years. Frenchman Blaise Pascal built the first mechanical adding machine more than 300 years ago. German Gottfried Wilhelm Leibniz, after seeing Pascal’s machine, created the Stepped Reckoner in 1673. Charles Babbage conceived the first digital computer in the 1830s. He envisioned this computer -- the Analytical Engine -- as powered by steam which, as noted in the last section, was “high tech” in the 1830s.

Babbage got his idea for a digital computer from Frenchman Joseph-Marie Jacquard’s punch-card programmed looms, developed in the early 1800s. Jacquard’s punched card method for controlling looms also influenced American Herman Hollerith, who invented a card-based system for tabulating the results of the 1890 census. Hollerith’s venture led to what would later become IBM.
During the latter half of the 19th century and the first half of the 20th century, IBM, NCR, Burroughs, Remington Rand, and other companies became dominant in the business equipment industry with tabulators (IBM), cash registers (NCR), calculators (Burroughs), and typewriters (Remington). The dominance of these companies in their respective domains set the stage for their becoming primary players in the computer market.

The emergence of digital computing and the process of maturation of the computer industry started with John V. Atanasoff of Iowa State, who built a prototype of an electromechanical digital computer in 1939. By 1946, John W. Mauchly and J. Presper Eckert at the University of Pennsylvania had completed the Electronic Numerical Integrator and Calculator, ENIAC, which was the first all-purpose, all-electronic digital computer and led to Remington-Rand’s UNIVAC. In the same period, John von Neumann’s concepts of stored-program computing served as the model for many digital computers.

Remington-Rand had some early success, including selling UNIVAC machines to the Census Bureau, which displaced IBM tabulators. However, IBM eventually beat out Remington-Rand because IBM recognized the tremendous potential of computers and how they had to be marketed. IBM recognized what was likely to happen in the business machines industry and responded by developing a customer-oriented strategy that helped their customers deal successfully with the trends that were affecting them.

In the late 1950s and early 1960s, a whole new segment of the computer market emerged - interactive rather than centralized computing. IBM dismissed and then ignored this segment. They apparently could not imagine that customers would want to do their own computing rather than have IBM support and possibly staff a centralized computing function. Later IBM tried to catch up, but did so poorly. By the late 1960s, Digital Equipment Corporation (DEC) dominated interactive computing with their minicomputers.

By the late 1970s, Apple was putting the finishing touches on the first microcomputer that would spark a new industry. DEC, in a classic business oversight, failed to take interactive computing to the next logical step of personal computing. Apple, exploiting pioneering inventions at Xerox, created the Macintosh in the mid 1980s. The Mac became the industry standard, at least in the sense that its features and benefits were adopted throughout the personal computer industry. Microsoft and Intel were the primary beneficiaries of this innovation.
Microsoft prospered when IBM chose them to create the operating system software -- DOS -- for IBM’s personal computer. DOS soon became the industry standard, except for Apple enthusiasts. Microsoft Windows later replaced DOS as the standard. With the introduction of Windows, Microsoft was able to create software applications for word processing, spreadsheets, presentations, and databases and now controls these markets.

More recently, of course, the Internet has dominated attention. Microsoft continues to battle with a range of competitors, hoping to transform a variety of inventions into dominant market innovations. The rules of the game have changed substantially as this industry has moved from mainframe to mini to micro and now to the Internet. Most inventions will not become innovations, but certainly a few will.

The patterns of transformation in computing revolve around power and speed. More and more computing operations, faster and faster, differentiate the mainframe, mini, and micro eras. Increasing user control has also been an element of these patterns, although this has resulted in increasing numbers of layers between users and computation. Further, it has been argued that pervasive networking is only possible with increased centralized management of standards, protocols, etc. Thus, the latest pattern of transformation may inherently borrow from old patterns.
Contemporary Illustrations

We have just skimmed through two centuries of innovations in transportation and computing. This chronicle noted the formation (and demise) of thousands of enterprises as these industries transformed. Now, let’s consider what has happened in the opening few years of this century. A summary of these vignettes is provided in Table 2.

The telecommunications industry has recently provided several compelling stories of transformation, particularly failures to transform. Perhaps the biggest story is AT&T. The company underestimated the opportunities in wireless and then overpaid for McCaw Cellular to catch up and later spun the cellular business off. They attempted to get into computers via NCR and then spun it off. They overpaid for TCI and MediaOne and then spun them off. They also spun off Lucent. They came late to the Internet data market. All of this created a debt crisis. With reduced market cap, AT&T was acquired by SBC, a former Baby Bell (Economist, 2005, Feb 5).
AT&T
Transformation: Came late to wireless, computers, and cellular, paying too much to enter.
Outcome: Facing a debt crisis and reduced market cap, AT&T was acquired by SBC.

Clear Channel
Transformation: Executed a long series of acquisitions, accelerated by the 1996 deregulation.
Outcome: Cost leadership, combined with bundled selling, resulted in their revenues growing over 50%.

IBM
Transformation: Transformed from mainframe maker to robust provider of integrated hardware, networking, and software solutions.
Outcome: Earnings and share price rebounded as the services business flourished.

Kellogg
Transformation: Remained committed to its brand strategy but focused on channel needs for consumers’ changing concept of breakfast.
Outcome: Acquired Keebler, resulting in revenue growth of almost 50% and operating income nearly doubling in a 5-year period.

Lucent
Transformation: Adopting a “high tech” image, abandoned the Baby Bells, overdid mergers, delayed developments of optical systems, and inflated sales.
Outcome: When the Internet bubble burst and customers could not repay loans, a $250 billion market cap in 1999 shrank to $17 billion by 2005.

Newell Rubbermaid
Transformation: With a track record of successfully acquiring over 60 companies, acquisition of Rubbermaid seemed like a natural match.
Outcome: The acquisition dragged Newell down, losing 50% of the value of the investment. The brand strategy of Rubbermaid did not match Newell.

Nokia
Transformation: New cell phone designs introduced to combat loss of market share.
Outcome: Market share rebounded, but likely temporarily due to aggressive competitors.

Procter & Gamble
Transformation: Acquisition of Gillette, Clairol, and Wella while selling off numerous brands.
Outcome: Uncertain, as the “consumer goods industry is caught between slowing sales, rising costs, and waning pricing power.”

Siemens
Transformation: Focused on cost reduction, innovation, growth, and culture change, in part by convincing people that there was a crisis.
Outcome: Revenue almost doubled, net income more than tripled, and revenue per employee almost doubled over 12 years.

Thomson
Transformation: Transformed itself from a traditional conglomerate into a focused provider of integrated electronic information to specialty markets.
Outcome: Sold more than 60 companies and 130 newspapers, and then acquired 200 businesses, becoming a leader in electronic databases.

Table 2. Contemporary Illustrations of Transformation
Lucent, AT&T’s progeny, has not fared much better. Adopting a “high tech” image when spun off in 2000, Lucent abandoned the traditional Baby Bell customers for Internet startups who bought on credit. Lucent overdid mergers and overpaid. They delayed developments of optical systems. Of greatest impact, they inflated sales to meet market expectations. When the Internet bubble burst and customers could not repay loans, Lucent’s $250 billion market cap in 1999 quickly shrank by more than 90% (Lowenstein, 2005).

While AT&T and Lucent were stumbling, Nokia was a star of the telecommunications industry. However, by 2003, Nokia was losing market share (35% to 29%) due to stodgy designs of cell phones, unwillingness to adapt to cellular providers, and internal preoccupation with reorganization. They reacted with new phone designs (e.g., cameras, games, and a velvet
cell phone!) and market share rebounded. Nevertheless, the company is being pushed down market to maintain growth in an increasingly competitive market. One expert projects they will end up with something like a 22% market share, with Asian competitors the main beneficiaries (Economist, 2005, Feb 12).

The retail industry has been highly competitive for several decades (Garcia, 2006). Procter & Gamble has been one of the stalwarts of this industry. They have maintained their competitive position by boosting innovation, ditching losing brands, buying winning ones, and stripping away bureaucracy. However, the consumer goods industry has found itself caught between slowing sales, rising costs, and waning pricing power. The big box retailers now have the pricing power, both via private labels and “trade spending,” that is, requiring suppliers to pay for store promotions, displays, and shelf space. The acquisition of Gillette for $50B followed P&G’s acquisitions of Clairol for $5B and Wella for $7B. At the same time, P&G sold off numerous brands. China is a rapidly growing P&G market. Nevertheless, whether these changes can sustain P&G’s growth remains to be seen (Economist, 2005, Feb 5).

Despite fierce competition in the breakfast foods business - including a redefinition of breakfast by time-pressured consumers - Kellogg remained committed to its broad strategy that involved excelling at new product development, broad distribution, and a culture skilled at executing business plans. To sustain this strategy, Kellogg needed a distribution channel for delivering fresh snack-like breakfast foods. They acquired Keebler, which also had a brand strategy. Revenue rose by 43% between 1999 and 2003 and operating income nearly doubled (Harding & Rovit, 2004).

Newell had a 30-year track record of successfully acquiring over 60 companies in the household products industry. Their success was recognized by the industry’s adoption of the concept of “Newellizing” acquisitions. Rubbermaid seemed like a natural match - household products through the same sales channels. However, the acquisition dragged Newell down - losing 50% of the value of the investment. Newell’s focus on efficiency and low prices did not match Rubbermaid’s brand focus and premium prices (Harding & Rovit, 2004).

Clear Channel and Thomson illustrate transformation in the entertainment and information sectors, respectively. Clear Channel Communications executed a long series of acquisitions of radio stations, accelerated by the 1996 deregulation, rising to lead the industry with 1200 stations. They focused on cost leadership involving packaged playlists, central distribution of formats, and shared personnel. They sold bundled
advertising and promoted live concerts. Between 1995 and 2003, their revenues grew 55% annually and shareholder return averaged 28% annually (Harding & Rovit, 2004).

From 1997 to 2002, Thomson transformed itself from a traditional conglomerate that included newspapers, travel services, and professional publications into a focused provider of integrated electronic information to specialty markets. They sold more than 60 companies and 130 newspapers. With the proceeds of $6B, they acquired 200 businesses, becoming a leader in electronic databases and improving operating margins significantly (Harding & Rovit, 2004).

Large high-technology companies also have to address the challenges of transformation. Following the reunification of Germany, prices in Siemens’ markets dropped dramatically, by as much as 50% in 3 years in some businesses. Siemens reacted by focusing on cost reduction, innovation as reflected by patents, growth, and culture change, prompted by the CEO convincing people that there was a crisis. They adopted many of General Electric’s ideas, for example, staying only in businesses where they could be No. 1 or No. 2, GE’s people development ideas, and GE’s benchmarking practices. Siemens focused on financial markets, alliances, and the internal political and persuasion process. From 1992 to 2004, revenue almost doubled, net income more than tripled, and revenue per employee almost doubled (Stewart & O’Brien, 2005).

By 2002, under the leadership of Louis Gerstner, IBM had been pulled back from the brink, transforming from a mainframe maker into a robust provider of integrated hardware, networking, and software solutions. The new CEO, Samuel Palmisano, continued the company’s transformation via a bottom-up reinvention of IBM’s venerable values. The transformed values are: 1) dedication to every client’s success; 2) innovation that matters, for our company and for the world; and 3) trust and personal responsibility for all relationships. Processes and practices are now being aligned - or realigned - with these values (Hemp & Stewart, 2004).

Summarizing these ten vignettes in terms of the theory of enterprise transformation, we can reasonably assert that:

Increasing shareholder value by mergers and acquisitions sometimes succeeds (Clear Channel, Kellogg, and Thomson), sometimes fails (AT&T, Lucent, and Newell), and takes time to evaluate (Procter & Gamble).
Transformation of the enterprise’s value proposition to customers via new product and service offerings is illustrated by the success of IBM, Kellogg, and Thomson and, to a lesser extent, by Nokia.

Improving productivity via extensive process improvements, as illustrated by IBM and Siemens, can transform an enterprise’s value provided to customers, suppliers, and employees and increase shareholder value.

Thus, experienced and/or anticipated value deficiencies drove these transformation initiatives. Process changes were accomplished either organically or via mergers and acquisitions. Success was mixed, as was the case for the many examples from earlier times.
Conclusions

The need to transform - to change in fundamental ways - has long been a central element of the economy and society (Schumpeter, 1942; Jensen, 2000; Collins, 2001; Collins & Porras, 1994). Many enterprises are started; some flourish. Those that succeed eventually must face the challenges of change; some succeed in transforming, as illustrated by these vignettes. Most enterprises fail to transform. The study of enterprise transformation focuses on understanding the challenges of change and determining what practices help most to address change and successfully transform.
IMPLICATIONS FOR RESEARCH

An enterprise can be described in terms of how the enterprise currently creates the value it is achieving - how it translates inputs to states to work to value. Research in enterprise transformation should, therefore, address one or more of these constructs. My conclusion is that such research should include six thrust areas:

Transformation Methods and Tools

Emerging Enterprise Technologies

Organizational Simulation
Investment Valuation

Organizational Culture and Change

Best Practices Research
Transformation Methods and Tools

To better understand, design, and manage enterprises, we need methods and tools. Fortunately, there is a wide variety of systems-oriented concepts, principles, methods, and tools with which one can pursue the essential challenges of Chapter 10 and, if necessary, transform the enterprise as described in this chapter (Sage & Rouse, 1999; Rouse, 2001). In fact, the wealth of alternatives poses the problem of understanding how all these approaches fit together or, at least, where one or the other applies.

We are particularly concerned with formal modeling of enterprises. One needs to understand both the “as is” and “to be” enterprise and the nature of the transformations for getting from one to the other. This is difficult because we need to determine how alternative representations interact with the range of mathematical and computational machinery that can be brought to bear, while also being able to incorporate essential economic, behavioral, and social phenomena.

Beyond the difficulty of formally representing the “as is” and “to be” enterprise, it can be difficult to simply characterize the “as is” enterprise. People within the enterprise often have remarkably little perspective on the business processes to which their activities contribute. They may also be defensive and apprehensive regarding possible changes. This can be particularly difficult when activities are part of the “overhead” that does not clearly contribute to the value streams of the enterprise. Such activities are likely candidates for being outsourced or eliminated. Even when these activities are required for regulatory reasons, for example, people can be concerned that their jobs are at risk.

At this point, we are working with methods drawn from management, engineering, computing, and architecture (Ashuri et al., 2007; Caverlee et al., 2007; Lewis et al., 2007; McGinnis, 2007; Mykityshyn & Rouse, 2007; Stephenson & Sage, 2007). These four disciplines pursue formal methods for quite different reasons - management to represent business practices and processes, engineering to represent the physical flows in the system, computing to represent the information flows, and architecture to represent human flows within and among physical spaces. Enterprises, of
course, include all these types of flows. We need methods that enable representation and manipulation of these different flows across a set of computationally compatible tools.

These models, methods, and tools are likely to provide the basis for aiding leaders of enterprises in that they will enable making sense of and portraying what is happening in the enterprise, as well as developing and evaluating potential courses of action (Mykityshyn, 2007). Mark Mykityshyn’s research is leading to defining the strategic level of enterprise architectures - the level that defines the goals and strategies for the lower levels of the overall architecture (Mykityshyn & Rouse, 2007).

One of the difficulties in employing these and other methods and tools concerns the ability to estimate needed parameters, preferably as probability distributions rather than point estimates. Despite the wealth of data typically collected by many enterprises, it usually requires substantial effort to translate these data into the information and knowledge needed by these methods and tools.
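One common way - assumed here purely for illustration - to obtain distributions rather than point estimates is bootstrap resampling of whatever raw data the enterprise does collect. The data values below are fabricated.

```python
import random

random.seed(3)

# Hypothetical raw observations of a model parameter (e.g., monthly process cost).
observed_costs = [104, 98, 121, 87, 133, 110, 95, 142, 101, 117]

def bootstrap_means(data, n_resamples=5000):
    """Resample with replacement to approximate the parameter's distribution."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return sorted(means)

means = bootstrap_means(observed_costs)
low = means[int(0.05 * len(means))]
high = means[int(0.95 * len(means))]
print(f"point estimate: {sum(observed_costs) / len(observed_costs):.1f}")
print(f"90% interval:   [{low:.1f}, {high:.1f}]")
```

This leads to consideration of information technologies.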
Emerging Enterprise Technologies

Current and emerging enterprise technologies are both driving and enabling enterprise transformation. Computer and communications technologies are central. Information technology (IT) is a broad description. Most people see IT as the key to transformation. Yet, as noted earlier, simply “installing” these technologies does not fully address enterprise challenges.

The central concern in this research area is not with what technologies will emerge, but instead with the implications of their adoption if they do emerge. In particular, the focus is on organizational implications and strategy/policy issues associated with these implications (Rouse & Acevedo, 2004). Thus, the issue is not whether it will happen, but the implications if it does happen.

A good example of an emerging technology or capability is knowledge management, including its key enabler -- collaboration technology (Rouse, 2001, 2002; Rouse & Acevedo, 2004). As discussed in Chapter 7, fully leveraging this technology/capability requires a deep understanding of how knowledge is - and could be - generated and shared in an enterprise, as well as its impact on important metrics of enterprise success. The issue is not so much about how the technology functions as it is about how work
currently gets done and could be done with these capabilities (Cook & Brown, 1999; Brown & Duguid, 2000).

Another good illustration is wireless communications and mobile computing. Rahul Basole has addressed the implications for enterprises that entertain these technologies (Basole, 2006). In particular, the ability to access all corporate information and knowledge assets at any time and any place can enhance work processes for marketing, sales, customer support, and other functions. Another implication may be greater reliance on virtual organizations and less use of traditional workspaces. However, as discussed in Chapter 9, success depends on organizational readiness.
Organizational Simulation

When enterprises entertain major, transformational changes, they typically perform a wealth of feasibility and financial analyses. At some point, they may determine that “it’s worth it.” However, there still may be reluctance among key stakeholders. The problem is likely to be that economic analyses do not usually address behavioral and social concerns. Spreadsheet models and colorful graphic presentations seldom provide a sense of what the changes will feel like.

As discussed in Chapter 10, organizational simulation can address these concerns (Rouse & Boff, 2005). Immersive simulations can enable decision makers and other stakeholders to experience the future, act in that future, and have the future react to them. If this is a positive experience, then decision makers can proceed with increased confidence. On the other hand, if problems are encountered, the future can be redesigned before the check is written.

This research draws upon traditional modeling and simulation as well as artificial intelligence, gaming, and entertainment. Our overriding premise is that people are more likely to embrace those futures that they can experience beforehand. Embracing these futures will, in turn, enable enterprise transformation by mitigating the human and organizational concerns that often undermine transformation initiatives.

Simulation of organizational futures can be particularly useful if it allows for unintended consequences to emerge. This is quite possible when a range of stakeholders “play the game” and react differently than expected to the environment and to each other. In some cases, participants may subvert the game, that is, work around the rules, and prompt
discoveries and insights that possibly lead to innovations in strategy, policy, and strategic thinking in general (Rouse & Boff, 2005).

Organizational simulations can offer interactive glimpses of the future, enable the design of operational procedures in parallel with system design, and provide rich, ready-made training environments once systems are deployed. This may result in enterprise transformation being an adventure rather than a dreaded threat. In particular, transformation can perhaps be an adventure that the key stakeholders design and redesign as they experience it.
Investment Valuation

Methods and tools provide the means to design the transformed enterprise, enabled by emerging enterprise technologies, and experienced via organizational simulation. This still leaves the question, “What’s it worth?” How should we attach value to the investments needed to transform the enterprise? Of course, as discussed in Chapter 9, this question has been with us for a long time.

Traditionally, we would just project revenues, costs, and profits (or savings) and discount these time series to get a Net Present Value (NPV). Unfortunately, most of the investment for transformation occurs in the near term while much of the return from transformation occurs in the long term. Aggressive discount rates - adopted because of the large uncertainties - will render long-term payoffs near worthless. This phenomenon also impacts investments in R&D, as well as investments in education, the environment, and so on. The value of any long-term initiative with upstream investments, downstream returns, and large uncertainties will suffer from discounted cash flow analysis.

This raises a question of the fundamental purpose of such investments. Should an enterprise invest in transformation solely to fix today’s problems? Similarly, should the R&D budget of an enterprise be justified solely on the basis of the likely contributions to today’s product and service offerings? The answer clearly is, “No.” As argued in Chapter 9, investments in R&D should provide the enterprise options for meeting contingent downstream needs, many of which are highly uncertain in nature and impact (Rouse & Boff, 2001, 2003, 2004).

In this area, Michael Pennock is researching alternative option-pricing models for valuation of long-term, multi-stage investments such as R&D and enterprise transformation (Pennock, 2007; Pennock, Rouse & Kollar,
2007). We have conducted numerous case studies and, as a result, influenced many investment decisions. It is clear that options for the future are exactly what most enterprises need. These models and methods enable them to determine what these options are worth.

I indicated in Chapter 9 that this research has provided several insights, at least one of which is fundamental. Using NPV for valuation of long-term, highly uncertain transformation initiatives tends to emphasize preservation of investment capital. In contrast, using Net Option Value (NOV) tends to maximize the value gained by the enterprise. In other words, using NPV minimizes risks to the transformation budget, while using NOV maximizes the benefits of transformation. We have found that this contrast is affected by the magnitude, timing, and uncertainties associated with these investments (Bodner & Rouse, 2007).
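A toy two-stage example can make the NPV-versus-NOV contrast tangible. This is not the option-pricing model used in the research cited above; the staged-decision structure, cash flows, probabilities, and discount rate are all assumptions invented for the sketch.

```python
def pv(amount, year, rate):
    """Present value of a single cash flow."""
    return amount / (1 + rate) ** year

rate = 0.25              # aggressive discount rate reflecting high uncertainty
pilot_cost = 10          # year 0: small transformation pilot
scale_cost = 100         # year 1: full rollout
p_success = 0.4          # pilot reveals whether rollout will pay off
payoff_good = 400        # year 3 return if conditions prove favorable
payoff_bad = 40          # year 3 return otherwise

# Committed-up-front NPV: roll out regardless of what the pilot reveals.
expected_payoff = p_success * payoff_good + (1 - p_success) * payoff_bad
npv = -pilot_cost - pv(scale_cost, 1, rate) + pv(expected_payoff, 3, rate)

# Option view: the pilot buys the right, not the obligation, to roll out;
# the rollout investment is made only if the pilot succeeds.
nov = -pilot_cost + p_success * (-pv(scale_cost, 1, rate) + pv(payoff_good, 3, rate))

print(f"NPV (committed):  {npv:6.1f}")   # roughly 4
print(f"NOV (contingent): {nov:6.1f}")   # roughly 40
```

In these invented numbers, committing up front is barely worthwhile, while valuing the pilot as an option makes the same initiative worth an order of magnitude more - the qualitative contrast described above.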
Organizational Culture and Change

The above initiatives imply substantial changes of processes, practices, technologies, and measures of success. These changes must be pursued in the context of the organizational culture of the enterprise in question. Often this culture is not compatible with what will be needed to successfully transform. A lack of recognition of this mismatch is a fundamental organizational delusion that may enfeeble change initiatives (Rouse, 1998).

Our concern is not with organizational culture and change in general. The topic is far too immense. Instead, we are interested in culture and change as they relate to the other initiatives. For instance, we have pursued the implications of deploying new enterprise technologies, for example, collaboration suites, in terms of interactions with cultural norms of knowledge sharing and online work (Rouse & Acevedo, 2004).

One particularly interesting phenomenon concerns enterprises’ decisions to pursue transformation rather than, for example, incremental process improvements. A related phenomenon is the emerging recognition that transformational change is at hand despite never having explicitly decided to pursue such a fundamental initiative. Dominie Garcia’s research has focused on the antecedents of transformation decisions, including emergent decisions (Garcia, 2007). Specifically, she has been concerned with what drives and triggers such decisions and recognitions, as well as the process factors related to successful transformation. One of her key findings is the importance of leadership involvement in the process of transformation.
Ted Hanawalt’s research is focused on decision making in the automobile industry. We have studied the ten best and ten worst cars of the past 50 years, with emphasis on how the decisions were made to offer these cars to the market. The organizational decision processes employed, or in some cases avoided, strongly affected both successes and failures, in many ways much more so than the design and engineering of the cars.
Best Practices Research

As noted earlier in this chapter and in Chapter 10, there is a wealth of practices available with which to address the essential challenges as well as enterprise transformation (Collins & Porras, 1994; Collins, 2001; Rouse, 2001). An important research issue concerns the extent to which any of these practices can be declared “best” practices, at least for specific types of situations and enterprises with particular characteristics. Quite frankly, many published practices tend to be reasonable and good ideas that are reported to have worked someplace at least once. (Of course, it can be quite reasonable to proceed with a good, but less than best, practice, perhaps because it is important to act immediately. Nevertheless, it is important to understand how well practices work and why some practices might work much better than others.)

To declare a practice as “best,” we need to measure the benefits of employing the practice relative to alternative practices. Addressing the challenges, as well as transformational approaches to the challenges, tends to take quite a bit of time, and measuring the benefits takes even longer. As the medical profession knows well, it is very difficult to conduct studies over many years and maintain support and commitment, as well as control.

This reality has led us to focus on what enterprises have done in the past and the consequences of these initiatives. We are using a database of the yearly and quarterly reports of all public companies worldwide, as well as major analysts’ projections and assessments of these companies’ performance, over the past 20-30 years. We are sleuthing what transformation initiatives these companies undertook and the subsequent benefits of these undertakings, including the time frame within which benefits typically emerge.

As we have discussed this research with various senior executives, in both private and public sectors, several have asked that we not limit ourselves to just best practices. They have indicated keen interest in worst practices. Specifically, they would like to know what types of
’
transformation practices have never worked in any measurable manner. They expect that this will eliminate many candidate approaches. This request points to a need to understand the whole distribution of practices, not just the tails of best and worst. Such understanding will enable insights into the internal and external factors that influence the success of practices, for instance, the role of leadership, the nature of the industry, and the state of the economy. Thus, the notion of best practices is clearly a multidimensional construct rather than a one-size-fits-all “silver bullet.”

This research on best - and worst - practices is telling us what really has worked, including the conditions under which it worked. Also very important is that this research is providing deep grounding in current practices and experiences implementing these practices. This grounding provides a foundation for looking further out. We are particularly interested in the 3-5 year time horizon, to be able to understand the opportunities, threats, problems, and issues likely to affect enterprises just beyond their current planning horizon.
Summary

Table 3 summarizes the relationships among initiatives in these six thrust areas with the state, work, value, and input constructs defined above. This tabulation can enable two important facets of enterprise systems research. First, it can provide a theoretical grounding to research initiatives. Of course, the researchers pursuing these initiatives need to elaborate these theoretical underpinnings much more specifically than presented here. This need will surely result in elaboration and refinement of the basic theory outlined here.

The second facet concerns the value of the outcomes of these research initiatives. One certainly can expect these outcomes to directly benefit the stakeholders in these initiatives. Beyond these direct benefits, this research should advance fundamental understanding of the nature of enterprises, how they can and should address change, and the factors that affect success and failure.

Providing such advances will require paying careful attention to the constructs of state, work, value, and input. It is unlikely that these constructs will soon be codifiable into an axiomatic set of equations - the phenomena of interest are much too complex. Nevertheless, one can gain deeper understanding of the nature of these constructs, how they can and should be changed, and how best to accomplish such changes. Eventually, this may support formulation of a
valid model world with axioms, theorems, proofs, etc. Along the way, the fundamental knowledge gained should help enterprises to recognize needs for fundamental change and address such challenges with success.
(The columns of the original table - Enterprise Input, Work Processes, Enterprise State, and Enterprise Output - indicate which constructs each thrust area spans.)

Transformation Methods and Tools: How to represent, manipulate, optimize, and portray input, work, state, output, and value for the past, present, and future of the enterprise.

Emerging Enterprise Technologies: How emerging enterprise technologies are likely to impact work, state, and output, and the strategy/policy implications of these impacts.

Organizational Simulation: How work processes affect state and the experience of the state of an enterprise.

Investment Valuation: How investments of financial resources affect value generated, e.g., options created.

Organizational Culture and Change: How value priorities drive work processes, affect organizational culture and change, and thereby influence state and output.

Best Practices Research: How past and current approaches to and changes of input, work, state, and output have impacted subsequent enterprise value creation, for better or worse.
Table 3. Relationships of Initiatives to Enterprise Model
EXPERIENCES FROM PRACTICE

The theory of transformation can also be of use for making sense of experiences pursuing fundamental change of complex organizational systems. My first experience, as other than just a bystander, was with Search Technology. I also participated in two different periods of transformation at Georgia Institute of Technology. This section uses these two experiences to illustrate the theory, as well as drawing on several experiences supporting companies and other types of enterprises to address fundamental change.
Search Technology

Russ Hunt and I formed the company in 1980 to develop and sell simulation-based training and intelligent decision support. Our initial customers were in the marine, utility, and defense industries. By the mid 1980s, however, our business was dominated by defense contracts for the types of R&D discussed in Chapters 2 and 4-7.

I recall an article in 1987 in the Wall Street Journal portending a steep decline in defense spending. (Later, with the fall of the Berlin Wall in 1989, the decline accelerated.) This article prompted me to begin repositioning the company. Our first attempt focused on manufacturing applications of our technology. As discussed in Chapter 6, this move failed to generate significant sales.

In late 1988, serendipity intervened. As also discussed in Chapter 6, a call from South Africa initiated our move into new product planning and eventually business planning. As chronicled in Chapters 8-10, this move resulted in developing a suite of software tools (PPA, BPA, SAA, and TIA) that we sold with consulting services. However, this product line was developed and sold by Enterprise Support Systems (ESS), not Search Technology.

Russ and I intended for software product sales to make up for declining defense sales. Unfortunately, this did not happen as quickly as needed. Requirements for investment capital to grow ESS prompted its spin off in 1995. A very significant impediment, although not the only hindrance, was the attitude of Search Technology’s technical staff towards the ESS product line. After working on high-end workstations for our defense work and only having to deliver research prototypes, most of the staff disdained programming for Microsoft Windows on a personal computer. They also
were not interested in preparing software documentation and user manuals, or in dealing with customer support calls. One staff member said, "I want the company to have the revenue and profits from a line of products, but I don't want to be involved with developing and supporting the product line. I want to be able to play with leading-edge computers and software rather than worry about meeting a customer's needs." When ESS spun off, only one software engineer from Search Technology, Chuck Howard, joined the new company.

Search Technology attempted to move into the software services business, with some notable successes. However, the company shrank over time to roughly 20% of its peak size and was eventually sold to a small group interested in a particular software application the company had developed. Thus, three attempts at transformation failed - manufacturing software, software tools, and software services. ESS was somewhat successful - modest, profitable growth - but never achieved the sales levels envisioned when it spun off.

Considering this case story in terms of the theory of enterprise transformation, it seems reasonable to argue that management was well aware of the impending value deficiency. The first move was, as elaborated in Chapter 6, focused on a poor choice of markets, despite strong advice from experts. The second move focused on a growing market and resulted in new work processes to deliver value to this market. However, the social network in the company did not like the work or the value that resulted. The third move was more acceptable to the social network but involved a highly competitive market where the value proposition was deployable skills rather than proprietary ideas and technologies.

A year or so after leaving Search Technology in 1995, I published Start Where You Are (Rouse, 1996). The chapter in this book on the defense industry makes it very clear how enormously difficult it is for technology-based defense companies to transition technologies from high-performance, high-cost domains to markets where performance, quality, service, and costs involve very different tradeoffs. Interestingly, Enterprise Support Systems succeeded in transitioning technologies and ideas developed at Search Technology for defense customers. However, ESS was not successful in transitioning many of the people originally involved with developing these technologies and ideas.

Of course, ESS could have operated as an independent subsidiary of Search Technology, perhaps buying software services from Search. This would have required the Search Technology management team to commit
to the risk associated with investing capital in ESS. This risk seemed too substantial at that time. Consequently, I took the risk, made the investment, and spun the company off. This management decision, coupled with the reactions of the social network within Search Technology, undermined the chances of transformation, at least via the opportunity provided by ESS.

In summary, the value deficiency was clearly seen and, after a false start, a new value proposition was defined and the changes of work processes were accomplished. However, a combination of management decision making and social networks hindered the company from leveraging the new value proposition and work processes. The high-tech, low-risk culture of Search Technology could not embrace the "to be" vision I articulated. In retrospect, my sense is that, for that particular set of people, with their specific needs and beliefs, they probably made the right choice.
Georgia Tech

This case story involves the Georgia Institute of Technology (Rouse & Garcia, 2004). Georgia Tech was founded in 1885. Up until the early 1970s, Tech's reputation was as an excellent undergraduate engineering school. Larger aspirations emerged with the presidency of Joseph Pettit (1972-1987), who arrived from having served as Dean of Engineering at Stanford University. Pettit's emphasis on Ph.D. research began the Institute's remarkable climb from being ranked a top 20 engineering program in the 1980s, to top 10 in the early 1990s, and top 5 since 1997. Review of the rankings of universities over the past couple of decades shows that this is indeed quite an accomplishment.

During the last 15 years, enrollment has grown only modestly, slowly shifting the balance towards graduate education. The number of faculty has also grown only modestly. However, almost 80% of the current faculty has been hired in the past 10-12 years. This is an amazing level of turnover, especially given the tenure system in academia. Much of this change can be attributed to a progressive leadership that emphasized the need for a constant influx of new ideas and directions.

Almost 5% of faculty members have been elected to the prestigious national academies. Roughly one-eighth hold endowed chairs or professorships. Over one-eighth have won coveted career awards from the National Science Foundation. Thus, the turnover has resulted in greatly
increased excellence among the faculty, in addition to infusing the university with new and fresh perspectives.

During this time, annual awards of research grants and contracts have doubled, as has the Institute's overall budget. The percentage of the budget coming from the State of Georgia has continually declined, currently at less than 25%. Decreasing state support of public institutions is a nationwide phenomenon, and all research universities have actively pursued several other funding sources so as not to suffer as a result of reduced state university budgets. Tech has been able to maintain its size, and thus access to necessary resources, despite this decrease in support. Maintaining access to important resources is a critical factor in achieving and sustaining top-rated status for universities.

The quality of incoming students has continued to rise during this period. Average scores on Scholastic Achievement Tests are approaching 1400 on a 1600-point scale. The mean high school Grade Point Average is 3.80 on a 4.0 scale. Undergraduate degrees now account for only 60% of degrees granted.

Although much of the emphasis has been on increasing quality of graduate education and research, the undergraduate student body still acts as the foundation of the school. Because of this, there has also been much attention placed on improving quality of undergraduate education, thus attracting students of the highest caliber. The prime focus of the undergraduate initiatives is creating an environment where students can supplement their academic education through study abroad, undergraduate research, leadership studies, and volunteer activities. In addition, the university offers first-year orientation and extensive tutoring services for all.

The changes initiated by Pettit and enhanced by his successors, John P. Crecine (1987-1994) and Wayne Clough (1995-present), can be summarized as follows:

• Greatly increased emphasis on PhD programs and sponsored research
  o Plus increased emphasis on multi-disciplinary research and education
  o Plus substantial increase of endowment, for example, faculty chairs
  o Plus substantial expansion and upgrade of research and education facilities
  o Plus increased emphasis on university's role in economic development
• Top-down vision and leadership with bottom-up strategy and execution
  o Plus clear institutional direction
  o Plus prioritization of opportunities
  o Plus embracing pursuit of innovative strategies
  o Plus support of strong entrepreneurial institutional culture

Also of note is the 1996 Olympics hosted by Atlanta. This resulted in substantial investments in the Institute's infrastructure, ranging from new dormitories and athletic venues, to greatly enhanced landscaping across campus. In parallel, a Capital Campaign targeted to raise $300 million during this period yielded almost $800 million, and safely concluded before the Internet "bubble" burst. One of the primary uses of these resources was a dramatic increase in the number of chaired positions, thereby enabling the attraction of the "best and brightest."

Considering the transformation of Georgia Tech in the context of the theory of transformation, the value deficiency was clear - a top 20 program that aspired to be in the top 5. Work processes associated with PhD education and sponsored research were substantially enhanced. Decisive leadership and commitment were clear and strong. The social network, especially the network of alumni, was enthusiastic and supportive.

To provide another view of Tech, the School of Industrial and Systems Engineering had been riding the crest of being the top-ranked program in the U.S. for ten years when I arrived in 2001 to become school chair. As I indicated in the beginning of this chapter, it was clear to me that a value deficiency was emerging. Perhaps the strongest evidence came from research sponsors who were shifting priorities away from a strong emphasis on applied mathematics as an end in itself and towards the engineering of complex systems.
I portrayed the impending change in my interview lecture and, I suspect, this vision played a strong role in my being offered the position. The work processes that needed enhancement included those associated with soliciting sponsored research from a broader set of sources and pursuing interdisciplinary research topics with faculty from other colleges and schools across Tech and other top institutions. I made it very clear to faculty, staff, and the advisory board that this direction was where the school needed to head if it was to maintain its number one ranking. The advisory board, composed mainly of senior executive alumni, strongly supported this vision.

While some faculty members were quite supportive, most were not. The vision was well outside their comfort zones. Their needs and beliefs centered on a team of one faculty member and one Ph.D. student addressing a fairly context-free set of equations. Research sponsors, including the National Science Foundation, were shifting funds to multi-investigator interdisciplinary research, but our faculty was not shifting with them.

It is difficult to see a value deficiency when you are on top and have been there for a long time. Why change if you are already the best? Being in front of the pack seems to engender a hubris that hinders the ability to see that the rules of the game are changing and that you will need to change the way you play to stay in the lead. Companies such as Digital Equipment Corporation, Eastman Kodak, and General Motors have suffered from this organizational delusion and, eventually, the marketplace showed them the error of their ways.

Change is difficult at universities. Traditions of academic freedom and tenure are very strong. The system of incentives and rewards is, in many ways, defined by the broader academic community in terms of peer review of publications and proposals, as well as how awards and honors are bestowed. Overall, the social networks of disciplines and subdisciplines tend to be very strong and highly resistant to change.

On the other hand, you do not have to change the whole university to create a new direction. With a few committed senior faculty members, strong research staff, and a cadre of bright and energetic graduate students, you can launch a venture like the Tennenbaum Institute. With the formation of the Tennenbaum Institute, I realized that the vision I had articulated could be realized at Georgia Tech, but could not be centered on the School of Industrial and Systems Engineering. To an extent, this seems to reflect the innovator's dilemma as articulated by Clay Christensen (1997). The organization that spawns a potential innovation may not be capable of successfully pursuing this opportunity because it
does not fit in with reigning perceptions of value, work processes, and social networks.

Transforming Decision Making

The case stories of Search Technology and Georgia Tech involved very substantial changes - new markets and offerings for Search and greatly enhanced research programs for Tech. Many transformations are much more subtle, perhaps not even readily apparent to external observers. A great example of a subtle yet pervasive transformation involves fundamental changes of decision making processes.

Numerous executives have told me that "data-driven decision making" would be transformative. More specifically, they would like to use corporate and market data to assess the merits of ideas proposed by members of the executive team rather than having decisions driven by the debating skills of team members. This desire is also related to the enormous amounts of data now available via ERP, CRM, SFA, and SCM systems, as well as point-of-sale systems, online sources, and various information service providers. Companies now see abilities to mine, analyze, and model data and information as key to competitive advantage. Many of these executives resonate with Tom Davenport's recent article "Competing on Analytics" (Davenport, 2006).

The tools discussed in Chapters 8-11 are oriented toward the desired analytics. We have helped clients model corporate and market data sources to predict production cost reductions, locate retail stores, and plan fundraising campaigns. In some cases, the analytics enabled understanding and leveraging competitive advantages that were latent in these large data sources.
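To suggest the flavor of such analytics, here is a minimal sketch in Python, with entirely hypothetical data, of one analysis of the kind mentioned above: fitting an experience (learning) curve to production history to predict cost reductions at higher cumulative volumes. The power-law form cost = a * volume**b is a common modeling choice; nothing here is specific to the tools of Chapters 8-11.

```python
import numpy as np

# Hypothetical production history: cumulative units built and unit cost.
volume = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
cost = np.array([50.0, 42.5, 36.1, 30.7, 26.1])

# Fit cost = a * volume**b by linearizing with logarithms (least squares).
b, log_a = np.polyfit(np.log(volume), np.log(cost), 1)
a = np.exp(log_a)

# Predict unit cost at the next doubling of cumulative volume.
forecast = a * 3200.0 ** b
print(f"about {100 * (1 - 2 ** b):.0f}% cost reduction per doubling; "
      f"predicted unit cost at 3200 units: {forecast:.1f}")
```

Simple as it is, a shared model of this kind shifts the executive conversation from debating skills to assumptions about data, which is the subtle transformation these executives describe.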
CONCLUSIONS

This chapter has outlined an overarching, albeit evolving, theory of enterprise transformation. This theory is very much a work in progress. A wide range of colleagues from numerous disciplines has offered comments and suggestions on the evolving theory, providing rich evidence of the diversity of perspectives that different disciplines bring to this broad problem area. Indeed, it can reasonably be argued that there are few
problems as central to our society and economy as the problem of how complex organizational systems address fundamental changes.

The study of enterprises as systems and the transformation of these systems is inherently transdisciplinary in the attempt to find integrated solutions to problems that are of large scale and scope (Sage, 2000). Enterprise transformation involves fundamental change in terms of redesign of the work processes in complex systems. This is clearly transdisciplinary in that success requires involvement of management, computing, and engineering, as well as behavioral and social sciences.

Upon first encountering the topic of enterprise transformation, many people suggest that this must be the province of business schools. However, the functional organization of most business schools militates against this possibility. Academic credibility depends on deep expertise in finance, marketing, operations management, organizational behavior, or corporate strategy. Great professional risk can be associated with spreading one's intellectual energy across these areas. In contrast, systems engineering and management can and must inherently look across functions and view the whole enterprise system.

Consider automobile manufacturing as one illustration. The Toyota Production System (TPS) has transformed the automobile industry (Liker, 2004). Interestingly, development and refinement of the TPS represents business process improvement for Toyota but transformation for all the competitors that had to adopt lean production to compete with Toyota, or to compete in other markets, for example, aircraft production (Kessler, 2002). In these cases, TPS could not simply be "installed." These practices affected the whole enterprise and success depended on addressing this breadth.

A more recent innovation in the automobile industry is build-to-order (Holweg & Pil, 2004). If you are Dell, a company founded on build-to-order, this is another case of business process improvement. On the other hand, if you are Ford or GM, adopting build-to-order affects the whole enterprise. Manufacturing, supply chains, and distribution all have to change; for example, you no longer really need a traditional dealer network. You have to look at the whole enterprise, particularly because the overall cost structure changes significantly once you no longer build cars "on spec." Sebastian Kleinau's research found that build-to-order practices have quite different impacts in the computer and automobile industries (Kleinau, 2005).

Systems engineering and management have long been strong suits of defense companies. The concepts, principles, methods, and tools have
been applied successfully to definition, design, development, and deployment of complex platforms ranging from aircraft to ships to command and control systems. However, the emphasis has shifted recently from platforms to capabilities, for instance, from airplanes to airlift (Rouse & Boff, 2001; Rouse & Acevedo, 2004). This requires an airlift enterprise, not just airplanes. Further, the airlift enterprise will be a transformation of current enterprises for selling airplanes and providing cargo capacity as well.

Thus, enterprises and their transformation are central constructs and phenomena in the complex systems addressed by systems engineering and management. The theory outlined in this chapter provides a foundation for thinking about and addressing these challenges. The transdisciplinary perspective inherent in systems engineering and management provides us with an inherent competitive advantage in tackling complex problems.

To a great extent, this theory defines the "physics" of the enterprise world where we are pursuing human-centered design of support systems for senior managers and executives. Value drives this world and work processes enable it. Individual and organizational behaviors and performance shape what happens.

Human abilities in the context of enterprises are remarkable. People are able to address enormous complexity and ambiguity. Unlike flying an airplane, troubleshooting an engine, or designing a device, it is often unclear what goals should have priority and what needs to be done first. Nevertheless, people keep the enterprise running and, more often than not, being productive.

Human limitations often relate to the mechanisms that people have evolved for coping with complexity and ambiguity. They search for familiar patterns, and occasionally a force fit is a bad fit. They are sometimes deluded by the cultural zeitgeist of their organization, economy, and society. Their needs and beliefs sometimes skew the information and knowledge they seek, as well as how they interpret it.

Senior managers and executives usually understand these limitations. Consequently, they seek models, methods, and tools to help them overcome these limitations, both individually and organizationally. Their preferences are for models, methods, and tools that are easy to understand and employ. They do not expect, however, that strategic thinking will be easy. They also know that fundamental change will be difficult.

Human-centered design to enhance abilities, overcome limitations, and foster acceptance can be an important theme in fundamental change. Beyond the philosophy, models, methods, and tools, however, one needs to
invest the time and energy needed to understand the external and internal context of the complex organizational system being addressed. The challenges faced by enterprises can be abstracted and, in some cases, usefully reduced to a mathematical formulation. However, to the extent that the context is assumed away, it is likely that the essence of the complexity of the organization will be lost (Rouse, 2007). Senior managers and executives want help addressing the inherent complexity of their world. It does not help when the help you provide ignores the context that underlies much of the complexity. Consequently, research on enterprise transformation is likely to reflect the practices of Darwin more than those of Newton.
REFERENCES

Argyris, C., & Schon, D.A. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Ashuri, B., Rouse, W.B., & Augenbroe, G. (2007). Different views of work. Information Knowledge Systems Management, 6 (1).
Bailey, D.E., & Barley, S.R. (2005). Return to work: Toward post-industrial engineering. IIE Transactions, 37 (8), 737-752.
Basole, R.C. (2006). Modeling and analysis of complex technology adoption decisions: An investigation in the domain of mobile ICT. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.
Bodner, D.A., & Rouse, W.B. (2007). Understanding R&D value creation with organizational simulation. Systems Engineering, 10 (1), 64-82.
Brown, J.S., & Duguid, P. (2000, May-June). Balancing act: How to capture knowledge without killing it. Harvard Business Review, 73-80.
Burt, R.S. (2000). The network structure of social capital. In R.I. Sutton & B.M. Staw, Eds., Research in Organizational Behavior (Vol. 22). Greenwich, CT: JAI Press.
Caverlee, J., Bae, J., Wu, Q., Liu, L., Pu, C., & Rouse, W.B. (2007). Workflow management for enterprise transformation. Information Knowledge Systems Management, 6 (1).
Christensen, C.M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.
Collins, J.C. (2001). Good to great: Why some companies make the leap and others don't. New York: Harper Business.
Collins, J.C., & Porras, J.I. (1994). Built to last: Successful habits of visionary companies. New York: Harper Business.
Cook, S.D.N., & Brown, J.S. (1999). Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science, 10 (4), 381-400.
Davenport, T.H. (2006, January). Competing on analytics. Harvard Business Review.
Davenport, T.O. (1999). Human capital: What it is and why people invest it. San Francisco: Jossey-Bass.
Deming, W.E. (1986). Out of crisis. Cambridge, MA: MIT Press.
Diesing, P. (1962). Reason in society: Five types of decisions and their social conditions. Urbana, IL: University of Illinois Press.
Drucker, P.F. (2001). The essential Drucker: In one volume the best of sixty years of Peter Drucker's essential writing on management. New York: HarperBusiness.
Economist. (2005, February 5). The fall of a corporate queen. The Economist, 57-58.
Economist. (2005, February 5). Consumer goods: The rise of the superbrands. The Economist, 63-65.
Economist. (2005, February 12). Nokia's turnaround: The giant in the palm of your hand. The Economist, 67-69.
Flannery, T.P., Hofrichter, D.A., & Platten, P.E. (1996). People, performance, and pay: Dynamic compensation for changing organizations. New York: Free Press.
Garcia, D. (2006). Process and outcome factors of enterprise transformation: A study of the retail sector. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.
Gavetti, G., & Rivkin, J.W. (2005). How strategists think: Tapping the power of analogy. Harvard Business Review, 83 (4), 54-63.
George, B. (2003). Authentic leadership: Rediscovering the secrets to creating lasting value. San Francisco: Jossey-Bass.
Gladwell, M. (2005). Blink: The power of thinking without thinking. Boston: Little, Brown.
Granovetter, M. (2005). The impact of social structure on economic outcomes. Journal of Economic Perspectives, 19 (1), 33-50.
Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution. New York: Harper Business.
Harding, D., & Rovit, S. (2004, September). Building deals on bedrock. Harvard Business Review, 121-128.
Hauser, J.R., & Clausing, D. (1988, May-June). The house of quality. Harvard Business Review, 63-73.
Hemp, P., & Stewart, T.A. (2004, December). Leading change when business is good. Harvard Business Review, 60-70.
Hollnagel, E. (1993). Human reliability analysis: Context and control. London: Academic Press.
Holweg, M., & Pil, F.K. (2004). The second century: Reconnecting customer and value chain through build-to-order. Cambridge, MA: MIT Press.
Jensen, M.C. (2000). A theory of the firm: Governance, residual claims, and organizational forms. Cambridge, MA: Harvard University Press.
Kaplan, R.S., & Norton, D.P. (1996, Jan-Feb). Using the balanced scorecard as a strategic management tool. Harvard Business Review, 75-85.
Katzenbach, J.R., & Smith, D.K. (1993). The wisdom of teams: Creating high-performance organizations. Boston, MA: Harvard Business School Press.
Kessler, W.C. (2002). Company transformation: A case study of Lockheed Martin Aeronautics Company. Information Knowledge Systems Management, 3 (1), 5-14.
Klein, G.A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G. (2002). Intuition at work: Why developing your gut instincts will make you better at what you do. New York: Currency.
Kleinau, S. (2005). The build-to-order transformation: An analysis of the financial impact of build-to-order practices in the automotive and computer industries. M.S. thesis, School of Industrial & Systems Engineering, Georgia Institute of Technology.
Kouzes, J.M., & Posner, B.Z. (1987). The leadership challenge: How to get extraordinary things done in organizations. San Francisco: Jossey-Bass.
Lewis, M., Young, B., Mathiassen, L., Rai, A., & Welke, R. (2007). Workflow assessment based on stakeholder perceptions. Information Knowledge Systems Management, 6 (1).
Liker, J.K. (2004). The Toyota way: 14 management principles from the world's greatest manufacturer. New York: McGraw-Hill.
Lowenstein, R. (2005, February). How Lucent lost it: The telecommunications manufacturer was a Potemkin village. Technology Review, 78-80.
McGinnis, L. (2007). Enterprise modeling and enterprise transformation. Information Knowledge Systems Management, 6 (1).
Mintzberg, H. (1975, July/August). The manager's job: Folklore and fact. Harvard Business Review, 49-61.
Mintzberg, H., Ahlstrand, B., & Lampel, J. (1998). Strategy safari: A guided tour through the wilds of strategic management. New York: Free Press.
Mintzberg, H., & Lampel, J. (1999, Spring). Reflecting on the strategy process. Sloan Management Review, 21-30.
Mohrman, S.A., Tenkasi, R.V., & Mohrman, A.M., Jr. (2003). The role of networks in fundamental organizational change. Journal of Applied Behavioral Science, 39 (3), 301-323.
Mykityshyn, M.G. (2007). Toward strategic and operational alignment: A system-of-systems architecture approach for enterprise systems. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.
Mykityshyn, M.G., & Rouse, W.B. (2007). Supporting strategic enterprise processes: An analysis of various architectural frameworks. Information Knowledge Systems Management, 6 (1).
Nadler, D.A., & Tushman, M.L. (1989). Organizational frame bending: Principles for managing reorientation. Academy of Management Executive, 3 (3), 194-204.
Pennock, M.J. (2007). The economics of enterprise transformation. Ph.D. dissertation, School of Industrial & Systems Engineering, Georgia Institute of Technology.
Pennock, M.J., Rouse, W.B., & Kollar, D.L. (2007). Transforming the acquisition enterprise: A framework for analysis and a case study of ship acquisition. Systems Engineering, 10 (2).
Rasmussen, J. (1986). Information processing and human-machine interaction. New York: Elsevier.
Rasmussen, J., Pejtersen, A.M., & Goodstein, L.P. (1994). Cognitive systems engineering. New York: Wiley.
Rooke, D., & Torbert, W.R. (2005). Seven transformations of leadership. Harvard Business Review, 83 (4), 66-76.
Rouse, W.B. (1982). On models and modelers: N cultures. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12 (5), 605-610.
Rouse, W.B. (1993). Enterprise support systems: Training and aiding people to plan and manage. Industrial Management, 35 (6), 23-27.
Rouse, W.B. (1996). Start where you are: Matching your strategy to your marketplace. San Francisco: Jossey-Bass.
Rouse, W.B. (1998). Don't jump to solutions: Thirteen delusions that undermine strategic thinking. San Francisco: Jossey-Bass.
Rouse, W.B. (2000). Managing complexity: Disease control as a complex adaptive system. Information Knowledge Systems Management, 2 (2), 143-165.
Rouse, W.B. (2001). Essential challenges of strategic management. New York: Wiley.
Rouse, W.B. (2003a). Engineering complex systems: Implications for research in systems engineering. IEEE Transactions on Systems, Man, and Cybernetics, 33 (2), 154-156.
Rouse, W.B. (2003b). Invention and innovation in technology and art. In B. B. Borys and C. Wittenberg, Eds., From Muscles to Music. Kassel, Germany: University of Kassel Press.
Rouse, W.B. (2004, March). Embracing the enterprise. Industrial Engineer, 31-35.
Rouse, W.B. (2005a). Enterprises as systems: Essential challenges and enterprise transformation. Systems Engineering, 8 (2), 138-150.
Rouse, W.B. (2005b). A theory of enterprise transformation. Systems Engineering, 8 (4), 279-295.
Rouse, W.B. (Ed.). (2006a). Enterprise transformation: Understanding and enabling fundamental change. New York: Wiley.
Rouse, W.B. (2006b). Enterprise transformation. In McGraw-Hill Yearbook of Science and Technology (121-123). New York: McGraw-Hill.
Rouse, W.B. (2007). Complex engineered, organizational, and natural systems. Atlanta, GA: Tennenbaum Institute, Georgia Institute of Technology.
Rouse, W.B., & Acevedo, R. (2004). Anticipating policy implications of emerging information technologies. Information Knowledge Systems Management, 3 (2), 77-93.
Rouse, W.B., & Baba, M. (2006). Enterprise transformation. Communications of the ACM, 49 (7), 66-72.
Rouse, W.B., & Boff, K.R. (2001). Impacts of next-generation concepts of military operations on human effectiveness. Information Knowledge Systems Management, 2 (4), 347-357.
Rouse, W.B., & Boff, K.R. (Eds.). (2005). Organizational simulation: From modeling and simulation to games and entertainment. New York: Wiley.
Rouse, W.B., & Garcia, D. (2004). Moving up in the rankings: Creating and sustaining a world-class research university. Information Knowledge Systems Management, 4 (3), 139-147.
Rouse, W.B., & Howard, C.W. (1993). Software tools for supporting planning. Industrial Engineering, 25 (6), 51-53.
Rouse, W.B., Pennock, M.J., & Kollar, D.L. (2006). Transforming the acquisition enterprise. Proceedings of the Fourth Annual Conference on Acquisition Research. Monterey, CA: Naval Postgraduate School.
Rouse, W.B., Sage, A.P., & Wohl, J.G. (1982). Are there supplemental ways to enhance international stability? IEEE Transactions on Systems, Man, and Cybernetics, SMC-12 (5), 685-686.
Sage, A.P. (2000). Transdisciplinarity perspectives in systems engineering and management. In M.A. Somerville and D. Rapport, Eds., Transdisciplinarity: Recreating Integrated Knowledge (pp. 158-169). Oxford, UK: EOLSS Publishers.
Sage, A.P., & Lynch, C.L. (1998). Systems integration and architecting: An overview of principles, practices, and perspectives. Systems Engineering, 1 (3), 176-227.
Sage, A.P., & Rouse, W.B. (Eds.). (1999). Handbook of systems engineering and management. New York: Wiley.
Schumpeter, J. (1942). Capitalism, socialism, and democracy. New York: Harper.
Senge, P.M. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
Simon, H.A. (1957). Models of man: Social and rational. New York: Wiley.
Simon, H.A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Slywotsky, A.J. (1996). Value migration: How to think several moves ahead of the competition. Boston, MA: Harvard Business School Press.
Slywotsky, A.J., & Morrison, D.J. (1997). The profit zone: How strategic business design will lead you to tomorrow's profits. New York: Times Books.
Snow, C.P. (1962). The two cultures. Cambridge, UK: Cambridge University Press.
Stephenson, S.V., & Sage, A.P. (2007). Architecting for enterprise resource planning. Information Knowledge Systems Management, 6 (1).
Stewart, T.A., & O'Brien, L. (2005, February). Transforming an industrial giant. Harvard Business Review, 115-122.
Tenkasi, R.V., & Chesmore, M.C. (2003). Social networks and planned organizational change. Journal of Applied Behavioral Science, 39 (3), 281-300.
Weiss, T.B., & Hartle, F. (1997). Reengineering performance management: Breakthroughs in achieving strategy through people. Boca Raton, FL: St. Lucie Press.
Womack, J.P., & Jones, D.T. (1996). Lean thinking: Banish waste and create wealth in your corporation. New York: Simon & Schuster.
Chapter 12
PEOPLE, ORGANIZATIONS, AND SERENDIPITY
INTRODUCTION

In this book, I have addressed essential phenomena associated with the behavior and performance of operators, maintainers, designers, researchers, and managers. The particulars of these phenomena are captured in the chapter titles:

• Estimation, Mental Models and Teams
• Processes, Networks and Demands
• Tasks, Errors and Automation
• Failures, Detection and Diagnosis
• Displays, Controls, Aiding and Training
• Information, Knowledge and Decision Making
• Products, Systems and Services
• Invention, Innovation and Options
• Challenges, Situations and Change
• Value, Work and Transformation

We explored human behavior and performance in the context of the essential phenomena associated with these topics.
Human-Centered Design

The primary theme has been human-centered design - a process of assuring that the concerns, values, and perceptions of all stakeholders in a design effort are considered and balanced. The three overarching objectives of human-centered design are enhancing human abilities, overcoming human limitations, and fostering human acceptance. In each of the topical areas listed above, these three objectives were revisited and elaborated in the context of that chapter.

The four decades of research - as well as engineering and management - reviewed in this book represent, at least in retrospect, an ongoing research project focused on formulating, developing, and applying human-centered design. While the focus changed from operators to maintainers to designers and so on, the emphasis on training and aiding represented a continuum rather than fully distinct endeavors.

As I reflect on managing two companies and several academic entities, I realize that I naturally approached these experiences as research projects in themselves. A senior executive once commented to me, "You can take the researcher out of academia, but you can't take academia out of the researcher." This orientation led to many interesting observations, conclusions, and publications, although I imagine that this orientation was not always fully helpful to the organizations I was managing.
Role of Serendipity

The secondary theme has been serendipity and how unforeseen connections and distinctions enable innovative approaches to problems as well as solution concepts. The serendipitous connections and distinctions discussed in this book stem from perspectives that cut across domains and disciplines, and sometimes cultures. Connections between seemingly distinct endeavors, for example, led to new insights about how to address the research questions at hand.

A wealth of ideas about people and organizations is discussed in this book. These ideas may appear - again, at least in retrospect - to follow a fairly orderly path from individuals to teams to organizations to enterprises. Yet, as illustrated by the many stories and vignettes in the earlier chapters, the path is only smooth when looking backward. Looking forward, the ideas emerged from serendipitous insights and matured independently at first, only coming together as a coherent whole more recently.
As noted in the Preface, I have come to view planning as a process of placing yourself in the path of serendipity (Rouse, 1994, 1996a). As discussed in this chapter, better yet if you can be at the crossroads of serendipity. Prepared with a clear set of intentions and at least notional plans, you'll recognize serendipity when it happens. The process will be one of "smart" luck rather than "dumb" luck. As a result, you'll pursue your plans and achieve success in ways not previously imagined.

This assertion conflicts with the idea that unrelenting execution of well-honed plans is the key to success. While that sometimes works, success often surprises us in both its nature and timing. If we are primed to notice such twists of events, we can intelligently and aggressively take advantage of serendipity. We might not "make the times," but we can take advantage of them.

This book has reported on a journey laced with serendipity, a journey to understand how people and organizations function, as well as devise means for supporting them to function better. This journey has been a mixture of inching forward, striding occasionally and, in several situations, leaping to new insights and ideas enabled by serendipity.
INTERSECTING PERSPECTIVES

By cutting across domains and disciplines, and sometimes cultures, you can see different views of common problems and needs. The resulting transdisciplinary perspective may not be easy to achieve. In particular, finding "common ground" may require much effort (Klein, et al., 2005).

My experience at the Tennenbaum Institute, as well as in other endeavors, is that the pursuit of common ground is easier if there is a shared, overarching objective such as developing a decision support system, training simulator, or an online game. Formulating a shared computational model - or modeling tool - can also facilitate integration of perspectives. The need to make decisions in order to proceed tends to drive this integration.

An excellent example of this was often experienced when using the Product Planning Advisor (see Chapter 8) to plan new market offerings. These workshops typically included people from marketing, sales, engineering, manufacturing, finance, and product support. The discussions associated with developing the market and product models (see Figure 1 in Chapter 8) were often laced with dialog focused on finding common
ground. From this perspective, the modeling tool and activity facilitated spanning disciplinary boundaries.

In contrast, collaboration that is limited to pursuing related research in the same domain may not result in finding common ground. If the level of integration is limited to simply citing each other's publications, and being juxtaposed in the organization's annual report, then any perceived common ground may be illusory. In other words, such research is more multidisciplinary than transdisciplinary.

I hasten to note that this conclusion does not limit the value of chance ideas crossing disciplinary boundaries. A good example is my exchange with an electrical engineering faculty member discussed in Chapter 3, where I learned that a problem I was addressing was a "standard" problem in his discipline. This can be one of the benefits of shared coffee urns and seminars.

My most recent experience of seeking common ground has been the process of "selling" transformation research at Georgia Tech. Many faculty members see the topic as much too broad and risky - certainly not appropriate for junior faculty aspiring to be tenured. Some see it as not appropriate for Ph.D. students who aspire to be junior faculty members, so that they can aspire to be tenured. Perhaps they are correct in perceiving that transdisciplinary research is not for the risk averse.

Another objection, more to the point here, is the perceived level of effort needed both to understand the domain of transformation studied and to achieve common ground with collaborators from the various disciplines that would need to be involved to address fundamental change in the context of interest. This is also a valid point. Publications do not tend to come as quickly for this type of research. Also, the outlet for these publications may not always be the standard journals of the subdisciplines of the researchers involved.

The key to overcoming these objections, I have found, is to focus on the importance and relevance of the research issues being addressed. The fact that these issues tend to attract financial resources is certainly an important factor as well. For some faculty members, these factors overcome their risk aversion, if any, and cause them to include transdisciplinary research as at least a portion of their research portfolio.
CROSSROADS OF SERENDIPITY

I coined the phrase "crossroads of serendipity" to connote an environment where many people are on their own paths of serendipity. This type of
crossroads is a place where many connections and distinctions are recognized, many creative inventions emerge, and a higher than average percentage of these inventions transition to become market innovations.
Current Crossroads

Silicon Valley comes to mind immediately for many people when this phrase is mentioned - although sometimes there is a brief delay while I explain the phrase. Having grown up near Boston and gone to graduate school at MIT, it is natural for me to think of Route 128 as another crossroads. Several visits to Singapore over the past few years have also led me to see this city-state as an emerging crossroads.

Not all crossroads are equivalent, as Saxenian (1996) has shown in her comparison of Silicon Valley and Route 128. The greater job mobility of engineers and scientists in Silicon Valley has led to broader and faster dissemination of new ideas and best practices. Consequently, Silicon Valley has eclipsed Route 128 in most people's perceptions. Farther back in the pack are Austin, San Diego, and the Research Triangle in North Carolina. There are also "wannabes" like New York and Atlanta that have some of the ingredients to become a crossroads, but have not quite yet made it (Rouse, 1996b).

In various surveys, metropolitan Atlanta has received high marks for quality of life and cost of living. It received low marks for quality of K-12 education, government leadership and support, and availability of investment capital. In Chapter 9, I discussed Joel Mokyr's (1990) four conditions for technology fostering economic growth. Considering Atlanta in terms of these conditions, my sense is that the metropolitan area scores well in terms of placing a high value on economic activity and having a non-centralized government. The city is steadily improving in terms of having the necessary human capital. However, Atlanta is challenged in terms of having a non-conservative culture. Thus, for example, it is still much easier to get press coverage for a big real estate deal than for an initial public offering of a rapidly growing high-tech company.

Richard Florida's highly regarded book, The Rise of the Creative Class (2004), provides another insight. Economic prosperity is correlated with the proportion of the population working in creative jobs. This includes science and technology, but also marketing, publishing, and the arts. A highly diverse culture also helps. With these findings in mind, Singapore has encouraged immigration of not only scientists and engineers, but also
artists. While this may seem at odds with Singapore's notoriously conservative culture, they have also shown a great ability to change rapidly when opportunities warrant it.

A common denominator of all current crossroads is strong academic ties:

• Silicon Valley - Stanford, Berkeley, et al.
• Route 128 - MIT, Harvard, et al.
• Austin - University of Texas
• Research Triangle - Duke, North Carolina, North Carolina State
It is important to recognize that the “academic community” includes more than just the academic programs and their faculties. Research institutes, teaching hospitals, business incubators, and alumni business interests are all important elements of the extended community. In fact, much of the innovation happens in the extended academic community rather than in the core academic programs.
The Legacy of Bologna

Despite this reality and the fact that I have been a faculty member at four universities - two in leadership positions and two in one-year visiting positions - academia is a bit of an enigma to me. In terms of content, in my case science and engineering, universities tend to be hotbeds of new ideas, often expounded by very talented and impressive free thinkers. However, as organizations, universities are among the most conservative I have encountered. How can groups of highly educated, mostly liberal intellectuals create such conservative organizations?

The organizational structure of disciplinary departments, schools, and colleges is almost sacred - having begun with the founding of the University of Bologna in 1119. The incentive and reward system that totally emphasizes individual accomplishments by faculty members is, in fact, sacred. As a junior faculty member who made it up through the ranks to tenured full professor, I found the system great. There was no doubt about what counted and no doubt about the rules of the game. If you could
publish lots of journal articles, bring in sizable amounts of grant money, and earn above-average teacher ratings from students, your success was assured.

However, incentives and rewards that focus solely on creating stars do not necessarily create great organizations. They also do not create institutions that can contribute to solutions of complex transdisciplinary problems in areas such as education, environment, and health care. What they do produce, however, are students, especially at the graduate level, who are programmed to recreate the "standard" academic organization wherever they work. This, in turn, makes it very difficult for academic organizations to face their own organizational delusions.

The strong organizational traditions of academia undermine abilities to significantly contribute to solutions of complex transdisciplinary problems. The need to bridge this gap has led me to a vision of a top-rated global university where transdisciplinary initiatives are central elements of the university's research and education portfolio, and one of the main reasons why the university is so highly rated. The rallying cry for this vision - and for those of us pushing the transdisciplinary rock up the academic hill - could be "2119!", which refers to the one thousandth anniversary of the organizational traditions emerging from Bologna. We have only 112 years to make this vision a reality!

Prospects for Academia
Academia has several affordances and impedances that affect its ability to play a central role at the crossroads of serendipity. Affordances include a culture of intellectual freedom, at least after tenure. Just as important are the bright, ambitious, and energetic students. I have found that students are the "free electrons in the system." They bind to new ideas and new ways of thinking and bring the faculty along, sometimes slowly but inevitably. Max Planck and later Paul Samuelson, citing Planck, observed that science advances funeral by funeral. Thus, change may be very slow, but it does happen.

The impedances include the increasingly intense specialization of faculty members and the increasing economic pressures on academia. Where C.P. Snow (1962) once articulated the cultural divide between science and the arts, we now find, more than four decades later, cultural divides between axiomatic and empirical science within the same academic department. I have even witnessed intellectual divides between researchers
in linear and nonlinear mathematics. James Duderstadt (2000), former president of the University of Michigan, notes that the disciplines have been deified, yielding a dominance of reductionism. This presents a challenge for transdisciplinary research, particularly in terms of valuing a diversity of approaches and more flexible visions of faculty career paths.

Economic pressures are far from new. Crossroads, as hubs of creativity, can consume enormous amounts of resources. MIT, for example, addressed this need and secured these resources via its close relationship with the federal government (Killian, 1985). In general, as Richard Levin (2003), president of Yale University, indicates, "Competitive advantage based on the innovative application of new scientific knowledge - this has been the key to American economic success for at least the past quarter century."

However, Derek Bok (2003), recent president of Harvard University, is particularly concerned with the commercialization of the university in response to a plethora of "business opportunities" for universities. He notes, "Increasingly, success in university administration came to mean being more resourceful than one's competitors in finding funds to achieve new goals. Enterprising leaders seeking to improve their institution felt impelled to take full advantage of any legitimate opportunities that the commercial world had to offer." In any enterprise, when resource needs dominate, creative early-stage ideas can wither.

Thus, there are pros and cons relative to academia playing a central role in fostering crossroads of serendipity. On balance, however, my experience is that certain types of investigations only happen in academic settings. For example, our studies of parallels of invention and innovation in technology and the arts (Rouse, 2003) and the nature of teamwork in the performing arts (Rouse & Rouse, 2004) are unlikely to have happened in industry or government. The same can be said of many cross-cultural studies. Academia usually allows the freedom to pursue "off the wall" topics, as long as you occasionally pay your dues to the mainstream.
IMPLICATIONS FOR A FLAT WORLD

There is another overarching reason for academia to step up to the task of fostering crossroads. Thomas Friedman's best seller, The World Is Flat (2005), provided a clarion call to the science and technology communities, both within and outside academia. Asian universities, mainly in China and India, graduate roughly 1,000,000 engineers per year. The U.S. graduates
65,000 per year. While the numbers can be debated, this difference, compounded over years, is likely to have an astounding impact.

One of my public policy colleagues who studies global innovation noted that China, India, Japan, Korea, and Taiwan have caught up with the U.S. in terms of articles published in top journals and patents issued. They have learned our game and are excelling in their play. My reaction was that this trend is great. We have taught them our game. We can now outsource reductionist specialization to them. And, we can move on.

The value deficiency where we have unique competitive advantages involves addressing and solving complex systems problems. We have the abilities to formulate messy problems with engineered, organizational, and natural components. We have the abilities to figure out how to create economic value with these solutions. We have the abilities to determine how work processes should change to leverage our overall abilities.

The people side of the transformation problem - management decision making and social networks - is, admittedly, more difficult (Rouse, 2006). Management needs to feel some compelling sense of urgency, a sense that business as usual cannot sustain their enterprise. Similarly, the social network needs to understand that change is necessary and, consequently, help to make it happen.

As discussed in earlier chapters, we know what works (Burke, 1996; Mokyr, 1990). Technology, in a broad sense, will enable economic prosperity. The key question is who will prosper. While everyone tends to think in terms of "us" and "them," the answer needs to be, "Everyone." But this immediately raises the question of how this can happen. I think what is needed is a deeper understanding of change.

Upon observing the success of Portuguese immigrants in my hometown of Portsmouth, RI, my mother often observed, "It's their turn." This suggests a progression from the Greeks to the Romans, then the French and the English, and more recently the United States. And, by the way, China is back in the rotation.

However, another view is possible. Transformation is inevitable. Technologies change, organizational models fade, and culture evolves. We may try to preserve the past - some would argue to maintain our lifestyles of energy consumption, for example. However, the past is not sustainable. The 1940s cars of which I am so fond are great to look at, at least if you are beyond a certain age, but not as much fun to drive.

Similarly, many things change. Singapore provided low-cost electronics manufacturing until its rising standard of living meant it was no longer competitive in manufacturing, except perhaps for
pharmaceuticals. China is now the low-cost manufacturer. On a recent trip to Vietnam, meetings with senior officials gave me the sense that they are right behind China. Where do we go after that? Indonesia, the Philippines, or maybe Africa?

From the perspective of human-centered design, it is not difficult to see the patterns. We need inventors, innovators, and consumers. Different societies can play different roles, changing roles as time progresses. There is no stasis. Time cannot stand still. Conservatism, no matter how appealing, cannot endure.

However, human-centered design can and does provide an enduring philosophy. There are always stakeholders who have interests. There is always a sweet spot amongst those interests where the support of primary stakeholders will be maximized. There are always considerations that will help secondary stakeholders support the initiative of interest.

The difficulty, of course, is the perception that everything is a win-lose situation. If jobs go to Mexico or China or South Africa, my country is somehow losing. However, win-win situations are quite possible. The U.S. loses manufacturing jobs to Mexico and China, as England lost them to the U.S. The U.S. becomes a service-oriented knowledge economy, followed by Japan, then Mexico, then China as manufacturing migrates to Vietnam, then Africa.

The key international debate is not about political power, sectarian conflicts, and preserving ways of life. It's about enhancing all lives by considering the full range of stakeholders and their abilities, limitations, and preferences. We need to train and aid everyone to assure that most stakeholders are delighted and all stakeholders are supportive. Better yet if everyone is delighted. I'm not completely sure how that can be accomplished, but I'm fully confident that serendipity will yield unexpected but wonderful paths.
REFERENCES

Bok, D. (2003). Universities in the marketplace: The commercialization of higher education. Princeton, NJ: Princeton University Press.
Burke, J. (1996). The pinball effect: How Renaissance water gardens made the carburetor possible and other journeys through knowledge. Boston: Little, Brown.
Duderstadt, J.J. (2000). A university for the 21st century. Ann Arbor, MI: University of Michigan Press.
Florida, R. (2004). The rise of the creative class: And how it is transforming work, leisure, community, and everyday life. New York: Basic Books.
Friedman, T.L. (2005). The world is flat: A brief history of the twenty-first century. New York: Farrar, Straus and Giroux.
Killian, J.R., Jr. (1985). The education of a college president: A memoir. Cambridge, MA: MIT Press.
Klein, G.A., Feltovich, P.J., Bradshaw, J.M., & Woods, D.D. (2005). Common ground and coordination in joint activity. In W.B. Rouse and K.R. Boff, Eds., Organizational Simulation: From Modeling and Simulation to Games and Entertainment (Chap. 6). New York: Wiley.
Levin, R.C. (2003). The work of the university. New Haven, CT: Yale University Press.
Mokyr, J. (1990). The lever of riches: Technological creativity and economic progress. New York: Oxford University Press.
Rouse, W.B. (1994). Best laid plans. New York: Prentice-Hall.
Rouse, W.B. (1996a). Best laid plans: Discovering who you are and where you are headed. The Futurist, 30, 34-38.
Rouse, W.B. (1996b). Atlanta: The next Route 128 or Silicon Valley? Competitive Edge!, October/November, 16-19.
Rouse, W.B. (2003). Invention and innovation in technology and art. In B. B. Borys and C. Wittenberg, Eds., From Muscles to Music. Kassel, Germany: University of Kassel Press.
Rouse, W.B. (Ed.). (2006). Enterprise transformation: Understanding and enabling fundamental change. New York: Wiley.
Rouse, W.B., & Rouse, R.K. (2004). Teamwork in the performing arts. Proceedings of the IEEE, 92 (4), 606-615.
Saxenian, A. (1996). Regional advantage: Culture and competition in Silicon Valley and Route 128. Cambridge, MA: Harvard University Press.
Snow, C.P. (1962). The two cultures. Cambridge, UK: Cambridge University Press.
INDEX
3M, see Minnesota Mining and Manufacturing Company
Abbott Labs, 17
Abbott, K., 229
academia
  prospects for, 413
academic community, 412
academic freedom, 394
acceptability, 231, 243
Acevedo, R., 302, 308, 382, 385, 397, 403
acquisition, 56
acquisition sources
  selection, 56
action plans
  designing, 300
  executing, 301
activities, 313, 362
adaptive aiding, 78, 81, 84, 100, 104, 105
  allocation, 86, 89
  architecture, 89
  First Law of, 90
  framework for design, 86
  partitioning, 85, 89
  principles of adaptation, 87
  principles of interaction, 88
  transformation, 86, 89
adaptive filtering, 71
Aegean Sea, 123
Aegis Cruiser, 147, 343
Aegis Team Training, 40
aerial reconnaissance, 84
aerospace industry, 4, 185, 189, 208
aesthetic value, 161
affordability analysis, 159
affordances, 413
Africa, 416
agility, 224
Agincourt, 371
Ahlstrand, B., 401
aiding, 13, 84, 94, 134, 145, 148, 155, 171, 344, 407, 416
  incorrect, 149
  misleading, 149
  rule-based, 138
  KARL, 148
Air Force, 17, 28, 103, 104, 181
Air Force Aerospace Medical Research Laboratory, 84
Air Force Cambridge Research Laboratory, 28
Air Force Flight Dynamics Laboratory, 84
Air Force Research Laboratory, 16, 265
Air Force Scientific Advisory Board, 260
air traffic control, 13, 22
aircraft, 5, 78, 155, 190, 317, 373
  companies, 322
  engines, 1
  manufacturers, 5
  pilots, 5, 189
  powerplants, 92
aircraft maintenance
  training, 92
  troubleshooting, 92
airlift enterprise, 397
airlines, 5
allocation process
  hierarchical, 58
Amazon, 368
American College Testing Score, 122
Amram, M., 304
analogical thinking, 364
analytical engine, 373
Anderson, D.L., 150
Andes, R.C., Jr., 88, 106
Annual Conference on Manual Control, 78, 85
applied mathematics, 393
appropriateness, 158
arbitrage, 277
architects, 354
architecture, 4, 361, 381
  Adaptive Aiding, 89
  Error Monitoring, 96
  Intelligent Interface, 101
  Interface Manager, 102
  Operator Model, 102
  Support System, 188
Argyris, C., 346, 365, 398
Armageddon, 40
Arnott, D., 147
article service, 218
artificial intelligence, 77, 103, 383
artists, 354
arts, 4, 47, 303, 355, 414
Ashuri, B., 381, 398
Asimov, I., 90, 106
assets
  financial, 269
assumptions
  incorrect, 330
AT&T, 375, 376, 379
Atanasoff, J., 374
Atkinson, H., 67
Atlanta, GA, 109, 344, 393
atmospheric physics, 28
attention allocation, 79
attributes
  economic, 233
  non-economic, 233
Augenbroe, G., 381, 398
Austin, TX, 411
automation, 13, 77, 78, 134, 171, 183
  intelligent, 78
  decisions, 183, 184
automobile industry, 3, 322
automobiles, 317, 373
axiomatic science, 413
axiomatic traditions, 355
axioms, 388
Baba, M., 353, 403
Babbage, C., 373
Bacon, R., 372
Bae, J., 381, 398
Bailey, D.E., 362, 398
balanced scorecard, 296, 358, 360
ballet, 43
Ballhaus, W., Jr., 260, 304
Barley, S.R., 362, 398
Barnett, J., 103
Barrett, C., 16, 35
Basole, R.C., 302, 304, 383, 398
Battelle, J., 72, 73
battery eliminators, 324
Bayesian analysis, 277
Bayha, F.H., 272, 306
BBC, 258
Beckenbaugh, W., 265
Bedford, MA, 28
behavioral and social sciences, 1, 4, 396
beliefs, 334
Bennis, W., 315, 346
Benz, C., 372
Berchtesgaden, Germany, 78
Berlin Wall, 389
beyond visual range combat, 88
big bets, 340
big graphics and little screens, 173
Billings, C.E., 5, 19, 229, 253
binomial methods, 276
biology, 35
biomechanics, 35
biotechnology industry, 303
Bitran, J., 170
Black, F., 265, 276, 304
Blenkinsop, J., 372
Bodner, D.A., 282, 304, 398
Boeing, 17, 170, 323, 373
  B707, 373
  B777, 99
  KC-135, 373
Boer, F.P., 269, 304
Boff, K.R., 16, 161, 162, 169, 171, 174, 189, 190, 193, 194, 207, 208, 225, 227, 228, 265, 280, 285, 293, 295, 297, 308, 309, 338, 341, 343, 349, 351, 383, 384, 397, 403
Bok, D., 416
Booher, H.R., 5, 19, 158, 169, 182, 190, 229, 253
book overview, 9
Boolean logic, 206
border crossings, 5
Boston, MA, 411
Box-Jenkins time series models, 71
Boyle, P.P., 276, 304, 305
Bradshaw, J.M., 417
Brigham, E.F., 264, 305
Broadie, M., 276, 305
Brown, J.S., 315, 346, 347, 383, 398, 399
budget allocation, 282
Burke, C., 51, 73
Burke, J., 3, 19, 258, 305, 346, 415, 416
Burley, J., 260, 310
Burns, J.J., 48
Burroughs, 374
Burt, R.S., 398
Bush, V., 51, 73, 205, 225
business as usual, 415
business functions, 367
business incubators, 412
Business Planning Advisor, 214, 311, 344, 389
business process improvement, 353
Business Process Reengineering, 329, 353
business processes, 13, 17, 189, 190, 315
business situations
  common, 317
  typical transitions, 320
cable television, 257
Cadillac, 330
CAIN, 132
calculators, 317, 322, 374
Calderwood, J.R., 191
Caldwell, F., 201, 225
Champaign Public Library, 68
cannibalization, 245, 329
Cannon-Bowers, J.A., 41, 43, 48, 49
capabilities, 268, 397
  future, 302
capital campaign, 393
Card, S.K., 5, 19, 229, 254
Carlson, J.N., 116, 150
Carns, W.E., 228, 270, 309
Carr, P., 276, 305
case stories, 66
  option purchases
    acquire capacity, 274
    acquire competitor, 275
    invest in R&D, 274
    run the business, 274
  product planning, 246
  technology investments, 274
cash flow, 291
cash registers, 317, 322, 374
Cassimon, D., 276, 305
Casti, J., 167, 319, 346
Castronova, E., 341, 346
Caverlee, J., 381, 398
cellular phones, 324
census bureau, 373, 374
challenges, 311, 407
  change, 313
  focus, 313
  future, 313
  growth, 313
  knowledge, 313
  time, 313
  value, 313
chamber orchestra, 43
Champaign, IL, 68
champions, 286, 295
Champy, J., 353, 361, 400
change, 311, 313, 315, 407
  disruptive, 295
  fundamental, 353
  implementing, 185
  routine, 353
characters
  real, 339
  synthetic, 339, 343
Charan, R., 330, 346
checklist, 80
chemical industry, 260
chemical processes, 257
chemistry, 35
Chesmore, H., 365, 405
Chevrolet, 330
  1949, 3
China, 378, 414, 415, 416
chorus, 43
Christensen, C.M., 258, 295, 305, 315, 346, 394, 399
Chu, Y., 81, 106
circuit routing, 15
city management, 343
Clairol, 378
Clausing, D., 254, 358, 400
Clear Channel, 376, 378, 379
Clough, W., 392
CNN, 26, 368
coaches, 296
Coca-Cola, 17
Cody, W.J., 207, 208, 225, 227, 228
cognitive mechanisms, 29
cognitive modeling, 35
cognitive style, 121
collaboration, 303
collaboration technology, 382
Collins, J.C., 313, 315, 321, 325, 346, 380, 386, 399
Colvin, G., 315, 330, 346
Combat Information Center, 40
commodity trap, 319
common ground, 409
common sense, 10
communications networks, 130
Community Dialog Project, 38
compatibility, 211
competencies, 371
competitive analyses, 232
Competitive Edge, 344
complex organizational systems, 352
complex systems, 109, 155, 180, 190, 393, 397, 415
complexity, 43, 116, 133, 355, 398
  relationship to intentions, 118
  strategic, 133
  structural, 144
Compton, W.D., 272, 306
computational representations, 342
computer and communications technologies, 382
computers, 257, 317
  power, 375
  programming, 34
  science, 78
  speed, 375
computing, 1, 4, 373, 381, 396
Computing Surveys, 83
concepts, 381
confidence, 171
Congress, U.S., 171
Connections, 258
connections, 408
consolidation, 319
constituencies, 159, 356
consumers, 328, 416
content analysis, 157
Contextually Augmented Integrated Network, 132
contract services, 327
control, 14, 15, 77, 156
  actuators, 155
  theory, 166
controllability, 354
controllers
  non-linear, 173
controls, 155, 172, 407
convergence proofs, 34
Cook, S.D.N., 315, 347, 383, 399
Cooper, R.G., 262, 305
Coordinated Science Laboratory, 16, 77
Cornfield Cruiser, 40
cost efficiencies, 367
cost savings, 291
cost/benefit analysis, 158, 159
  comparison of frameworks, 161
  methodology, 162, 163
cost/effectiveness analysis, 159
Council of Library Resources, 51
Covey, S.R., 316, 347
Cox, J.C., 276, 305
creation, 156
creative jobs, 411
creativity, 328
Crecine, J.P., 392
CRM, see Customer Relationship Management
Crossing the Chasm, 318
crossover, 318
cue utilization, 33
Cugnot, N., 372
cultural impacts of technology, 158
culture, 356, 393, 415
cultures, 1, 4, 260, 296
  Eastern, 5
  Western, 5
Curry, R.E., 103, 104, 107
customer, 359
Customer Relationship Management, 199, 395
customer support, 312, 390
customers, 53
cyclical theory of business growth, 317
da Vinci, L., 372
Dad's Garage, 47
Daimler, G., 372
DARPA, see Defense Advanced Research Projects Agency
Darwin, C., 354, 398
data collection, 64, 201, 240, 268
database
  search, 205
  structure, 205
Database Access and Search Environment, 205
Davenport, T.H., 356, 395, 399
Davis, M., 35, 47
Davis, P.J., 35, 47
Day, G.S., 347
DBASE, see Database Access and Search Environment
DEC, see Digital Equipment Corporation
deciding what to do, 77
decision criterion, 282
decision making, 4, 78, 171, 197, 212, 286, 291, 315, 354, 395, 407
  agility, 224
  data-driven, 14
  design, 207
  R&D, 205
decision processes
  multi-stage, 261
decision support systems, 328, 409
decision theory, 328
decisions
  funding, 293
  R&D investment, 260
decline
  graceful, 314
defense, 185
  conversion, 335
  electronics, 336
  industries, 389
Defense Advanced Research Projects Agency, 17, 35, 100, 103, 104
deficiencies, 300
DeJohn, W., 66
Delft University of Technology, 34, 92
Delft, The Netherlands, 91
Dell, 368, 396
delusions, 302
  principles, 333
  questions, 332
  risks, 331, 332
demands, 51
  forecasting, 52, 69
Deming, W.E., 353, 399
Department of Mechanical and Industrial Engineering, University of Illinois, 77
design, 14, 15, 94, 156, 207, 225
  challenges, 210
  engineering phase, 8
  environment, 207, 209
  evaluation, 182
  for success, 8
  framework, 6, 7
  guidance, 179
  information requirements, 202
  information world, 208, 212
  marketing phase, 7
  methodologies, 179, 189
  methods and tools, 211
  naturalist phase, 7
  objectives, 6, 229
  of aiding, 174
  of training, 179
  process, 297
  sales and service phase, 8
  solutions, 232
  spirals, 211
  support, 208, 209, 211
  teams, 210
  theory, 208
  tools, 212
Designer's Associate, 174
designers, 189, 407
detection, 407
detection performance, 111
diagnosis, 407
  CAIN, 132
  failure, 135
  FAULT, 121
  impact of aiding, 112, 121, 128, 134
  impact of cognitive style, 121
  impact of feedback, 114
  impact of incentives, 129
  impact of network size, 112
  impact of pacing, 112
  impact of redundancy, 114
  impact of training, 112, 121, 128, 134
  MABEL, 132
  performance, 111
  summary of results, 134
  summary, 123
  TASK, 112
Diesing, P., 355, 399
differential equations, 31
Digital Equipment Corporation, 24, 170, 374, 394
  Digital DEC-10, 104
  Digital PDP-8, 24
digital signal processors, 247
diminishing returns, 58
direct mail, 312
disciplines, 1, 4
discount rate, 264
discounted cash flow models, 261, 263
discriminant models, 79
displays, 155, 171, 172, 407
  predictor, 22, 23
  symbols, 172
distinctions, 408
distribution, 396
diversions, 315
Dixit, A.K., 276, 305
domains, 1, 4, 315, 358
  comparison, 221
Douglas Aircraft Company, 373
  DC-3, 373
Doyle, J., 116, 150
Draper, S.W., 5, 19, 229, 254
Drucker, P.F., 359, 399
Duderstadt, J.J., 414, 417
Duguid, P., 315, 346, 383, 398
Duke University, 412
Duncan, P.C., 42, 48, 196
Durlach, N.I., 340, 347
Duryea, C., 372
Duryea, F., 372
Dutch Organization for Applied Scientific Research, see TNO
dynamic programming, 59
dynamic systems, 25, 83
E Systems, 322
Eastern Airlines, 99
Eastman Kodak, 394
Eckert, J.P., 374
ecology, 43
economic analysis
  traditional, 160
economic development, 393
economic growth, 411
economic pressures, 414
economic progress, 259
economic prosperity, 411
economic value, 415
Economist, 375, 378, 399
economists, 354
education, 413
Edwards, S.L., 168, 195
efficient frontiers, 288
egress, 69
Einstein, A., 354
Electric Power Research Institute, 183
electric utility industry, 186
electrical design, 156
electrical engineering, 15, 18, 23, 78, 253
electronic checklists, 93, 98
Electronic Numerical Integrator and Calculator, 374
electronic polling system, 39
electronics industry, 4
electronics manufacturing, 415
Elmore, G., 103
Elsevier, 166
Embedded Figures Test, 121
emergence, 4
emergencies, 69
emerging enterprise technologies, 382
emission control systems, 246
empirical science, 413
empirical traditions, 355
end users, 229, 312
energy, 415
Engelen, P.J., 276, 305
engineering, 1, 4, 354, 381, 396
engineering design, 155
Engineering Library
  University of Illinois, 60, 69
engineering science, 155
England, 416
ENIAC, 374
Enstrom, K.D., 30, 49, 79, 106
enterprise, 1, 190, 316, 408
  architectures, 382
  information systems, 341
  mobility, 303
  model, 357
  physics, 397
  seas, 355
  stasis, 313
  state, 357
  support system, 174
  systems, 352
  technologies, 380
Enterprise Resource Planning, 199, 395
Enterprise Support Systems, 17, 238, 254, 270, 305, 311, 326, 347, 351, 389
enterprise transformation, 352, 396
  context, 356
  theory, 360, 395
entertainment, 383
entrepreneurs, 5, 259
environment, 18, 413
  compelling,
environmental management, 335
Erector Set, 3
Ergonomics, 123
ergonomics, 157
Ericsson, K.A., 32, 48
ERP, see Enterprise Resource Planning
error, 77, 407
  classification, 97
  identification, 97
  mean-squared prediction, 27
  monitor, 100, 104
  monitoring, 96, 105
  remediation, 89, 97
  tolerance, 96
  tolerant systems, 95
ESS, see Enterprise Support Systems
essential challenges, 312, 314, 386
essential phenomena, 9, 11, 14, 53, 77, 109, 407
estimation, 21, 407
estimation theory, 166
evaluation
  guidelines, 183
  online journal, 220
Evans, H., 3, 19
evolution, 318
Excel, 149
executives, 344, 397
expected value, 27
experimental environments, 342
expert system, 182
facilitation, 216, 249
  elements of, 251
factory, 190
failure, 259
  detection, 84, 109, 135, 171
  diagnosis, 84, 109, 135, 168, 171
  measure of performance, 122
  summary of models, 143
failures, 109, 407
  lack of execution, 297
  misalignment, 297
false alarms, 77
false beliefs, 330
family, 5
Farley, M., 303, 305
FASTPASS, 55
Fath, J.L., 125, 152
fault diagnosis, 15
FAULT, 111, 118, 125, 143, 145, 147
Federal Aviation Administration, 99
federal government, 414
Feltovich, P.J., 417
Ferrell, W.R., 191
filtering, 22
financial instruments, 269
financial resources, 316
financial statements, 358, 360
First Law of Adaptive Aiding, 90
Flannery, T.P., 336, 347, 356, 399
flat world, 414
flight management, 80
flight operations, 124
Florida, R., 411, 417
focus, 313, 315
food services, 327
Ford, 396
forest products industry, 283, 284
formulation, 156
Fortune 500, 17
Foster, R., 271, 305
Framework for Aiding the Logical Understanding of Troubleshooting, see FAULT
free cash flow, 269
free electrons, 413
Frey, P.R., 172, 186, 190, 194, 196, 208, 228
Friedman, T.L., 414, 417
front-end analysis, 184
Fu, K.S., 154
Fulton, R., 372
functions, 362
  product, 231, 241
  service, 231, 241
  system, 231, 241
fundamental change, 353, 397, 410
fundamental changes of markets, 367
fundamental limits, 16, 34
future, 259, 313, 315
  functions, 136, 139
  deploying, 340
  designing, 340
  evaluation, 340
fuzzy sets
  fuzzy rule-based model, 139
fuzzy set membership
  fuzzy set model, 136
fuzzy set theory, 136, 166
gaming, 383
Gapenski, L.C., 264, 305
Garcia, D., 378, 385, 391, 399, 403
Garg, D.P., 174, 194
Garnham, A., 18
Garris, R.D., 190
gatekeepers, 296
Gavetti, G., 364, 400
GE, 322
Geddes, N.D., 103, 104, 105, 107, 108
Gell-Mann, M., 116, 150
General Dynamics, 322
General Motors, 394, 396
George, B., 315, 347, 356, 364, 400
Georgia Institute of Technology, 100, 124, 166, 186, 351, 391, 395, 410
Germany, 186
Gerstner, L., 379
Geske, R., 276, 305
Gesterfield, K., 67
Gillette, 378
Gladwell, M., 337, 347, 364, 400
Glasserman, P., 276, 305
Glymour, C., 35, 48
goals, 313
  choices, 330
good plans quickly, 251
Goodstein, L.P., 348, 402
Google, 52
Gopher, D., 165, 194
governance, 286, 291
government agencies, 4
Govindaraj, T., 106
grade point average, 392
graduate education, 392
Granovetter, M., 365, 400
graphics
  large, 172
Greenstein, J.S., 142, 150
Gregston, G., 48
Griffin, A., 295, 306
group decision making, 38
groups, 38
growth, 313
Grumman, 322
Hammer, J.M., 35, 36, 38, 49, 93, 98, 104, 105, 106, 107, 168, 172, 173, 190, 195, 196
Hammer, M., 353, 361, 400
Hammond, J.S., 232, 254
Hanawalt, T., 386
Hancock, W.M., 272, 306
Handbook of Systems Engineering and Management, 156
Hanscom Field, 28
Harding, H., 378, 379, 400
Harris, K., 201, 225
Hartle, F., 336, 349, 356, 405
Harvard University, 412, 414
Hauser, J.R., 254, 358, 400
Health Advisor, 343
health care, 18, 413
hedges, 315
Heim, J.A., 272, 306
Heisenberg, W.Z., 34, 48
helicopter maintenance, 172
Hemp, P., 379, 400
Henneman, R.L., 130, 150, 151, 172
heterarchical structures, 292
Hewlett-Packard, 321
hierarchical multi-page electronic displays, 172
hierarchical structures, 292
hierarchical view of production, 187
hierarchy
  abstraction, 13, 172
  aggregation, 13, 172
Hitachi, 17
Hofrichter, D.A., 336, 347, 399
Hollerith, H., 373
Hollnagel, E., 364, 400
Holweg, M., 396, 400
Honda, 32
Honeywell, 17
House of Quality, 236, 358
Howard, C.W., 228, 238, 255, 270, 309, 351, 390, 404
hubris, 394
Hughes, 17, 323
human abilities, 1, 6, 10, 11, 14, 105, 134, 150, 155, 158, 225, 312, 344, 397, 416
  enhancing, 230, 408
human acceptance, 6, 105, 150, 158, 183
  fostering, 230, 408
  guidelines, 183
human adaptation, 24, 95, 129, 168
human behavior, 165
human capital, 411
human effectiveness, 158, 165
Human Effectiveness Directorate, 265
human error, 6, 91
  demographics, 91
  error analysis, 93
  error classification, 93
  leading indicators, 85
  reduction, 95
  studies, 91
  tolerance, 95
human expectations, 224
Human Factors Group, 156
human factors, 157
human inclinations, 134
human information processing, 10, 171
human limitations, 1, 6, 11, 12, 14, 105, 134, 150, 155, 158, 312, 337, 344, 416
  overcoming, 230, 408
human performance, 165, 168
human preferences, 312, 344, 416
human roles, 155
human/machine interaction, 170
human-centered design, 47, 72, 149, 189, 225, 229, 252, 285, 304, 343, 344, 397, 408, 416
  definition, 229
  tools, 252
human-centered philosophy, 316, 416
human-centered planning, 311
human-computer communication, 84
human-computer interaction, 83, 171
humanities, 4
human-machine interaction, 84, 165, 179, 190
human-system tasks
  approaches to supporting, 177
  general, 175
  demands, 176
Hunt, R.M., 100, 118, 121, 123, 139, 143, 145, 147, 151, 153, 172, 186, 187, 190, 194, 303, 309, 389
IBM, 373, 374, 376, 379, 380
IDEF models, 282
identification algorithm, 79
IKSM Online, 218, 220
ILLINET, 61, 65, 66, 70, 71
Illinois Library and Information Network, see ILLINET
Illinois State Library, 61, 66, 71
impedances, 413
improv theater, 43, 47
incentive and reward systems, 286, 300, 336, 394, 412
India, 414, 415
individuals, 1, 408
  affiliations, 286, 294
Indonesia, 416
industrial electronics, 336
industrial engineering, 16, 55, 171, 187, 351
industries, 4
information, 197, 407
  availability, 64
  capture, 216, 249
  definition, 201
  desks, 55
  ease of access, 207
  external, 213
  flows, 293
  internal, 213
  location, 64
  needs, 198
  requirements, 200
  seeking, 204, 208, 212, 222
  structure, 205
  support, 220
  technology, 382
  theory, 168
  time relevance, 213
  types, 203
  utilization, 207
  value, 204
  vs. knowledge, 202
Information Knowledge Systems Management, 218
information and communications technologies, 302
Information Manager, 38
information processing, 167
  knowledge-based, 167
  rule, 167
  skill, 167
  systems, 77
information systems
  agility, 224
  expectations, 224
ingress, 69
initial public offering, 411
innovation, 47, 190, 257, 280, 407, 411, 414, 415
  funnel, 261
  technology, 258
  vs. invention, 257
innovator's dilemma, 394
innovators, 416
input, 387
Institute for Mechanical Construction, 91
Institute of Aviation, 145
instruction, 33
integration, 211
Intel, 374
intellectual freedom, 413
intelligent agents, 197
intelligent interfaces, 5, 96, 98, 303
  architecture, 101
  fundamental limits, 36
intelligent manufacturing systems, 187
intelligent tutoring systems, 280
Intent Model, 103
interactive graphics, 28
interface management, 105
Interface Manager, 100, 102, 104
  architecture, 102
internal combustion engines, 373
Internal Rate of Return, 264
Internet, 197, 213, 257, 375
intraprise, 356
intuitions, 364
invention, 47, 257, 303, 407, 411, 414
  vs. innovation, 257
inventors, 259, 416
investments
  decisions, 212
  management policies, 283
  returns, 315
  under uncertainty, 276
  valuation, 17, 381, 384
Iowa State University, 374
Iran, 40
IRR, see internal rate of return
Jacquard, J., 373
Jain, R.K., 296, 306
Japan, 415, 416
jazz, 43
Jensen, M.C., 380, 400
jet engines, 373
job mobility, 411
jobs, 155, 313
Johannsen, G., 78, 124, 151, 165, 190
John Wiley, 8
Johnson, W.B., 92, 106, 121, 145, 147, 151, 179, 186, 191
Johnston, J.H., 48
Joint Services Electronics Program, 84
Jones, D.T., 282, 310, 315, 349, 361, 405
Journal of Systems Engineering, 352
joysticks, 172
Kahneman, D., 26, 48
Kalman filtering, 143
Kang, J.H., 71, 74
Kaplan, R.S., 262, 296, 305, 306, 358, 400
KARL, 111, 140, 143, 148
Kash, D.E., 258, 309
Katzenbach, J.R., 43, 48, 356, 400
Keebler, 378
Keeney, R.L., 27, 57, 74, 232, 254
Kellogg, 376, 379, 380
Kelly, K., 48
Kessler, W.C., 282, 306, 361, 396, 400
keyboards, 172
Kiel, G.C., 186, 191
Killian, J.R., Jr., 414, 417
Kisner, R.A., 194
Kissinger, H., 33
Kivinen, J., 186, 191
Klein, G.A., 10, 19, 139, 174, 191, 337, 347, 364, 401, 409, 417
Kleinau, S., 396, 401
Knaeuper, A.E., 140, 148, 151
knowledge, 197, 313, 315, 407
  definition, 201
  engineering, 38
  executable, 253
  external, 213
  flows, 293
  internal, 213
  management, 199, 201
  needs, 198
  operational, 179
  seeking, 208, 212, 222
  sharing, 385
  support, 220
  system, 180
  types, 203
Knowledge Capital, 160
Knowledgeable Application of Rule-based Logic, see KARL
Kober, N., 159, 194, 195
Kollar, D.L., 277, 307, 384, 402, 404
Korea, 415
Kouzes, J.M., 315, 347, 356, 364, 401
Kuhn, T.S., 34, 48
Kulatilaka, N., 304
Kusiak, A., 282, 306
Lampel, J., 401
Lander, D.M., 276, 306
Lantz, N., 3
Larson, N., 282, 306
law, 4, 355
Laws of Robotics, 90
leaders, 364
  influence, 293
  power, 293
leadership, 43, 304, 315, 356, 370, 393
  involvement, 385
  skills, 159
  styles, 364
  transformational, 315
lean processes, 361
lean thinking, 329
learning
  double-loop, 365
  single-loop, 315
Lee, I., 272, 306
Legacy of Bologna, 412
lending policies, 62
Levin, R.C., 414, 417
Lewis, C.M., 35, 49, 381, 401
Li, L., 150
libraries, 13, 51
  floor plans, 69
  operations, 205
  performance, 62
library and information sciences, 51
library circulation, 70
Library Network Model, 62
library networks, 1, 15, 60, 66, 205
  case studies, 66
Licklider, J.C.R., 51, 74, 205, 225
life insurance, 276
life-cycle costing, 159
lifestyles, 415
Liker, J.K., 361, 396, 401
limitations, 225
limits
  compensation, 37
  detection, 37
  diagnosis, 37
Lincoln, K.R., 169, 190
line balancing, 282
Lint, O., 277, 307
literature, 355
Liu, L., 381, 398
Liverpool and Manchester Railway, 372
Lockheed, 16, 17, 100, 103, 104, 322, 323
Lockheed Martin, 368
  F-22, 105
longbow, 371
Los Alamos National Laboratory, 16, 35
Low, S., 150
Lowenstein, R., 377, 401
Lucent, 376, 377, 379
Luehrman, T.A., 269, 306
Luenberger, D.G., 269, 306
Luther, M., 371
Lynch, C.L., 362, 404
MABEL, 111, 130, 131
machines, 352
Macintosh, 374
Maddox, M.E., 186, 191
Magaziner, I., 319, 347
mailing, 312
maintainability, 18, 171
maintainers, 189, 407
maintenance displays, 172
Majd, S., 277, 306
management, 1, 4, 187, 212, 354, 381, 396
  decision making, 363, 391, 415
  information requirements, 202
  processes, 17
  services online, 218
management teams, 333
  knowledge requirements, 202
managers, 189, 407
  senior, 344, 397
MANPRINT, 182
manual control, 171
manufacturing, 185, 186, 303, 352, 396, 415
manufacturing industry, 189
Marietta, GA, 100
marine industries, 389
Marine Safety International, 147
market, 356
  capitalization, 358
  channels employed, 369
  conditions, 357
  maturity, 271
  models, 409
  perceptions, 367
  selection, 245
  strategies, 341
  targeted, 369
  valuation, 359
Markham, S.K., 295, 306
Martin Marietta, 322, 323
Martin, T., 158, 186, 191
Martzoukos, S.H., 277, 306
Matching Familiar Figures Test, 121
Material Resource Planning, 199
mathematics, 33, 34
  linear, 414
  non-linear, 414
Mathiassen, L., 381, 401
Mati, Greece, 123, 168
maturity
  market, 271
  technology, 271
Mauchly, J., 374
Mavor, A.S., 155, 159, 191, 194, 195, 340, 347
Maxwell, J., 167
McCaw Cellular, 375
McDonnell-Douglas, 323
McGinnis, L., 381, 401
measures of performance, 122
  errors, 122
  inefficiency, 122
  time, 122
mechanical design, 156
mechanical engineering, 18, 187, 253
MediaOne, 375
medical imaging systems, 247
medical industry, 303
medicine, 4
mental arithmetic, 79
mental models, 12, 21, 30, 41, 46, 159, 167, 407
  definition, 31
  functions, 31
  identification, 31
  shared, 42
  task work, 41
  team work, 41
mental workload, 168
mergers and acquisitions, 246
Merton, R.C., 265, 276, 277, 307
Messer, S.B., 121, 151
methods, 381
  design, 211
methods and tools, 380, 381
Mexico, 416
Meyer, P.S., 271, 307
microprocessors, 155, 247
Microsoft, 374
  DOS, 375
  Office, 149
  Windows, 375, 389
Miller, D.B., 296, 306
Miller, G., 51
Miller, W.L., 316, 347
Minnesota Mining and Manufacturing Company, 17, 325
Mintzberg, H., 316, 347, 364, 401
missed events, 77
Mission Planner, 100
MIT, 22, 38, 51, 72, 173, 411, 412, 414
  Humanities Library, 55
  undergraduates, 27
Mitsubishi, 235
Mixed-Fidelity Approach to Training, 146
mobile computing, 383
mobile readiness, 303
model worlds, 354
modeling
  data collection, 64
  coefficient of variation, 65
models
  competitive scenario, 272
  descriptive, 13
  fading memory, 29
  failure detection, 135, 142
  failure diagnosis, 135
  functions/features, 272
  human behavior and performance, 165
  human problem solving, 144
  impact on decision making, 68, 71
  limited memory, 26
  limits, 36
  market, 230, 236
  market/stakeholders, 272
  mental workload, 167
  multi-attribute, 232
  needs-beliefs-perceptions, 334
  option pricing, 268
  prescriptive, 13
  product learning, 272
  product, 230, 236
  rule-based, 137
  s-curve, 271
  signal processing, 35
  simulation, 383
  spiral, 240
  state transition, 143
  symbol processing, 35
  tool, 409
  utility theory, 232
modularity, 211
Mohr's Law, 253
Mohrman, A.M., Jr., 365, 401
Mohrman, S.A., 365, 401
Mokyr, J., 307, 411, 415, 417
Monitor, Access, Browse, and Evaluate Limits, see MABEL
Monte Carlo simulation, 270, 276, 288
Moore, G.A., 318, 348
Moore's Law, 105
Moorestown, NJ, 40
Moran, T.P., 5, 19, 83, 229, 254
Moray, N.P., 158, 168, 191
Morehead, D.R., 205, 206, 225, 228
Morris, L., 347
Morris, N.M., 31, 49, 84, 94, 96, 106, 107, 125, 128, 134, 138, 141, 148, 149, 151, 152, 168, 183, 191, 194
Morrison, D.J., 316, 349, 359, 404
Morse, P.M., 51, 52, 55, 69, 72, 74
Motorola, 17, 265, 266, 321, 324, 368
MRP, see Material Resource Planning
multi-attribute utility models, 161, 273
multi-attribute utility theory, 27, 39, 57, 230, 232
multi-task performance, 80, 82
multi-task decision making, 77
music, 355
musical theater, 43
Mykityshyn, M.G., 362, 381, 382, 401
myths
  dysfunctional, 291
Nadler, D.A., 363, 402
NASA, 8, 17, 22, 84, 128, 170, 229
NASA Ames Research Center, 78, 85
NASCAR, 1, 47
National Academy of Engineering, 186
National Institute for Mental Health, 28
National Science Foundation, 38, 351, 391, 394
NATO Conference on Human Detection and Diagnosis of Systems Failures, 124
NATO Conference on Human Supervisory Control, 78, 123
NATO Conference on Mental Workload, 123, 168
NCR, 266, 374, 375
needs, 334
needs and beliefs, 391
  dysfunctional, 344
Net Option Value, 269, 282, 285, 288, 292, 301, 385
Net Present Value, 233, 264, 269, 282, 285, 288, 384, 385
network models, 13
  individual libraries, 63, 68
networking, 375
networks, 51, 52, 54
  hierarchical, 61
  impact of technology, 63
  large-scale dynamic, 130
  libraries, 60
  routing, 61
neural networks, 31, 32
neuromotor systems, 77
New Jersey Turnpike, 40
new product and service offerings, 367
New Product Development, 230
New York, NY, 411
Newell Rubbermaid, 368, 376, 378, 379
Newell, A., 5, 19, 229, 254
Newton, I., 167, 354, 398
Newton's Laws, 253
Nichols, N.A., 307
Nokia, 377, 380
Norman, D.A., 5, 19, 229, 254
North Carolina State University, 412
Northrop, 322
Norton, D.P., 296, 306, 358, 400
Nobel Prize in Economics, 266
NOV, see Net Option Value
NPD, see New Product Development
NPV, see Net Present Value
nuclear power industry, 172
nuclear power plants, 14, 78, 183
O'Brien, L., 379, 405
O'Toole, J., 315, 346
Obi Wan Kenobi, 338
objectives, 313, 362
observability, 354
offerings provided, 370
Office of Naval Research, 40, 173
offshoring, 370
Ohm's Law, 253
Olympics, 393
Oncken, W., Jr., 316, 348
Online College Library Center, 63
online games, 409
online work, 385
Open Book Management, 329
operability, 171
operations research, 51
Operator Model, 101, 102
  architecture, 102
operators, 189, 407
opportunities, 315, 357
optimal control, 82
optimal preview control, 82
optimization, 316
option pricing models, 268, 384
option pricing theory, 160
option values
  calculating, 269
option-based thinking, 315
options, 257, 407
  call, 276
  example calculations, 270
  exercise, 268
  extensions, 276
  financial, 265
  framing, 268
  limitations, 276
  purchase, 268
  put, 276
  real, 265, 285
  technology, 266
orange peril, 166
organization
  agile, 336
  lean, 336
  non-profit, 4
organizational belief systems, 212
organizational boundaries, 293
organizational change, 381, 385
organizational culture, 381, 385
organizational delusions, 212, 330, 331, 344, 394, 413
organizational design, 299
organizational development, 159
organizational implications, 341
organizational learning, 365
organizational learning disabilities, 315
organizational performance, 341
organizational simulation, 17, 337, 380, 383
  applications, 343
  architecture, 338
organizational story, 338
organizational structure, 286, 292
organizations, 1, 407, 408
organizing for value, 297
OrgSim, see organizational simulation
outsourcing, 370
overhead, 381
packaging, 312
pagers, 324
Palace Option, 28
Palmisano, S., 379
Paradigms Lost, 319
parameter estimation, 268
Paris, C.R., 43, 48
Park, D., 35, 47
Pascal, B., 373
patents, 415
Patinkin, M., 319, 347
pattern recognition, 6
Patterson, F.G., Jr., 282, 307
Pejtersen, A.M., 205, 206, 225, 347, 402
Pellegrino, S.J., 137, 154
Pennings, E., 277, 307
Pennock, M.J., 277, 307, 384, 402, 404
people, 407
perceptions, 334
perceptual systems, 77
performance, 190
  aids, 13
  deficiencies, 42
  model, 101
performing arts, 17, 43, 414
  ecology, 43, 46
  leadership, 43
  teams, 43
Perrow, C., 152
Persian Gulf, 40
perspective
  cross-disciplinary, 294
  transdisciplinary, 1, 18
Pettit, J., 391
Pew, R.W., 155, 191
pharmaceutical industry, 303, 416
phenomena
  holistic, 18
Philippines, 416
physical resources, 316
physical space, 211, 242
physics, 35
physiology, 35
Pil, F.K., 396, 400
Pilot's Associate, 16, 35, 38, 84, 100
pilot-centered design, 229
Pilot-Vehicle Interface, 38
Pinches, G.E., 276, 306
Pindyck, R.S., 276, 277, 305, 306
Planck, M., 413
plan-goal graphs, 37, 38, 103, 167
planning, 3
  depth, 124
  event-driven, 125
  time-driven, 125
  sessions, 216
plans
  implementation, 330
PLANT, 111, 125, 140, 143, 148
  diagnostic performance, 128
  malfunctions, 127
  operating procedures, 127
Platten, P.E., 336, 347, 399
Plymouth
  1952, 2, 18
point-of-sale terminal, 105
policy implications, 302
politics, 355
Porras, J.I., 313, 315, 346, 380, 386, 399
portfolio, 286, 287, 288
  technology strategy, 289
  technology options, 288
Portsmouth, RI, 3, 415
positioning
  competitive, 232
Posner, B.Z., 315, 347, 356, 401
Post-Its, 3, 4, 258
power plant, 13
practices
  best, 213, 260, 381, 386
  worst, 386
prediction, 14, 15, 22
  cognitive model, 25
preference space, 211
preferences, 6, 225
Prendergast, E.J., 228, 266, 270, 309
principles, 381
printing press, 371
private sector, 313, 341
probabilistic models, 35
problem
  representation, 242
  solvers, 364
  solving, 4, 78, 171, 315
  space, 242
Proceedings of IEEE, 170
process, 215, 249, 319
  control, 125, 185
  improvement, 380, 396
  reengineering, 370
  standardization, 370
process plants, 78
  dynamic, 124
processes, 51, 52, 358, 407
  back office, 52
  customer-facing, 52
  queueing, 53
Procter & Gamble, 377, 378, 379
product
  functions, 231, 241
  models, 409
  plans, 212
product planning, 212
  facilitation, 248
  objections and responses, 249
Product Planning Advisor, 214, 230, 236, 237, 238, 271, 272, 311, 312, 344, 389, 409
  case stories, 246
  interpreting results, 243
  training, 241
  typical users, 245
  usage guidelines, 239
  usage, 239
production
  engineers, 171
  operations, 186
  planning, 186
  processes, 352
  scheduling, 186
  systems, 166
  workers, 189
Production Levels and Network Troubleshooting, see PLANT
productivity, 190, 289, 380
products, 229, 311, 407
profit, 190, 291
Project Intrex, 51
projections
  hockey stick, 264
proofs, 388
propulsion engines, 373
Protestant Reformation, 371
protocols, 375
psychological distance, 136
Pu, C., 381, 398
public
  dialog, 38
  policy, 354, 415
  sector, 313, 341
puppetry, 43
purpose, 362
push technology, 197
quality, 280
  management, 335
  of life, 161
Quality Function Deployment, 230, 231, 235, 273
queueing, 53
  models, 54, 79, 80
  processes, 61, 69
  theory, 166, 168
R&D, 160, 205, 259, 296, 303, 384
  purpose, 260
  scorecard, 287
  strategies, 311
  value-centered, 285
  value scorecard, 297
R&D investments, 246
  valuation, 290
R&D World, 260, 282, 284, 343
radios, 324
Rai, A., 381, 401
Raiffa, H., 57, 74, 232, 254
railroad companies, 317, 322
Rasmussen, J., 13, 19, 123, 136, 139, 152, 166, 172, 186, 191, 348, 362, 402
Raytheon, 17, 18, 51, 61, 156, 322, 323
reductionism, 414
reengineering processes, 336, 361
regression
  linear, 71
regulatory bodies, 5
reimbursement policies, 62
Reliability & Maintainability Department, 156
reliability and maintainability analysis, 18, 156
Remington-Rand, 374
renaissance, 258
representation, 156
representativeness, 27
research
  information requirements, 202
  institutes, 412
  knowledge requirements, 202
  multi-disciplinary, 410
  portfolio, 410
  transdisciplinary, 410
  transformation, 380
Research Triangle, 411
researchers, 407
resource allocation, 13, 58, 59, 212
Resource Model, 101
resources
  allocation, 363
  attention, 363
  control, 293
response, 354
retailing, 4
return, 289, 301
return on investment analysis, 160
reward and recognition systems, 234, 356, 371
Rijnsdorp, J.E., 186, 191
risk, 210, 234, 289, 301
  averse attitudes, 5
  correlated, 289
  diversification, 289
  downside, 265, 315
Rivkin, J.W., 364, 400
Roberson, D., 266
Rodd, M.B., 186, 191
Rogers, D.M., 258, 309
Rogers, S., 40, 48
Rogers, W., 40, 48
Rolling Prairie Library System, 66
Rooke, D., 364, 402
Ross, S.A., 276, 305
Roughan, M., 150
Rouse, W.H., 52
Rouse, R.K., 43, 48, 304, 309, 414, 417
Rouse, S.H., 51, 52, 55, 61, 62, 65, 66, 69, 74, 75, 93, 98, 106, 108, 116, 121, 137, 152, 153, 154, 194, 195, 204, 205, 228
Roussel, P., 271, 309
routing policies, 62
Rover, 17
Rovit, S., 378, 379, 400
Rubbermaid, 378
Rubinstein, M., 276, 305
Russia, 372
Rycroft, R.W., 258, 309
safety, 161, 190
Sage, A.P., 156, 195, 201, 228, 282, 309, 351, 362, 381, 396, 404, 405
Salas, E., 41, 43, 48, 49
Sales Force Automation, 199
Salvendy, G., 169, 171, 195
Samuelson, P., 413
San Diego, CA, 411
satisficing, 12, 316
Savannah, GA, 372
Saxenian, A., 411, 417
SBC, 375
Scheines, R., 48
Scholastic Achievement Tests, 392
Scholes, M., 265, 276, 304
Schon, D.A., 346, 365, 398
School of Industrial and Systems Engineering, 351, 393, 394
Schumpeter, J., 380, 404
scientific knowledge, 342
SCM, see Supply Chain Management
search
  database, 205
Search Technology, 16, 40, 100, 103, 147, 169, 237, 351, 389, 395
security, 18
selection, 171
semiconductors, 324
Senge, P.M., 315, 349, 365, 404
sense of urgency, 415
sensitivity analyses, 270, 277
serendipity, 1, 15, 73, 105, 189, 208, 407, 408
  crossroads, 409, 410, 413, 414
  path, 1, 409
  role, 3
service, 229, 311, 407
  functions, 231, 241
  operations, 72
  policy, 81
  processes, 54
  sciences, 52
  systems, 72
Sewell, D.R., 104, 108
SFA, see Sales Force Automation
Shalunov, S., 150
Shannon, C., 51
share price, 358
shareholder value, 379
Sheridan, T.B., 13, 20, 22, 38, 39, 49
Shimura, M., 154
ships, 78
shop floor, 352
Sides, W.H., 172, 186, 190
Siemens, 377, 379, 380
Silent War, 319
Silicon Graphics, 105
Silicon Valley, 411
silver bullet, 313, 387
Simmons College, 51
Simon, H.A., 10, 20, 32, 34, 48, 49, 51, 77, 108, 316, 349, 404
simulation
  context specific, 118
  context-free, 111
  organization, 343
Singapore, 411, 415
Singapore Ministry of Defense, 266
situation assessment, 100, 316
  example, 327
  issues, 326
  methodology, 316
Situation Assessment Advisor, 214, 317, 326, 328, 344, 389
situations, 311, 316, 407
  commodity trap, 319
  consolidation, 319
  crossing the chasm, 318
  crossover, 318
  evolution, 318
  familiar vs. unfamiliar, 337
  frequent vs. infrequent, 337
  novel, 341
  paradigm lost, 319
  process, 319
  silent war, 319
  steady growth, 319
  vision quest, 318
Slate, M.P., 66, 75
Slovic, P., 48
Slywotsky, A.J., 315, 349, 359, 404
Small Business Innovation and Research, 321
Small, R.L., 172, 182, 195, 196
Smit, H.T.J., 276, 309
Smith, D.K., 43, 48, 356, 400
Smith, J.M., 68, 75
Smithson, C.W., 269, 310
smoothing, 22, 29
Snow, C.P., 404, 413, 417
social dimensions of technology, 158
social network, 354, 365, 371, 390, 391, 394, 395, 415
  strongly connected, 365
  weakly connected, 365
sociotechnical systems, 158
soft landing, 314
software
  documentation, 390
  products, 257, 259
  services, 390
  tools, 179
solutions, 232
  how to improve, 244
sonar systems, 18
South Africa, 170, 416
South Seas, 354
specialization, 413
spiral design, 179, 211
Spirtes, P., 48
spreadsheets, 217
S-Rules, 143, 147
stability, 354
staffing, 55, 171
stakeholders, 5, 72, 159, 189, 229, 304, 316, 387, 408
  multiple, 234
standards, 375
Stanford University, 391, 412
Starbucks, 34, 368
Stassen, H., 34, 92, 174
State of Georgia, 392
state, 358, 387
statistical inference, 354
statistical pattern recognition, 142
steady growth, 319
steam engines, 373
steamboat companies, 322
steamboats, 317, 372, 373
Stephenson, G., 372
Stephenson, S.V., 381, 405
Stevens, G.A., 260, 310
Stewart, T.A., 379, 400, 405
stochastic estimation tasks, 29
Stockton and Darlington Railway, 372
stories, 316
  branching and pruning, 324, 325
  classic life cycle, 315, 320
  death spiral, 315, 322
  false start, 321, 325
  reinventing the company, 323, 325
straight theater, 43
strategic management, 212, 312, 340
  essential challenges, 344
  tools, 214
strategic objectives, 212
strategic planning, 212, 311
  human-centered, 311
strategy, 3, 341
strengths, 357
Su Doku, 55, 78, 118
subdisciplines, 34
submarines, 18
Suburban Library System, 66
success, 3, 322
Sun, 105
Super Glue, 3, 4
supervisory control, 171
supply chain design, 352
supply chains, 52, 341, 396
  management, 352
  restructuring, 370
support
  approaches, 11, 12
  design, 211
  system architecture, 188
supportability, 171
supporting humans
  obstacles to applications, 178
surprises, 216, 249
sustainability, 190
Symbolics, 105
symphony, 43
system, 229, 311, 407
  approach, 51
  design, 169
  engineering, 18, 156, 351
    human aspects, 157
  failure types, 110
  functions, 231, 241
systems engineering and management, 396, 397
tabulator companies, 317, 322, 374
Tactics Planner, 100
TADMUS Program, 40
Taiwan, 415
Tanaka, K., 154
Tanaka, R., 150
task performance, 13
  tracking, 79
TASK, 111, 125, 136, 145
tasks, 77, 155, 313, 362, 407
TCI, 375
teaching hospitals, 412
Team Model Trainer, 343
Team Model Training, 42
teams, 1, 21, 38, 407, 408
  affiliations, 286, 294
  competencies, 356
  design, 210
  mental models, 167
  training, 44
teamwork, 304, 414
technology, 47, 414
  adoption, 302
  deployment, 8
  feasibility, 8
  innovation, 258
  investments, 246
  maturity, 271
  off-the-shelf, 263
  options, 265, 285, 286
  plans, 212
  proprietary, 259
  refinement, 8
  strategies, 258, 311
  transition, 189, 287
Technology Investment Advisor, 214, 270, 271, 272, 312, 344, 389
  case stories, 274
telecom industry, 260
televisions, 324
Tenkasi, R.V., 365, 401, 405
Tennenbaum Institute, 352, 394, 409
Tennenbaum, M., 352
The Netherlands, 35, 186
theorems, 388
theory
  design, 208
  enterprise transformation, 352, 353
  summary, 366
Thomas, B.G.S., 265, 309
Thomassen, L., 276, 305
Thompson, 379, 380
Thomson, 377, 378, 379
threats, 357
time, 313, 315
  delays, 55
  series, 25, 26
  value of money, 263
TNO, 91
Tokyo, Japan, 26
tools, 381
  assessment, 333
  design, 211, 212
  human-centered design, 237, 252
  impacts, 215
  licenses, 237
  managers' desires, 215
  mistakes, 237
  strategic management, 214
Torbert, W.R., 364, 402
Total Quality Management, 329, 353
Toyota Production System, 396
TPS, see Toyota Production System
tradeoffs
  function/feature, 245
TRAIDOFF, 181, 182
training, 14, 94, 145, 155, 171, 344, 407, 416
  FAULT, 145
  mixed-fidelity approach, 146
  simulator, 409
  TASK, 145
  team, 44
  traditional instruction, 145
  vs. aiding tradeoffs, 180
trains, 373
transdisciplinary research, 414
transdisciplinary perspective, 409
transdisciplinary problems, 413
transfer of training, 145
transformation, 351, 396, 407, 415
  contemporary illustrations, 375, 377
  context, 355
  costs, 368
  definition, 353
  ends, 367
  framework, 367
  illustrations, 371
  initiatives, 388
  means, 367
  practice, 389
  processes, 365
  risks, 368
  scope, 367
  theory, 353
transistors, 155
transportation, 371
Trezza, A., 61
Triandis, H.C., 296, 306
Trigeorgis, L., 276, 277, 306, 309, 310
Troubleshooting via Application of Structural Knowledge, see TASK
troubleshooting, 34
T-Rules, 139, 143, 147
Tufts University Libraries, 56, 58
Tufts University, 28, 72, 170
Turing machines, 35
Tushman, M.L., 363, 402
Tversky, A., 26, 48
typewriters, 317, 322, 374
uncertainty, 210, 259, 261, 286, 287
uncertainty principle, 34
undergraduate education, 392
understanding
  change, 328
  levels, 14
UNIVAC, 374
University of Bologna, 412
University of California at Berkeley, 412
University of Illinois at Urbana-Champaign, 16, 40, 60, 67, 72, 100, 104, 145, 147, 166, 170
University of Michigan, 414
University of North Carolina, 412
University of Pretoria, 170
University of Texas, 412
UPS, 368
user-centered design, 5, 229
users
  end, 280
  intermediate, 280
  manuals, 390
  population, 70
U.S. Department of Defense, 368
USS Ticonderoga, 40
USS Vincennes, 40
utility functions, 231
  typical, 234
utility industries, 389
utility theory
  theoretical issue, 235
validation, 8, 37
validity, 231, 243
valuation, 286
  case stories, 267
  example projections, 273
  option-based, 267
value, 158, 222, 258, 303, 313, 328, 351, 358, 387, 395, 407
  assessing, 286, 290, 303
  chain, 162
  characterizing, 285, 286, 303
  competition, 369
  crises, 369
  deficiencies, 353, 360, 368
  deficiency, 390, 391, 393, 394, 415
  economic, 290
  flow, 278, 292
  implication, 222
  information, 222
  knowledge, 222
  managing, 286, 303
  networks, 278, 281, 286, 290, 315
  opportunities, 369
  principles, 286, 298
  proposition, 315, 369, 380
  recognition, 206
  specification, 206
  story, 301
  streams, 278, 279, 286, 290, 296, 315
    dysfunctions, 280
  support, 222
  threats, 369
  total deployed, 282
value-centered enterprise, 301
value-centered R&D, 285, 286, 304
  principles, 285
values, 356
Valusek, J., 179, 195
Van Eekhout, J.M., 91, 92, 108
Van Wouwe, M., 276, 305
venture capital, 5
verbal protocols, 32
verification, 8, 37
viability, 231, 243
Vicente, K.J., 313, 349
Victoria's Secret, 368
Vietnam War, 28
Vietnam, 416
Virginia Beach, 299
virtual reality, 340
Vision Quest, 318
vision, 393
volatility, 269, 283
Volvo, 32
von Neumann, J., 374
Vrooman, H., 61
VW
  1969, 39
Waffle House, 1, 47
waiting, 54
Walden, R.S., 80, 108
Wal-Mart, 368
Walt Disney World, 55
Walter, C., 28
Ward, S.L., 84, 106
Wass, D.L., 348
Watt, J., 372
weaknesses, 357
web-enabled processes, 370
Weiss, T.B., 316, 336, 349, 356, 405
Welke, R., 381, 401
Whiting, R., 315, 349
Wickens, C.D., 171, 196
Wiener, N., 51
Wilhelm, G., 373
Willinger, W., 150
wireless communications, 383
Wohl, J.G., 351, 404
Womack, J.P., 282, 310, 315, 349, 361, 405
Woods, D.D., 417
word processors, 173
work, 4, 351, 358, 360, 387, 407
  activities, 367
  implications, 341
  processes, 354, 357, 361, 369, 371, 391, 395, 396, 415
workflow, 4
workload, 82, 124
  models, 168
World War I, 373
World War II, 18, 83, 169
world
  dynamic, 339
Wright, O., 372
Wu, Q., 381, 398
Xerox, 374
Xerox Palo Alto Research Center, 83
Yale University, 414
Young, B., 381, 401
Young, P., 271, 310
Your Enterprise World, 338
Zack, M.H., 315, 349
Zadeh, L.A., 136, 154
Zenyuh, J.P., 181, 196
Ziman, J., 34, 49
Zsambok, C.E., 191
WILEY SERIES IN SYSTEMS ENGINEERING AND MANAGEMENT
Andrew P. Sage, Editor

ANDREW P. SAGE and JAMES D. PALMER
Software Systems Engineering

WILLIAM B. ROUSE
Design for Success: A Human-Centered Approach to Designing Successful Products and Systems

LEONARD ADELMAN
Evaluating Decision Support and Expert System Technology

ANDREW P. SAGE
Decision Support Systems Engineering

YEFIM FASSER and DONALD BRETTNER
Process Improvement in the Electronics Industry, 2/e

WILLIAM B. ROUSE
Strategies for Innovation

ANDREW P. SAGE
Systems Engineering

HORST TEMPELMEIER and HEINRICH KUHN
Flexible Manufacturing Systems: Decision Support for Design and Operation

WILLIAM B. ROUSE
Catalysts for Change: Concepts and Principles for Enabling Innovation

LIPING FANG, KEITH W. HIPEL, and D. MARC KILGOUR
Interactive Decision Making: The Graph Model for Conflict Resolution

DAVID A. SCHUM
Evidential Foundations of Probabilistic Reasoning

JENS RASMUSSEN, ANNELISE MARK PEJTERSEN, and LEONARD P. GOODSTEIN
Cognitive Systems Engineering

ANDREW P. SAGE
Systems Management for Information Technology and Software Engineering

ALPHONSE CHAPANIS
Human Factors in Systems Engineering

YACOV Y. HAIMES
Risk Modeling, Assessment, and Management, 2/e

DENNIS M. BUEDE
The Engineering Design of Systems: Models and Methods

ANDREW P. SAGE and JAMES E. ARMSTRONG, Jr.
Introduction to Systems Engineering

WILLIAM B. ROUSE
Essential Challenges of Strategic Management

YEFIM FASSER and DONALD BRETTNER
Management for Quality in High-Technology Enterprises

THOMAS B. SHERIDAN
Humans and Automation: System Design and Research Issues

ALEXANDER KOSSIAKOFF and WILLIAM N. SWEET
Systems Engineering Principles and Practice

HAROLD R. BOOHER
Handbook of Human Systems Integration

JEFFREY T. POLLOCK and RALPH HODGSON
Adaptive Information: Improving Business Through Semantic Interoperability, Grid Computing, and Enterprise Integration

ALAN L. PORTER and SCOTT W. CUNNINGHAM
Tech Mining: Exploiting New Technologies for Competitive Advantage

REX BROWN
Rational Choice and Judgment: Decision Analysis for the Decider

WILLIAM B. ROUSE and KENNETH R. BOFF (editors)
Organizational Simulation

HOWARD EISNER
Managing Complex Systems: Thinking Outside the Box

STEVE BELL
Lean Enterprise Systems: Using IT for Continuous Improvement

J. JERRY KAUFMAN and ROY WOODHEAD
Stimulating Innovation in Products and Services: With Function Analysis and Mapping

WILLIAM B. ROUSE
Enterprise Transformation: Understanding and Enabling Fundamental Change

JOHN E. GIBSON and WILLIAM T. SCHERER
How to Do Systems Analysis

WILLIAM F. CHRISTOPHER
Holistic Management: Managing What Matters for Company Success

WILLIAM B. ROUSE
People and Organizations: Explorations of Human-Centered Design