59. Systems, Man, and Cybernetics
Computational Intelligence
Cybernetics
Data Fusion
Decision Analysis
Human Centered Design
Human Machine Systems
Information Technology
Intelligent Transportation Systems
Interface Design
Modeling and Simulation
Petri Nets and Their Applications
Quality Control
System Requirements and Specifications
Systems Analysis
Systems Architecture
Systems Engineering Trends
Systems Reengineering
Wiley Encyclopedia of Electrical and Electronics Engineering
Computational Intelligence
Standard Article
James F. Peters (University of Manitoba, Manitoba, Canada) and Witold Pedrycz (University of Alberta, Alberta, Canada)
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7101
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Introduction; Genetic Algorithms; Fuzzy Sets and Systems; Neural Computing; Rough Sets.
COMPUTATIONAL INTELLIGENCE
INTRODUCTION

There are a number of interpretations of the notion of Computational Intelligence (CI) (1–9). Computationally intelligent systems have been characterized by Bezdek (1, 2) relative to adaptivity, fault-tolerance, speed, and error rates. In its original conception, a number of technologies were identified as constituting the backbone of Computational Intelligence, namely, neural networks (75, 76), genetic algorithms (75, 76), fuzzy sets and fuzzy systems (75, 76), evolutionary programming (75, 76), and artificial life (10, 11). More recently, rough set theory and its extensions to approximate reasoning and real-time decision systems have been considered in the context of computationally intelligent systems (3,6–9,12,13,46,75,76), which has naturally led to a generalization along the lines of Granular Computing. Overall, CI can be regarded as a field of intelligent system design and analysis that dwells upon a well-defined and clearly manifested synergy of genetic, granular, and neural computing. A detailed introduction to the different facets of such a synergy, along with a discussion of various realizations of the synergistic links between CI technologies, is given in (3,4,44,46,65,66,75,76).

GENETIC ALGORITHMS

Genetic algorithms were proposed by Holland as a search mechanism in artificially adaptive populations (14). A genetic algorithm (GA) is a problem-solving method that simulates Darwinian evolutionary processes and naturally occurring genetic operations on chromosomes (15). In nature, a chromosome is a threadlike linear strand of DNA and associated proteins in the nucleus of animal and plant cells. A chromosome carries genes and serves as a vehicle in transmitting hereditary information. A gene is a hereditary unit that occupies a specific location on a chromosome and determines a particular trait in an organism. Genes can undergo mutation (alteration or structural change), and a consequence of the mutation of genes is the creation of a new trait in an organism. In genetic algorithms, the traits of artificial life forms are stored in bit strings that mimic the chromosome strings found in nature. The traits of individuals in a population are represented by a set of evolving chromosomes. A GA transforms a set of chromosomes to obtain the next generation of an evolving population. Such transformations are the result of applying operations such as reproduction based on survival of the fittest and genetic operations such as sexual recombination (also called crossover) and mutation. Each artificial chromosome has an associated fitness, which is measured with a fitness function. The simplest form of fitness function is known as raw fitness, which is some form of performance score (e.g., number of pieces of food found, amount of energy consumed, number of other life forms found). Each chromosome is assigned a probability of reproduction that is proportional to its fitness. In a Darwinian system, natural selection controls evolution (16).
Consider, for example, a collection of artificial life forms with behaviors resembling those of ants. Fitness is measured relative to the total number of pieces of food found and eaten (partially eaten food is counted). Reproduction consists of selecting the fittest individual x and the weakest individual y in a population and replacing y with a copy of x. After reproduction, a population will then contain two copies of the fittest individual. A crossover operation consists of exchanging genetic coding (bit values of one or more genes) between two different chromosomes. The steps in a crossover operation are (1) randomly select a location (also called an interstitial location) between two bits in a chromosome string to form two fragments, (2) select two parents (chromosomes to be crossed), and (3) interchange the chromosome fragments. Because of the complexity of the traits represented by a gene, substrings of bits in a chromosome are used to represent a trait (17). The evolution of a population resulting from the application of genetic operations changes the fitness of individual population members. A principal goal of GAs is to derive a population with optimal fitness. The pioneering works of Holland (15) and L. J. Fogel and others (18) gave birth to the new paradigm of population-driven computing (evolutionary computation), resulting in structural and parametric optimization. Evolutionary programming was introduced by L. J. Fogel in the 1960s (19). The evolution of competing algorithms defines evolutionary programming. Each algorithm operates on a sequence of symbols to produce an output symbol that is likely to maximize the algorithm's performance relative to a well-defined payoff function. Evolutionary programming is the precursor of genetic programming (15). In genetic programming, large populations of computer programs are genetically bred.
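To make the reproduction, crossover, and mutation operations above concrete, the following is a minimal genetic-algorithm sketch in Python. It is an illustration rather than the authors' implementation: the bit-string length, population size, mutation rate, and the toy raw-fitness function (counting 1 bits) are assumptions chosen only for brevity.

import random

# Toy raw fitness: a performance score, here the number of 1 bits in the chromosome.
def fitness(chrom):
    return sum(chrom)

def select(pop):
    # Fitness-proportional (roulette-wheel) selection of one parent.
    total = sum(fitness(c) for c in pop)
    if total == 0:
        return random.choice(pop)
    return random.choices(pop, weights=[fitness(c) for c in pop], k=1)[0]

def crossover(p1, p2):
    # Single-point crossover: pick an interstitial location and swap the fragments.
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate=0.01):
    # Flip each bit with a small probability.
    return [(1 - b) if random.random() < rate else b for b in chrom]

def evolve(pop_size=20, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            c1, c2 = crossover(select(pop), select(pop))
            next_pop += [mutate(c1), mutate(c2)]
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))

Selection here is fitness-proportional, matching the statement above that each chromosome is assigned a probability of reproduction proportional to its fitness.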
FUZZY SETS AND SYSTEMS

Fuzzy systems (models) are immediate constructs that result from a description of real-world systems (say, social, economic, ecological, engineering, or biological) in terms of information granules (fuzzy sets) and relationships between them (20). The concept of a fuzzy set, introduced by Zadeh in 1965 (21, 22), becomes of paramount relevance when formalizing a notion of partial membership of an element. Fuzzy sets are distinguished from the fundamental notion of a set (also called a crisp set) by the fact that their boundaries are formed by elements whose degrees of belongingness are allowed to assume numeric values in the interval [0, 1]. Let us recall that the characteristic function for a set X returns a Boolean value {0, 1} indicating whether an element x is in X or is excluded from it. A fuzzy set is non-crisp inasmuch as the characteristic function for a fuzzy set returns a value in [0, 1]. Let U, X, Ã, and x be a universe of objects, a subset of U, a fuzzy set in U, and an individual object x in X, respectively. For a set X, µÃ : X → [0, 1] is a function that determines the degree of membership of an object x in X. A fuzzy set Ã is then defined to be a set of ordered pairs, where Ã = {(x, µÃ(x)) | x ∈ X}. The counterparts of intersection and union of crisp sets are the t-norm and s-norm operators of fuzzy set theory. For the intersection of fuzzy sets, the min operator was suggested by Zadeh (29); it belongs to a class of intersection operators (min, product, bold intersection) known as triangular norms or t-norms. A t-norm is a mapping t : [0, 1]² → [0, 1]. The s-norm (t-conorm, also called triangular co-norm), a mapping s : [0, 1]² → [0, 1], is commonly used for the union of fuzzy sets. The properties of triangular norms are presented in (82). Fuzzy sets exploit imprecision in conventional systems in an attempt to make system complexity manageable. It has been observed that fuzzy set theory offers a new model of vagueness (13). Many examples of fuzzy systems are given in Pedrycz (23), and in Kruse, Gebhardt, and Klawonn (24).
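The membership and t-norm/s-norm notions above can be illustrated with a few lines of Python. The triangular membership functions and the sample temperature values are assumptions made only for this sketch; the min operator is the Zadeh intersection mentioned in the text, and max is the usual s-norm for union.

def triangular(a, b, c):
    # Triangular membership function mu: R -> [0, 1], peaking at b (illustrative shape).
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Two fuzzy sets over the same universe (temperature in degrees Celsius).
mu_warm = triangular(15.0, 22.0, 30.0)
mu_hot = triangular(25.0, 35.0, 45.0)

# Pointwise intersection (min t-norm) and union (max s-norm).
def intersection(mu_a, mu_b):
    return lambda x: min(mu_a(x), mu_b(x))

def union(mu_a, mu_b):
    return lambda x: max(mu_a(x), mu_b(x))

mu_warm_and_hot = intersection(mu_warm, mu_hot)
mu_warm_or_hot = union(mu_warm, mu_hot)
for t in (20.0, 27.0, 33.0):
    print(t, mu_warm(t), mu_hot(t), mu_warm_and_hot(t), mu_warm_or_hot(t))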
NEURAL COMPUTING

Neural networks offer a powerful and distributed computing architecture equipped with significant learning abilities (predominantly as far as parametric learning is concerned). They help represent highly nonlinear and multivariable relationships between system variables. Starting from the pioneering research of McCulloch and Pitts (25) and others (26, 27), neural networks have undergone a significant metamorphosis and have become an important reservoir of various learning methods (28) as well as an extension of conventional techniques in statistical pattern recognition (29). Artificial Neural Networks (ANNs) were introduced to model features of the human nervous system (25). An artificial neural network is a collection of highly interconnected processing elements called neurons. In ANNs, a neuron is a threshold device which aggregates ("sums") its weighted inputs and applies an activation function to the aggregation to produce a response. The summing part of a neuron in an ANN is called an Adaptive Linear Combiner (ALC) in (30, 31). A McCulloch-Pitts neuron ni is a binary threshold unit with an ALC that computes a weighted sum net = Σ_{j=0}^{n} wj xj. A weight wj associated with input xj represents the strength of the connection of that input to the neuron. Input x0 represents a bias, which can be thought of as an input with weight 1. The response of a neuron can be computed in a number of ways. For example, the response of neuron ni can be computed using sgn(net), where sgn(net) = 1 for net > 0, sgn(net) = 0 for net = 0, and sgn(net) = −1 for net < 0. A neuron comes with adaptive capabilities that can be fully exploited provided an effective procedure is introduced to modify the strengths of connections so that a correct response is obtained for a given input. A good discussion of learning algorithms for various forms of neural networks can be found in Freeman and Skapura (32) and Bishop (29). Various forms of neural networks have been successfully used in system modeling, pattern recognition, robotics, and process control applications (46,50,51,54,75,76).
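A minimal sketch of the weighted-sum-and-threshold neuron described above, using the three-valued sgn activation given in the text. The particular weights and inputs, including the bias handling (an extra input x0 with its own weight), are illustrative assumptions.

def sgn(net):
    # Three-valued sign activation: 1 for net > 0, 0 for net = 0, -1 for net < 0.
    if net > 0:
        return 1
    if net < 0:
        return -1
    return 0

def neuron_response(weights, inputs):
    # Adaptive linear combiner: net = sum of wj * xj, followed by the activation.
    net = sum(w * x for w, x in zip(weights, inputs))
    return sgn(net)

# Illustrative values: x0 serves as the bias input (here fixed at 1) with weight w0.
weights = [-0.5, 0.8, 0.3]   # w0 (bias weight), w1, w2
inputs = [1.0, 0.6, 0.2]     # x0 (bias input), x1, x2
print(neuron_response(weights, inputs))   # net = -0.5 + 0.48 + 0.06 = 0.04 -> 1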
ROUGH SETS

Rough sets, introduced by Pawlak in 1981 (77, 78) and elaborated in (13,33,34,67,68,74,79–81), offer another approach to CI by drawing attention to the importance of set approximation in knowledge discovery and information granulation. Rough set theory also offers a model for approximation of vague concepts (69, 83). In particular, rough set methods provide a means of approximating a set by other sets (33, 34). For computational reasons, a syntactic representation of knowledge is provided by rough sets in the form of data tables. In general, an information system IS is represented by a pair (U, F), where U is a non-empty set of objects and F is a non-empty, countable set of probe functions that are a source of measurements associated with object features. For example, a feature of an image may be color, with probe functions that measure tristimulus values received from three primary color sensors, brightness (luminous flux), hue (dominant wavelength in a mixture of light waves), and saturation (amount of white light mixed with a hue). Each f ∈ F maps an object to some value; in effect, we have f : U → Vf for every f ∈ F. The notions of equivalence and equivalence class are fundamental in rough set theory. A binary relation R ⊆ X × X is an equivalence relation if it is reflexive, symmetric, and transitive. A relation R is reflexive if every object x ∈ X has relation R to itself; that is, we can assert x R x. The symmetric property holds for relation R if xRy implies yRx for every x, y ∈ X. The relation R is transitive if, for every x, y, z ∈ X, xRy and yRz imply xRz. The equivalence class of an object x ∈ X consists of all objects y ∈ X such that xRy. For each B ⊆ A, there is an associated equivalence relation IndA(B) = {(x, x′) | ∀α ∈ B. α(x) = α(x′)} (the indiscernibility relation). If (x, x′) ∈ IndA(B), we say that objects x and x′ are indiscernible from each other relative to the attributes from B. This is a fundamental concept in rough sets. The notation [x]B is a commonly used shorthand that denotes the equivalence class defined by x relative to a feature set B; in effect, [x]B = {y ∈ U | x IndA(B) y}. Further, the partition U/IndA(B) denotes the family of all equivalence classes of the relation IndA(B) on U. Equivalence classes of the indiscernibility relation (called B-granules generated by the set of features B (13)) represent granules of an elementary portion of knowledge we are able to perceive relative to available data. Such a view of knowledge has led to the study of concept approximation (40) and pattern extraction (41). For X ⊆ U, the set X can be approximated only from information contained in B by constructing a B-lower and a B-upper approximation, denoted by B∗X = {x ∈ U | [x]B ⊆ X} and B*X = {x ∈ U | [x]B ∩ X ≠ Ø}, respectively. In other words, the lower approximation B∗X of a set X is a collection of objects that can be classified with full certainty as members of X using the knowledge represented by the features in B. By contrast, the upper approximation B*X of a set X is a collection of objects representing both certain and possibly uncertain knowledge. In the case where B∗X is a proper subset of B*X, the objects in X cannot be classified with certainty, and the set X is rough. It has recently been observed by Pawlak (13) that this is exactly the idea of vagueness proposed by Frege (41). That is, the vagueness of a set stems from its borderline region. The size of the difference between the lower and upper approximations of a set (i.e., its boundary region) provides a basis for the "roughness" of an approximation. This is important because vagueness is allocated to some regions of what is known as the universe of discourse (space) rather than to the whole space, as encountered in fuzzy sets.
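The following Python sketch computes B-indiscernibility classes and the B-lower and B-upper approximations defined above for a small, made-up information table. The objects, features, and target set X are illustrative assumptions; a nonempty boundary (upper minus lower) indicates that X is rough with respect to B.

from collections import defaultdict

# A toy information table: objects described by the features (attributes) in B.
table = {
    "o1": {"color": "red",   "size": "small"},
    "o2": {"color": "red",   "size": "small"},
    "o3": {"color": "blue",  "size": "large"},
    "o4": {"color": "blue",  "size": "large"},
    "o5": {"color": "green", "size": "small"},
}
B = ("color", "size")

def partition(table, B):
    # Equivalence classes [x]_B of the indiscernibility relation Ind(B).
    classes = defaultdict(set)
    for obj, feats in table.items():
        classes[tuple(feats[a] for a in B)].add(obj)
    return list(classes.values())

def approximations(table, B, X):
    lower, upper = set(), set()
    for eq_class in partition(table, B):
        if eq_class <= X:     # class contained in X: certainly in X
            lower |= eq_class
        if eq_class & X:      # class overlaps X: possibly in X
            upper |= eq_class
    return lower, upper

X = {"o1", "o3", "o5"}        # target concept to approximate
lower, upper = approximations(table, B, X)
print("lower:", sorted(lower))
print("upper:", sorted(upper))
print("boundary:", sorted(upper - lower))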
The study of what it means to be a part of provides a basis for what is known as mereology, introduced by Lesniewski in 1927 (36). More recently, the study of what it means to be a part of to a degree has led to a calculus of granules (8,37–39,71,73). In effect, granular computing allows us to quantify uncertainty and take advantage of uncertainty rather than blindly discarding it. Approximation spaces, introduced by Pawlak (77), elaborated in (33,34,66,69,70–73), and applied in (6–8,40,46,59,64), serve as a formal counterpart of our perception ability or observation (69), and provide a framework for approximate reasoning about vague concepts. In its simplest form, an approximation space is any pair (U, R), where U is a non-empty set of objects (called a universe of discourse) and R is an equivalence relation on U (called an indiscernibility relation). Equivalence classes of an indiscernibility relation are called elementary sets (or information granules) determined by R. Given an approximation space S = (U, R), a subset X of U is definable if it can be represented as the union of some of the elementary sets determined by R. It was originally observed that not all subsets of U are definable in S (69). Given a non-definable subset X of U, our observation restricted by R causes X to be perceived as a vague object. The upper approximation B*X is the least definable subset of U containing X, and the lower approximation B∗X is the greatest definable subset of U contained in X. Fuzzy set theory and rough set theory, taken singly and in combination, pave the way for a variety of approximate reasoning systems and applications representing a synergy of technologies from computational intelligence. This synergy can be found, for example, in recent work on the relation between fuzzy sets and rough sets (13,35,46,60,65), rough mereology (37–39,65,66), rough control (42, 43), fuzzy-rough-evolutionary control (44), machine learning (34,45,59), fuzzy neurocomputing (3), rough neurocomputing (46), diagnostic systems (34, 47), multi-agent systems (8,9,48), real-time decision-making (12, 49), robotics and unmanned vehicles (50–53), signal analysis (55), and software engineering (4,55–58).
BIBLIOGRAPHY

1. J. C. Bezdek, On the relationship between neural networks, pattern recognition and intelligence, Int. J. Approximate Reasoning, 6, 1992, 85–107.
2. J. C. Bezdek, What is Computational Intelligence? In: J. Zurada, R. Marks, C. Robinson (Eds.), Computational Intelligence: Imitating Life, Piscataway, IEEE Press, 1994, 1–12.
3. W. Pedrycz, Computational Intelligence: An Introduction. Boca Raton, FL: CRC Press, 1998.
4. W. Pedrycz, J. F. Peters (Eds.), Computational Intelligence in Software Engineering, Advances in Fuzzy Systems—Applications and Theory, vol. 16. Singapore: World Scientific, 1998.
5. D. Poole, A. Mackworth, R. Goebel, Computational Intelligence: A Logical Approach. Oxford: Oxford University Press, 1998.
6. N. Cercone, A. Skowron, N. Zhong (Eds.), Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing Special Issue. Computational Intelligence: An International Journal, vol. 17, no. 3, 2001, 399–603.
7. A. Skowron, S. K. Pal (Eds.), Rough Sets, Pattern Recognition and Data Mining Special Issue. Pattern Recognition Letters, vol. 24, no. 6, 2003, 829–933.
8. A. Skowron, Toward intelligent systems: Calculi of information granules. In: T. Terano, T. Nishida, A. Namatane, S. Tsumoto, Y. Ohsawa, T. Washio (Eds.), New Frontiers in Artificial Intelligence, Lecture Notes in Artificial Intelligence 2253. Berlin: Springer-Verlag, 2001, 28–39.
9. J. F. Peters, A. Skowron, J. Stepaniuk, S. Ramanna, Towards an ontology of approximate reason, Fundamenta Informaticae, vol. 51, nos. 1–2, 2002, 157–173.
10. R. Marks, Intelligence: Computational versus Artificial, IEEE Trans. on Neural Networks, 4, 1993, 737–739.
11. D. Fogel, Review of "Computational Intelligence: Imitating Life", IEEE Trans. on Neural Networks, 6, 1995, 1562–1565.
12. J. F. Peters, Time and Clock Information Systems: Concepts and Roughly Fuzzy Petri Net Models. In: J. Kacprzyk (Ed.), Knowledge Discovery and Rough Sets. Berlin: Physica Verlag, a division of Springer Verlag, 1998.
13. Z. Pawlak, A. Skowron, Rudiments of rough sets, Information Sciences, 177, 2006, 3–27. See also J. F. Peters, A. Skowron, Zdzislaw Pawlak life and work (1926–2006), Information Sciences, 177, 1–2; Z. Pawlak, A. Skowron, Rough sets: Some extensions, Information Sciences, 177, 28–40; and Z. Pawlak, A. Skowron, Rough sets and Boolean reasoning, Information Sciences, 177, 41–73.
14. J. H. Holland, Adaptive plans optimal for payoff-only environments, Proc. of the Second Hawaii Int. Conf. on System Sciences, 1969, 917–920.
15. J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press, 1993.
16. C. Darwin, On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray, 1859.
17. L. Chambers, Practical Handbook of Genetic Algorithms, vol. I. Boca Raton, FL: CRC Press, 1995.
18. L. J. Fogel, A. J. Owens, M. J. Walsh, Artificial Intelligence through Simulated Evolution, Chichester, J. Wiley, 1966.
19. L. J. Fogel, On the organization of the intellect. Ph.D. diss., UCLA, 1964.
20. R. R. Yager and D. P. Filev, Essentials of Fuzzy Modeling and Control. NY: John Wiley & Sons, Inc., 1994.
21. L. A. Zadeh, Fuzzy sets, Information and Control, 8, 1965, 338–353.
22. L. A. Zadeh, Outline of a new approach to the analysis of complex systems and decision processes, IEEE Trans. on Systems, Man, and Cybernetics, 2, 1973, 28–44.
23. W. Pedrycz, Fuzzy Control and Fuzzy Systems, NY: John Wiley & Sons, Inc., 1993.
24. R. Kruse, J. Gebhardt, F. Klawonn, Foundations of Fuzzy Systems. NY: John Wiley & Sons, Inc., 1994.
25. W. S. McCulloch, W. Pitts, A logical calculus of ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 5, 1943, 115–133.
26. F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington: Spartan Press, 1961.
27. M. Minsky, S. Papert, Perceptrons: An Introduction to Computational Geometry, Cambridge: MIT Press, 1969.
28. E. Fiesler, R. Beale (Eds.), Handbook on Neural Computation. UK: Institute of Physics Publishing and Oxford University Press, 1997. 29. C. M. Bishop, Neural Networks for Pattern Recognition. Oxford: Oxford University Press, 1995. 30. B. Widrow, M. E. Hoff, Adaptive switching circuits, Proc. IRE WESCON Convention Record, Part 4, 1960, 96–104. 31. B. Widrow, Generalization and information storage in networks of adaline “neurons”. In M. C. Yovits, G. T. Jacobi, G. D. Goldstein (Eds.), Self-Organizing Systems. Washington, Spartan, 1962. 32. J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications and Programming Techniques. Reading, MA, Addison-Wesley, 1991. 33. Z. Pawlak, Rough sets, Int. J. of Information and Computer Sciences, vol. 11, no. 5, 1982, 341–356, 1982 34. Z. Pawlak, Rough Sets. Theoretical Aspects of Reasoning about Data, Dordrecht, Kluwer Academic Publishers, 1991. 35. W. Pedrycz, Shadowed sets: Representing and processing fuzzy sets, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, 28/1, Feb. 1998, 103–108. 36. S. Lesniewski, O podstawach matematyki (in Polish), Przeglad Filozoficzny, vol. 30, 164–206, vol. 31, 261–291, vol. 32, 60–101, and vol. 33, 142–170, 1927. 37. L. Polkowski and A. Skowron, Implementing fuzzy containment via rough inclusions: Rough mereological approach to distributed problem solving, Proc. Fifth IEEE Int. Conf. on Fuzzy Systems, vol. 2, New Orleans, Sept. 8–11, 1996, 1147–1153. 38. L. Polkowski, A. Skowron, Rough mereology: A new paradigm for approximate reasoning, International Journal of Approximate Reasoning, vol. 15, no. 4, 1996, 333–365. 39. L. Polkowski, A. Skowron, Rough mereological calculi of granules: A rough set approach to computation, Computational Intelligence: An International Journal, vol. 17, no. 3, 2001, 472–492. 40. Bazan, H. S. Nguyen, A. Skowron, M. Szczuka: A view on rough set concept approximation, In: G. Wang, Q. Liu, Y. Y. Yao, A. Skowron, Proceedings of the Ninth International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing RSFDGrC’2003), Chongqing, China, 2003, LNAI 2639, 181–188. 41. J. Bazan, H. S. Nguyen, J. F. Peters, A. Skowron, M. Szczuka: Rough set approach to pattern extraction from classifiers, Proceedings of the Workshop on Rough Sets in Knowledge Discovery and Soft Computing at ETAPS’2003, April 12–13, 2–3, Warsaw University, electronic version in Electronic Notes in Computer Science, Elsevier, 20–29. G. Frege, Grundlagen der Arithmetik, 2, Verlag von Herman Pohle, Jena, 1893. 42. T. Munakata, Z. Pawlak, Rough control: Application of rough set theory to control, Proc. Eur. Congr. Fuzzy Intell. Technol. EUFIT’96, 1996, 209–218. 43. J. F. Peters, A. Skowron, Z. Suraj, An application of rough set methods to automatic concurrent control design, Fundamenta Informaticae, 43(1–4), 2000, 269–290. 44. T. Y. Lin, Fuzzy controllers: An integrated approach based on fuzzy logic, rough sets, and evolutionary computing. In: T. Y. Lin, N. Cercone (Eds.), Rough Sets and Data Mining: Analysis for Imprecise Data. Norwell, MA, Kluwer Academic Publishers, 1997, 109–122. 45. J. Grzymala-Busse, S. Y. Sedelow, W. A. Sedelow, Machine learning & knowledge acquisition, rough sets, and the English semantic code. In: T. Y. Lin, N. Cercone (Eds.), Rough
Sets and Data Mining: Analysis for Imprecise Data. Norwell, MA, Kluwer Academic Publishers, 1997, 91–108.
46. S. K. Pal, L. Polkowski, A. Skowron (Eds.), Rough-Neuro Computing: Techniques for Computing with Words. Berlin: Springer-Verlag, 2003.
47. R. Hashemi, B. Pearce, R. Arani, W. Hinson, M. Paule, A fusion of rough sets, modified rough sets, and genetic algorithms for hybrid diagnostic systems. In: T. Y. Lin, N. Cercone (Eds.), Rough Sets and Data Mining: Analysis for Imprecise Data. Norwell, MA, Kluwer Academic Publishers, 1997, 149–176.
48. R. Ras, Resolving queries through cooperation in multi-agent systems. In: T. Y. Lin, N. Cercone (Eds.), Rough Sets and Data Mining: Analysis for Imprecise Data. Norwell, MA, Kluwer Academic Publishers, 1997, 239–258.
49. A. Skowron, Z. Suraj, A parallel algorithm for real-time decision making: a rough set approach. J. of Intelligent Systems, vol. 7, 1996, 5–28.
50. J. F. Peters, T. C. Ahn, M. Borkowski, V. Degtyaryov, S. Ramanna, Line-crawling robot navigation: A rough neurocomputing approach. In: C. Zhou, D. Maravall, D. Ruan (Eds.), Autonomous Robotic Systems. Berlin: Physica-Verlag, 2003, 141–164.
51. J. F. Peters, T. C. Ahn, M. Borkowski, Object-classification by a line-crawling robot: A rough neurocomputing approach. In: J. J. Alpigini, J. F. Peters, A. Skowron, N. Zhong (Eds.), Rough Sets and Current Trends in Computing, LNAI 2475. Springer-Verlag, Berlin, 2002, 595–601.
52. M. S. Szczuka, N. H. Son, Analysis of image sequences for unmanned aerial vehicles. In: M. Inuiguchi, S. Hirano, S. Tsumoto (Eds.), Rough Set Theory and Granular Computing. Berlin: Springer-Verlag, 2003, 291–300.
53. H. S. Son, A. Skowron, M. Szczuka, Situation identification by unmanned aerial vehicle. In: Proc. of CS&P 2000, Informatik Berichte, Humboldt-Universitat zu Berlin, 2000, 177–188.
54. J. F. Peters, L. Han, S. Ramanna, Rough neural computing in signal analysis, Computational Intelligence, vol. 1, no. 3, 2001, 493–513.
55. J. F. Peters, S. Ramanna, Towards a software change classification system: A rough set approach, Software Quality Journal, vol. 11, no. 2, 2003, 87–120.
56. M. Reformat, W. Pedrycz, N. J. Pizzi, Software quality analysis with the use of computational intelligence, Information and Software Technology, 45, 2003, 405–417.
57. J. F. Peters, S. Ramanna, A rough sets approach to assessing software quality: Concepts and rough Petri net models. In: S. K. Pal and A. Skowron (Eds.), Rough-Fuzzy Hybridization: New Trends in Decision Making. Berlin: Springer-Verlag, 1999, 349–380.
58. W. Pedrycz, L. Han, J. F. Peters, S. Ramanna, R. Zhai, Calibration of software quality: Fuzzy neural and rough neural approaches. Neurocomputing, vol. 36, 2001, 149–170.
59. J. F. Peters, C. Henry, Reinforcement learning with approximation spaces. Fundamenta Informaticae, 71(2–3), 2006, 323–349.
60. W. Pedrycz, Granular computing with shadowed sets. In: D. Slezak, G. Wang, M. Szczuka, I. Duntsch, Y. Yao (Eds.), Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, LNAI 3641. Springer, Berlin, 2005, 23–31.
61. W. Pedrycz, Granular computing in knowledge integration and reuse. In: D. Zhang, T. M. Khoshgoftaar, M.-L. Shyu (Eds.), IEEE Int. Conf. on Information Reuse and Integration. Las Vegas, NV, USA, 15–17 Aug. 2005.
62. W. Pedrycz, G. Succi, Genetic granular classifiers in modeling software quality. Journal of Systems and Software, 76(3), 2005, 277–285.
63. W. Pedrycz, M. Reformat, Genetically optimized logic models. Fuzzy Sets & Systems, 150(2), 2005, 351–371.
64. J. F. Peters, Rough ethology: Towards a biologically-inspired study of collective behavior in intelligent systems with approximation spaces. Transactions on Rough Sets III, LNCS 3400, 2005, 153–174.
65. L. Polkowski, Rough mereology as a link between rough and fuzzy set theories: A survey. Transactions on Rough Sets II, LNCS 3135, 2004, 253–277.
66. L. Polkowski, Rough Sets. Mathematical Foundations. Advances in Soft Computing, Physica-Verlag, Heidelberg, 2002.
67. Z. Pawlak, Some issues on rough sets. Transactions on Rough Sets I, LNCS 3100, 2004, 1–58.
68. Z. Pawlak, A treatise on rough sets. Transactions on Rough Sets IV, LNCS 3700, 2005, 1–17.
69. E. Orlowska, Semantics of Vague Concepts. Applications of Rough Sets. Institute for Computer Science, Polish Academy of Sciences, Report 469, March 1981.
70. A. Skowron, J. Stepaniuk, Generalized approximation spaces. In: T. Y. Lin, A. M. Wildberger (Eds.), Soft Computing, Simulation Councils, San Diego, 1995, 18–21.
71. A. Skowron, J. Stepaniuk, J. F. Peters, R. Swiniarski, Calculi of approximation spaces. Fundamenta Informaticae, 72(1–3), 2006, 363–378.
72. A. Skowron, J. Stepaniuk, Tolerance approximation spaces. Fundamenta Informaticae, 27(2–3), 1996, 245–253.
73. A. Skowron, R. Swiniarski, P. Synak, Approximation spaces and information granulation. Transactions on Rough Sets III, LNCS 3400, 2005, 175–189.
74. A. Skowron, J. F. Peters, Rough sets: Trends and challenges. In: G. Wang, Q. Liu, Y. Yao, A. Skowron (Eds.), Proceedings 9th Int. Conf. on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC 2003), LNAI 2639, Springer-Verlag, Berlin, 2003, 25–34.
75. IEEE World Congress on Computational Intelligence. Vancouver, B.C., Canada, 16–21 July 2006.
76. M. H. Hamza (Ed.), Proceedings of the IASTED Int. Conf. on Computational Intelligence. Calgary, AB, Canada, 4–6 July 2005.
77. Z. Pawlak, Classification of Objects by Means of Attributes. Institute for Computer Science, Polish Academy of Sciences, Report 429, March 1981.
78. Z. Pawlak, Rough Sets. Institute for Computer Science, Polish Academy of Sciences, Report 431, 1981.
79. Z. Pawlak, Rough classification. Int. J. of Man-Machine Studies, 20(5), 1984, 127–134.
80. Z. Pawlak, Rough sets and intelligent data analysis, Information Sciences: An International Journal, 147(1–4), 2002, 1–12.
81. Z. Pawlak, Rough sets, decision algorithms and Bayes' theorem, European Journal of Operational Research, 136, 2002, 181–189.
82. E. P. Klement, R. Mesiar, E. Pap, Triangular Norms, Kluwer, Dordrecht, 2000.
83. J. Bazan, A. Skowron, R. Swiniarski, Rough sets and vague concept approximation: From sample approximation to adaptive learning, Transactions on Rough Sets V, LNCS 4100, Springer, Heidelberg, 2006, 39–62.
JAMES F. PETERS, University of Manitoba, Manitoba, Canada
WITOLD PEDRYCZ, University of Alberta, Alberta, Canada
Wiley Encyclopedia of Electrical and Electronics Engineering
Cybernetics
Standard Article
Robert B. Pinter (University of Washington, Seattle, WA), Abdesselem Bouzerdoum (Edith Cowan University, Perth, Australia), Bahram Nabet (Drexel University, Philadelphia, PA)
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7102
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Biological Neurons, or Nerve Cells; Fundamental Mathematical Ideas for Cybernetics; Nonlinear Nervous System Computation; Cybernetics of a Visual System; Shunting Inhibitory Motion Detection; Electronic Analogs of Cybernetic Models.
CYBERNETICS

The modern definition of cybernetics arose in the study of machines containing feedback and computing subsystems. The second world war and the available technology combined to give a generation of more "intelligent" machines than previously utilized. One of the more important persons in this endeavor was Norbert Wiener, mathematician, electrical engineer, and professor at the Massachusetts Institute of Technology. He had been at MIT since 1920 and was involved in deriving several feedback and communications filtering theories, some of which were classified in the second world war but all subsequently published. Wiener's important theories on nonlinear systems were developed shortly after the second world war. As a result of Professor Wiener's broad and incisive knowledge in many areas of science and engineering beyond mathematics, he proposed a new word, cybernetics, in 1947, to describe a scientific frontier: a parallel and interacting study of intelligent machines and living organisms (1,2). The objectives of such parallel study are to increase understanding of living organisms by mathematically modeling their many systems and subsystems, with an important engineering goal, in many cases, of improved design of machines. An obvious example is the pilot-aircraft feedback control problem. In more recent studies, both parallel and interacting, one may build electromechanical models of aspects of motor physiology, which can then be incorporated in robots, and further apply known sensory response characteristics found in perception and physiology studies to make the robot more adaptive and thus more intelligent. Computational models of the central nervous system can then be used to further this design paradigm in robotics, another branch of cybernetics. The original definition of cybernetics is still that found in many languages today, and it also applies to organized groups of living organisms, such as societies with their political, social, and economic subsystems and their interactions. Computational modeling is also applied to these problems to find dynamic properties that might be utilized in predictions, and in several fields this endeavor is nominally cybernetics. It is possibly this aspect of cybernetics that is related in its etymology most closely to the Greek root Kuber, which is also the root word for government. In a sense, cybernetics was intended also to put government on a scientific and rational basis, and an extensive series of meetings of the "Cybernetics Group" from 1946 to 1953 brought forth consideration of these diverse aspects of cybernetics, including especially the social aspects (3). However, the cybernetics of greatest interest to electrical and electronic engineers is the parallel study of nonlinear feedback and nonlinear signal processing circuits and systems to model the peripheral and central nervous systems of living organisms. Thus, cybernetics can be "modeling the brain" in a very imprecise definition, but still related etymologically to the other, closer Greek root word, Kubernetes, a steersman or navigator. The earliest meetings of the minds on this subject extend back to 1935, and one of the well-known later results was the "Pitts-McCulloch Neuron," the ancestor of much of modern work in the field of neural networks, another descendant field of cybernetics. In some areas of physiology and biophysics the quantitative analysis and modeling of living processes was indeed already on the level that Wiener and colleagues envisioned. The cybernetics meetings included some of these pioneering physiologists, biophysicists, and psychologists. No aspect of cybernetics arose in a vacuum, but the emergent viewpoint was different. It was and is the parallel study of machines and living organisms, a liberal view that includes but is not restricted to precise modeling, data analysis, and design. Further, the assumptions of modeling living organisms must be grounded in some known aspects of physiology or biophysics. Thus there are overlapping areas of cybernetics with many fields: physiology, biophysics, computer science, robotics, artificial neural networks, and vision science, to name a few. Cybernetics simply follows the scientific method, where the theorists and the experimentalists are not necessarily the same people.
Cybernetic models are actually hypotheses, and they flow into experimental science as well as to engineering design. It is never immediately clear how good these models
are from a scientific or engineering standpoint, and thus cybernetics is a continuously growing field. Historically the field of cybernetics grew at MIT, where since 1935 seminars of physiologists, biophysicists, neurologists, electrical engineers, physicists, mathematicians, and psychologists took place continually. This intellectual effort spread to other institutions, such as the California Institute of Technology, the University of California Los Angeles, the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, the University of Southern California, the University of California San Diego, Cambridge University, Northwestern University, the Australian National University, Boston University, the University of Adelaide, the University of Pennsylvania, and Drexel University. The field of cybernetics and its biological aspects are represented in at least 20 modern archival journals and in many conferences sponsored by the Institute of Electrical and Electronics Engineers (IEEE), the International Neural Network Society, the Biophysical Society, the Optical Society of America, the Biomedical Simulations Resource of the University of Southern California, the Society for Neuroscience, and the Biomedical Engineering Society, among others.
BIOLOGICAL NEURONS, OR NERVE CELLS

By 1947, a fairly precise but simple picture of how nerve cells computed and signaled their outputs had been assembled. It was already clear that nerve cells did not operate as "all-or-none" devices except in their long-distance impulse transmission along axons. The impulse is essentially a solitary wave, propagated without loss by means of active ionic processes involving sodium, potassium, and calcium. The passive resistance-capacitance electrical properties of the membrane enclosing the nerve cell were known, and it was clear that nerve cells could, and did, compute continuous sums and differences and products. Among others, the work of Rushton, Hodgkin, Eccles, and Hartline showed this. These computations by nerve cells were quantitatively described by the continuous mathematics of differential equations, so the "brain as a digital computer" could be seen then as an oversimplification. The brain is modular, so in some aspects, such as the general flow of information or at gating synapses, parallels could be drawn between the brain and the digital computer, but such theory lacks the necessary thorough grounding in neurophysiology.

Passive Nerve Cell Input Computations

Because the treelike structure of the nerve cell, the dendritic tree, is often electrically passive, the number of synaptic contacts on the dendrites and their electrically continuous nature imply a number of states of the biological neuron far in excess of two. But how the immense combinatoric sums are used in actual computation is still not clear. It does not seem possible at this point to utilize these facts to demonstrate clearly how functions of memory or consciousness are mediated or represented. But, on the other hand, a model involving growth and decay, by Darwinian algorithms, of hexagonal regions of cortical activity is a strong beginning to a representation of thought or cognition (4). Studies on more peripheral and sensory neurophysiology have led to somewhat deeper levels of understanding, in knowing exactly how environmental information is encoded and transmitted. This has often been
accomplished by neurophysiological studies on lower species such as insects.

FUNDAMENTAL MATHEMATICAL IDEAS FOR CYBERNETICS

Modern control system theory and communication theory in electrical engineering form a good basis for mathematical descriptions in cybernetics. An overwhelming need in this work is the ability to include the effects of nonlinearities, of both the no-memory and the memory type. Professor Wiener's outstanding contribution to this mathematics was first reported in 1949, as a means to analyze and synthesize nonlinear systems. From the viewpoint of perturbation theory, a linear approximation will usually be a satisfactory beginning for analysis around the equilibrium state. But biological systems will always possess significant nonlinearities, so nonlinear theory is essential. The forms of mathematical descriptions are typically differential equations, or integral operators on a known input, or a mixture of both. Under a very general set of assumptions, the Volterra integral operator (functional operator) series can be derived as a solution to a given nonlinear differential equation (5). This can be considered a polynomial inversion of the differential operators. The uniform convergence of this solution under a wide category of conditions has been proven. In general, however, useful system identification methods apply only to the kernels of the integral operators, in the orthogonalized form derived by Wiener (6,7). The nonlinear differential equation does not admit a general, direct, and useful method of identification of its parameters and functions.

NONLINEAR NERVOUS SYSTEM COMPUTATION

Nonlinear functions are common in the transduction of information in the nervous system. Threshold, saturation, compression, adaptation of gain, light and dark adaptation, automatic gain and bandwidth control, and directional sequence dependence are the most evident among the many discovered. The synapse in the nervous system is substantially more complex than a simple sum or difference operator. The synapse has temporal dynamics and significant nonlinear properties and can compute products and quotients. For example, the two variables at a point of connection of a synapse to a postsynaptic nerve membrane, the postsynaptic conductance and the postsynaptic voltage, are multiplied by the Ohm's law property of nerve membranes to produce a product of presynaptic activity and existing postsynaptic activity. In many cases this becomes dominated by the linear sum, which is a function of the ionic species of the conductance. In others, where the equilibrium potential of the ion is close to the existing membrane potential, the multiplication dominates. The latter is often called "shunting inhibition." In principle, any change of conductance may include a shunting component. The possibility of a distribution of multipliers in the nervous system fits well with the theorem, proven by M. Schetzen (7), that any Volterra operator can be synthesized to a specific accuracy by a finite number of multipliers and linear systems. This provides a close mathematical link to the nervous system. In many physiological studies there is a need to know whether nerve cell connections are positive (excitatory) or negative (inhibitory), and whether they are feedback or feedforward.
For example, the visual systems of many living organisms are capable of good velocity estimation, and to know how this is accomplished requires knowledge of the polarities and points of connection of synapses. Only a limited analysis of this problem is possible using neuroanatomical methods. However, given a sufficiently complex nonlinear theory, such as the Volterra series, capable of being reduced to multipliers and linear blocks, a parallel, cybernetic array of computational and physiological experiments may give some aid to understanding how the nervous system accomplishes the computations it is making. First one constructs such a model, subjects it to computation, and then alters the model to improve the fit to the data. In the course of this synthesis by iterated analysis, further experimental tests may be suggested.

Volterra and Wiener Series

One of the most important cybernetic developments by Wiener was to bring the mathematics of systems analysis to a more usable form for both identifying and characterizing the nonlinear system. Applications of this work have been extensive in electrical and electronic engineering (6–10). First, the Volterra series is a generalization of linear convolution to multiple convolution integrals, over instants of time and space, that encompasses the integral-functional nature of system nonlinearities. The conditions for convergence are not severe, consisting only of the requirements that the system have neither infinite memory nor output step discontinuities. Wiener took the Volterra series, orthogonalized it, and further derived a set of functions to express the kernels. The time-dependent behavior of the kernels of the integrals is expressed by Laguerre function impulse responses, with Hermite polynomials and multipliers to combine the Laguerre functions into the kernels themselves. Gaussian distributed white-noise input to the system is needed to define, or find, the parameters by expectation operations. The most important further development was by Y. W. Lee and M. Schetzen, who showed that a nonparametrized Wiener kernel could be derived by crosscorrelating the input with the output of the system. This method, called the Lee-Schetzen algorithm, and its many variations and improvements are the bases by which most modern applications of the Volterra-Wiener theory are made (7). The sum-of-sinusoids and M-sequence methods have found considerable recent usage (8,9) in improving the signal-to-noise ratio of the kernel estimates. Because the orthogonalization of the Volterra series by the Gram-Schmidt procedure yields the Wiener series, the Volterra kernels can be calculated from the Wiener kernels, and vice versa. Usually the Volterra kernels would be derived from a given known or assumed system structure, and the Wiener kernels from the input-output data by the Lee-Schetzen algorithm with other improvements. The M-sequence, for example, tends to improve the signal-to-noise ratio of the kernel estimates and shorten the required record lengths (8,9). The Volterra-Wiener theory and method is the most general for characterizing and identifying smooth, nonoscillating nonlinear systems with finite memory. The Volterra-Wiener theory also generates a procedure for finding the inverse of a nonlinear system to any given degree. This is essentially how the differential equation is solved, by assuming a Volterra series solution and applying perturbation theory (5,7,10).
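As an illustration of the cross-correlation idea behind the Lee-Schetzen algorithm, the sketch below estimates the first-order Wiener kernel of an unknown system driven by Gaussian white noise as h1(τ) ≈ E[y(t) x(t − τ)] / P, where P is the input noise power. The test system (a leaky integrator followed by a weak quadratic term), the record length, and the lag range are assumptions made only for this example, not parameters from the original work.

import numpy as np

rng = np.random.default_rng(0)

# "Unknown" test system (illustrative): a first-order low-pass filter
# followed by a mild static nonlinearity.
def system(x, alpha=0.9):
    y = np.zeros_like(x)
    state = 0.0
    for n, xn in enumerate(x):
        state = alpha * state + (1.0 - alpha) * xn   # leaky integrator
        y[n] = state + 0.1 * state**2                # weak second-order term
    return y

# Gaussian white-noise probe input.
N = 200_000
sigma = 1.0
x = rng.normal(0.0, sigma, N)
y = system(x)

# First-order Wiener kernel by cross-correlation (Lee-Schetzen idea):
# h1[m] ~= E[ y[n] * x[n - m] ] / sigma**2, for lags m = 0..M-1.
M = 50
h1 = np.array([np.mean(y[m:] * x[:N - m]) for m in range(M)]) / sigma**2

print(h1[:5])   # close to the impulse response of the linear part, (1 - alpha) * alpha**m

Because the probe is zero-mean Gaussian, the even-order (here quadratic) part of the test system contributes no bias to this first-order estimate; higher-order kernels would be estimated with higher-order cross-correlations in the same spirit.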
Further, the inverse of the nonlinear part aids the analysis and design of nonlinear feedback systems. In general, an iterative procedure for the analysis of nonlinear systems can be set up in the following way. An unknown nonlinear system can be subjected to appropriate noise inputs and the Wiener kernels identified by the crosscorrelation or another algorithm. From some minimal knowledge of the structure of the system, diagrams of linear operators and multipliers can be developed that then give the Volterra kernels, from which the corresponding Wiener kernels can be calculated. These can be compared, and modifications can be made to the assumed structures to give a better fit, in some sense. Therefore, this represents a nonlinear input-output systems analysis with identification experiments and iterative computational experiments to synthesize a system structural model.

CYBERNETICS OF A VISUAL SYSTEM

A wide-ranging study of biology, neuroanatomy, neurophysiology, and the mathematics of systems is required to make a meaningful cybernetic model of only a part of an organism. Since the number of nerve cells is smaller in insects than in vertebrates, and insect behavior is perhaps more stereotyped, insect vision has received considerable study in a cybernetic manner. The Max Planck Institute for Biological Cybernetics in Tübingen, Germany, was founded on this kind of work, which always views the animal in a feedback loop, with its predominating visual sensing of position, velocity, and acceleration paramount in the nervous system. Indeed, the majority of nerve cells in the brains of insects respond to visual stimuli. Of these, perhaps velocity is the most important. Some relative velocity between organism and environment is necessary for evolution, development, and learning (12). Velocity can be considered a most basic biological variable. Thus the more detailed cybernetics in the following subsection concentrates on the insect's visual system and its transduction of the relative velocity or motion of the organism and environment.

Motion Detection System in Insects

The primary visual system of insects has a highly regular structure, dominated by a retinotopic organization. It consists of a pair of multifaceted eyes known as the compound eyes, two optic lobes, one on each side of the head, and the tracts and projection centers of the visual interneurons in the protocerebrum [see Strausfeld (12) for more details]. In flies, each compound eye is composed of approximately 3000 to 4000 ommatidia (tiny eyes). Each ommatidium is a functional unit comprising a lenslet and a retinula, containing eight receptor or retinular cells labeled R1–R8. The optic lobes convey information from the compound eyes to the brain. They each comprise three retinotopically connected visual ganglia, commonly known (from the periphery inward) as the lamina, the medulla, and the lobula, or lobula complex in some insect orders. In Diptera (true flies) the lobula complex is divided into two parts: an anterior part, the lobula, and a posterior part, the lobula plate. The synaptic neuropils in the visual ganglia are strictly organized into columns and strata. Both the lamina and medulla are composed of structurally identical parallel synaptic compartments, or columns, that exactly match in number the ommatidia in the retina. However, the retinotopic periodicity
of the lamina and medulla is coarsened by the columnar structure of the lobula: There is only one lobular column for every six medullary ones. Each column in the lamina, known as an optic cartridge, receives inputs from a group of six photoreceptors (R1–R6) that share the same visual axis as the overlaying ommatidium, and projects outputs to the medulla column lying directly beneath it. Each lamina cartridge houses six relay cells, the most prominent of which are the large monopolar cells (LMCs) L1 and L2. These two cells form the central elements in every optic cartridge. They receive the majority of the photoreceptor synapses and project retinotopically to the medulla. They are considered a major channel for relaying information about the intensity received by a single sampling station from the retina to the medulla. (For more comprehensive reviews of the anatomical structure and function of the lamina pathways, see, e.g., pp. 186–212 and 317– 359 in Ref. 13 and pp. 457–522 in Ref. 14.) The medulla has the most complex anatomical structure of any neuropil in the optic lobe and is characterized by an extensive network of lateral connections (see pp. 317–359 in Ref. 13 and pp. 428–429 in Ref.10 for a review). It contains a variety of functionally different units ranging from simple contrast detectors to directional and nondirectional motion sensitive neurons (pp. 377–390 in Ref. 10). Although little is known about the synaptic interconnections within the medulla, two major retinotopic projection modes, directly involving laminar units, have been recognized: one involved in color coding, the other in motion information processing. In the first, wide-field transmedullary neurons get inputs from a laminar cell L3 and the receptor pair R7/R8, which make no contacts in the lamina, and output to a large variety of retinotopic columnar neurons in the lobula. In contrast, the second projection mode involves small-field medullary relays: they derive their inputs from R1–R6 via the LMCs and synapse onto two bushy cells, T4 and T5, which provide inputs to wide-field color blind motion-sensitive neurons in the lobula plate. This suggests that at the level of the lamina there is already segregation of retinotopic projections to the lobula and lobula plate. The medulla is the most peripheral structure in which movement detection takes place. However, the motion computation center in flies appears to be the lobula plate, the posterior part of the third visual ganglion. The lobula plate houses about 50 identifiable neurons, all of which are directionally selective movement detecting (DSMD) neurons and appear to form part of the optomotor control system of the insect. Most of these cells are wide-field DSMD neurons that seem to share a common network of presynaptic elements derived from the medulla. This group of DSMD neurons comprises several classes of tangential cells that respond to whole-field horizontal or vertical motion (pp. 317–359 and 391–424 in Ref. 13 and pp. 483–559 in Ref. 14). They receive both excitatory and inhibitory inputs from large retinotopic arrays of smallfield elementary movement detectors (EMDs), which possess opposite preferred directions. Figure 1 illustrates the basic functional structure of a wide-field DSMD neuron. It is not yet known whether these small-field EMDs reside in the medulla, lobula, or lobula plate. 
Nonetheless, it is widely believed that they operate on the principle of a nonlinear asymmetric interaction of signals derived from adjacent cartridges of the ommatidial lattice [see, e.g., Kirschfeld (15), pp. 360– 390 in Ref. 13, and pp. 523–559 in Ref. 14].
Figure 1. Schematic representation of a DSMD neuron. The DSMD neuron receives excitatory and inhibitory signals from an array of functionally identical EMDs, which differ only with respect to the orientation of their sampling bases. Each EMD receives two inputs from adjacent lamina cartridges (box L), which are fed by the receptor cells (R).
One wide-field DSMD neuron that has been extensively studied for more than two decades is the giant heterolateral H1 neuron of the fly. It responds to horizontal motion presented to the ipsilateral eye in the forward (regressive) direction, and it is inhibited by motion presented in the backward (progressive) direction. There is only one H1 neuron in each lobula plate. The main role of the H1 neuron appears to be the control of the optomotor torque response. The two bilaterally symmetric H1 cells exert mutual inhibition; thereby each cell is particularly sensitive to either clockwise or anticlockwise rotatory (yaw) motion of the visual field. The EMDs feeding the H1 neuron derive their inputs from the photoreceptors R1–R6. Franceschini and co-workers (pp. 360–390 in Ref. 13) recorded a sequence-dependent response from the H1 neuron by successively stimulating the photoreceptor pair (R1, R6) within a single ommatidium. In particular, they found that the sequence R1 → R6 evoked an excitatory response, whereas the sequence R6 → R1 induced an inhibitory response or no response, in accordance with the preferred and nonpreferred directions of the H1 neuron, respectively.

Elementary Movement Detection

The EMD is the minimum prerequisite for directionally selective detection of motion in the visual field. It is based on the principle of an asymmetrical interaction between two adjacent channels (Fig. 2). The visual field is sampled at two receptor regions, R1 and R2. The signal from one receptor is passed through an "appropriate" time delay, such as a low-pass filter of time constant τ, before interacting with the signal from the adjacent channel. The asymmetry between the two input channels is necessary for the detector to acquire direction selectivity. For if the system were symmetric, the two input channels could be interchanged without altering its response. This would be equivalent to reversing the direction of motion but still obtaining the same response. Therefore, without an asymmetrical interaction, the movement detector loses its ability to respond differentially to motion in opposite directions. There exist mainly two general schemes for the realization of the asymmetrical interaction.
Figure 2. The elementary movement detector. (a) Conjunctive scheme: If the signals from the two adjacent channels arrive simultaneously at C (preferred direction), then a conjunction of excitation is detected signaling motion; whereas if the two signals arrive separately, the unit C remains quiescent. (b) Veto scheme: If the two signals reach V concurrently (null direction), they cancel each other and no motion is signaled. However, if the two signals arrive to V separately (preferred direction), the veto signal is unable to suppress the other signal, which indicates motion.
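The two schemes of Fig. 2 can be made concrete with a small discrete-time sketch. This is an added illustration, not taken from the original simulations: the delay is approximated by a simple sample shift rather than a low-pass filter, the veto is modeled as divisive suppression with an arbitrary gain g, and the flash timing and numerical values are assumptions chosen only to show the directional asymmetry.

```python
import numpy as np

def delayed(x, d):
    """Shift a discrete signal by d samples (a crude stand-in for a low-pass delay)."""
    y = np.zeros_like(x)
    y[d:] = x[:-d]
    return y

def conjunctive_emd(a, b, d=5):
    """Conjunction scheme (Fig. 2a): delayed channel A multiplied by channel B."""
    return delayed(a, d) * b

def veto_emd(a, b, d=5, g=10.0):
    """Veto scheme (Fig. 2b): channel A excites, delayed channel B divisively vetoes."""
    return a / (1.0 + g * delayed(b, d))

n = 80
flash = np.zeros(n); flash[0:5] = 1.0            # a 5-sample flash template

# Preferred sequence: receptor 1 then receptor 2; null sequence: the reverse.
r1_pref, r2_pref = np.roll(flash, 20), np.roll(flash, 25)
r1_null, r2_null = np.roll(flash, 25), np.roll(flash, 20)

for name, emd in [("conjunctive", conjunctive_emd), ("veto", veto_emd)]:
    pref = emd(r1_pref, r2_pref).sum()
    null = emd(r1_null, r2_null).sum()
    print(f"{name:11s}  preferred={pref:6.2f}   null={null:6.2f}")
```

In this toy setting the conjunctive unit responds only when the delayed signal coincides with its neighbour (preferred order), while the veto unit is suppressed when the delayed signal arrives in time to cancel the excitation (null order).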
The best-known conjunctive scheme is the correlation model proposed by Hassenstein and Reichardt in 1956 to account for the characteristics of insect optomotor response (14). In this model, the interaction between adjacent channels is implemented by a multiplication followed by an infinite time-averaging operation (i.e., a correlation). The first veto scheme was proposed by Barlow and Levick in 1965 (16), who discovered that inhibition is the mechanism underlying directional selectivity of ganglion cells in the rabbit retina. They suggested that inhibition is triggered selectively in such a way that at each subunit of the receptive field a delayed inhibitory mechanism vetoes the excitatory re-
sponse in the null direction, but appears too late to cancel the response in the preferred direction. Barlow and Levick demonstrated that directional selectivity of retinal ganglion cells is based on sequence discrimination within small-field synaptic subunits, or EMD. More specifically, they showed that over the whole receptive field, successive stimulation of two subunits close to each other caused a response that depends on whether the order corresponds to motion in the preferred or null direction, but the effect decreased at greater separations. The initial stages of movement detection in insects also appear to be based on sequence discrimination by EMDs. Both behavioral and electrophysiological experiments on flies indicate that movement detection takes place between neighboring points of the sampling lattice of the eye. For example, sequential stimulation confined to pairs of identified photoreceptors in single ommatidia induced optomotor turning reactions in the fly (15) and evoked directional responses in the H1 neuron of the fly (pp. 360–390 in Ref. 13). However, the nature of the mechanism mediating direction selectivity in insects remains unresolved, despite numerous investigations attempting to unlock the mystery. Some scientists believe that it is excitatory, others suggested that it is inhibitory, yet there are others who believe that it is both (see Ref. 17 for a further discussion). It is not the aim here to resolve the conflict; however, in the next section we present a neural network architecture based on the mechanism of shunting inhibition that can account for the response of the H1 neuron.
SHUNTING INHIBITORY MOTION DETECTION

Shunting lateral inhibition is a biophysical process in which the synaptic conductance of inhibitory channels is controlled in a nonlinear manner by voltages of nearby cells or cell subunits. It can be described by a nonlinear ordinary differential equation of the form

$$\frac{de(t)}{dt} = L(t) - a\,e(t)\Big[1 + \sum_j k_j f_j(v_j)\Big] \qquad (1)$$

where e represents the activity of a cell or cell subunit, interpretable as the deviation of the membrane voltage from the resting potential; L(t) ≥ 0 is the external input to the cell; a > 0 is the passive decay constant; k_j ≥ 0 represents the connection strength of the jth inhibitory synapse; v_j is the potential controlling the conductance of the jth synapse; and f_j is the activation function: it is continuous, positive, monotonic increasing for positive arguments and represents the output transfer function that converts the membrane voltage v_j to a firing rate f(v_j). The shunting inhibitory motion detector (SIMD) is a movement detector where the nonlinear interaction at the EMD level is mediated by shunting lateral inhibition. The response of each EMD is described by a pair of coupled ordinary differential equations:

$$\frac{dv(t)}{dt} = L_1(t) - \frac{1}{\tau}\,v(t), \qquad \frac{de(t)}{dt} = L_2(t) - a\,e(t)\big[1 + k f(v)\big] \qquad (2)$$

where e(t) is the EMD output, L_1 and L_2 are the external inputs of the EMD, v is the delayed inhibitory input, and τ is the time constant of the delay filter. Next, we will discuss the functional characteristics of this detector and compare them to those of fly DSMD neurons.

Characteristic Responses of SIMD

In this section we investigate both the transient and steady-state response characteristics of the SIMD and compare them to those of the fly H1 neuron. To conduct an analysis of the SIMD functional properties, an approximation of its response is in order. In general, the system of Eq. (2) is not amenable to an elementary treatment to yield an explicit solution. However, approximate solutions can be obtained if the input signals satisfy certain conditions. Perturbation methods are used to obtain approximate solutions of nonlinear differential equations. For inputs of the form L_i(t) = L_0 + c l_i(t), i = 1, 2, where the contrast |c| < 1, the system of differential equations of Eq. (2) admits a unique solution that is continuous in both t and c. Therefore, e(t) and v(t) can be expressed as

$$v(t) = x_0 + c\,x(t), \qquad e(t) = y_0 + c\,y_1 + c^2 y_2 + \cdots = \sum_{n=0}^{\infty} c^n y_n \qquad (3)$$

Differentiating Eq. (3) and substituting for f(v) = f(x_0 + cx) its Taylor series expansion into Eq. (2) yields the following equations:

$$x_0 = \tau L_0, \qquad \dot{x} = l_1(t) - \frac{1}{\tau}\,x = l_1(t) - b\,x$$
$$y_0 = \frac{L_0}{\alpha}, \quad \text{where } \alpha = a[1 + k f(x_0)]$$
$$\dot{y}_1 = l_2(t) - a k f'(x_0)\,y_0\,x - \alpha\,y_1$$
$$\dot{y}_n = -a k \sum_{j=1}^{n} \frac{f^{(j)}(x_0)}{j!}\,x^j\,y_{n-j} - \alpha\,y_n, \qquad n \ge 2 \qquad (4)$$

The pth order approximation of the EMD response is given by

$$e_p(t) = \sum_{j=0}^{p} c^j y_j(t) \qquad (5)$$

Note that the smaller is c, the more accurate is the approximation. Thus, for low-contrast stimuli one could always get a fairly good approximation to the response by solving the set of linear differential equations in Eq. (4).

Response to Drifting Gratings

Sine-wave gratings are commonly used in vision to evoke the spatial and temporal frequency responses of visual systems. Drifting gratings have been extensively used to study the response of the motion detection system in insects. Let L(s, t) be a drifting sine-wave grating,

$$L(s, t) = L_0 + c L_0 \cos(2\pi f_t t + 2\pi \mu f_s s + \varphi) \qquad (6)$$

where s is the spatial dimension, f_s is the spatial frequency in cycles/deg, f_t is the contrast frequency in hertz, t is time, μ is the direction of motion (μ = −1 for leftward motion and +1 for rightward motion), and φ is the initial phase. The steady-state response of a SIMD to such a drifting sine-wave grating usually oscillates around an average response that depends strongly on the direction, contrast, and spatial and temporal frequency contents of the moving pattern. If only nonlinearities up to second order are considered, then the response of a SIMD consisting of two mirror-symmetric EMDs, sharing the same inputs but having different polarities (one contributing an excitatory response, the other an inhibitory one), is approximated by

$$m_2(t) = 2c\,(y_{E,1} - y_{I,1}) + c^2 (y_{E,2} - y_{I,2}) \qquad (7)$$

where y_{E,j} (j = 1, 2) is the jth response component of the excitatory EMD, and y_{I,j} is the jth response component of the inhibitory EMD. Let M_r = ⟨m_2(t)⟩ denote the time-averaged, or mean steady-state, response of a SIMD caused by second-order nonlinearities. Then it can easily be shown that M_r due to the moving sine-wave grating in Eq. (6) is given by

$$M_r = \frac{a k c^2 L_0^2 f'(x_0)\,(\alpha - b)\,\omega \sin(2\pi \mu f_s \Delta s)}{\alpha\,(b^2 + \omega^2)(\alpha^2 + \omega^2)} \qquad (8)$$

where ω = 2πf_t is the angular frequency and Δs is the interreceptor angle. From Eq. (8), we see that the SIMD mean steady-state response depends on contrast frequency f_t = ω/2π, spatial frequency f_s, mean luminance L_0, and contrast c of the moving grating. Note that M_r is insensitive to contrast reversal (it depends on c²) and is fully directional (i.e., its sign depends on the direction of grating motion μ).

Dependence on Contrast Frequency

The SIMD is a contrast frequency detector, not a velocity detector. The dependence of the mean steady-state response M_r on the contrast frequency, f_t, and angular velocity, f_t/f_s, is depicted in Figs. 3(a) and 3(b), respectively. The curves were obtained from Eq. (8) with parameters a = b = 15 and k = 5 and a linear activation function f(v) = v. The mean steady-state response increases with contrast frequency, or speed, until it reaches a maximum, and then falls off at higher frequencies.
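A minimal numerical sketch of Eq. (2) helps make the shunting interaction concrete. The Euler integration below is an added illustration, not the authors' simulation code: the parameter values a = 50, k = 20, and τ = 40 ms are borrowed from the network simulations described later in this article, the activation is taken as linear, f(v) = v, and the two-flash stimulus and time step are assumptions.

```python
import numpy as np

def simd_emd(L1, L2, dt=1e-3, a=50.0, k=20.0, tau=0.04):
    """Euler integration of Eq. (2): v is the low-pass (delayed) channel-1 signal,
    e is the EMD output, shunted by k*f(v) with a linear activation f(v) = v."""
    v = e = 0.0
    out = np.empty(len(L1))
    for i, (l1, l2) in enumerate(zip(L1, L2)):
        v += dt * (l1 - v / tau)
        e += dt * (l2 - a * e * (1.0 + k * v))
        out[i] = e
    return out

t = np.arange(0.0, 0.6, 1e-3)
flash = lambda t0: ((t >= t0) & (t < t0 + 0.1)).astype(float)   # 100 ms flash at t0

# Sequence A: channel 1 stimulated first, then channel 2; sequence B: the reverse.
resp_A = simd_emd(flash(0.10), flash(0.20))
resp_B = simd_emd(flash(0.20), flash(0.10))
print("integrated EMD output, sequence 1->2:", resp_A.sum() * 1e-3)
print("integrated EMD output, sequence 2->1:", resp_B.sum() * 1e-3)
```

Because the shunting term suppresses the output only while the delayed signal v is still elevated, the two stimulation orders give different integrated responses; which order counts as the preferred direction depends on how the EMD is wired into the excitatory/inhibitory pair of Fig. 1.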
Figure 3. Mean steady-state response computed from Eq. (8) as a function of contrast frequency (a), velocity (b), spatial frequency (c), and mean luminance (d). The curves in (b) are obtained (after being normalized) for three different spatial frequencies: f_s = 0.2 (solid), f_s = 0.05 (dashed), and f_s = 0.01 (dotted). The curve in (c) was obtained by including the effect of contrast attenuation at the receptor level, Eq. (10).
The peak contrast frequency, the frequency of maximum steady-state response, is given by

$$f_t = \left[\frac{-(b^2 + \alpha^2) + \sqrt{(b^2 + \alpha^2)^2 + 12\,b^2\alpha^2}}{24\pi^2}\right]^{1/2} \qquad (9)$$
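The relation between Eqs. (8) and (9) can be checked numerically. The sketch below is an added illustration, not part of the original article: it evaluates the reconstructed Eq. (8) with the parameters quoted for Fig. 3 (a = b = 15, k = 5, linear f so that f'(x0) = 1), takes b = 1/τ as implied by Eq. (4), and compares the location of the maximum with Eq. (9) and the b/2π approximation; L0, c, fs, Δs, and μ are nominal assumed values.

```python
import numpy as np

a, b, k = 15.0, 15.0, 5.0
c, L0, mu = 0.1, 10.0, 1.0          # high-ish mean luminance so alpha >> b roughly holds
fs, ds = 0.05, 1.0                  # spatial frequency (cycles/deg), inter-receptor angle (deg)

x0 = L0 / b                         # x0 = tau * L0 with tau = 1/b
alpha = a * (1.0 + k * x0)          # alpha = a[1 + k f(x0)] for linear f

def Mr(ft):
    """Mean steady-state response, Eq. (8), with f'(x0) = 1."""
    w = 2.0 * np.pi * ft
    num = a * k * c**2 * L0**2 * (alpha - b) * w * np.sin(2*np.pi*mu*fs*ds)
    return num / (alpha * (b**2 + w**2) * (alpha**2 + w**2))

ft = np.logspace(-2, 2, 2000)
peak_numeric = ft[np.argmax(Mr(ft))]
peak_eq9 = np.sqrt((-(b**2 + alpha**2)
                    + np.sqrt((b**2 + alpha**2)**2 + 12*b**2*alpha**2)) / (24*np.pi**2))
print("numerical peak contrast frequency :", round(peak_numeric, 3), "Hz")
print("Eq. (9) prediction                :", round(peak_eq9, 3), "Hz")
print("b / (2*pi) approximation          :", round(b / (2*np.pi), 3), "Hz")
```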
which may be approximated by f_t ≈ b/2π at high mean luminance levels. From Eq. (8), it is evident that the peak contrast frequency does not change with spatial wavelength. However, the optimum velocity to which the system is tuned does change with the spatial frequency (Fig. 3b). The curves in Figs. 3(a) and 3(b) demonstrate the SIMD ability to respond to a broad range of pattern velocities. The response characteristics in these curves are in full agreement with those of tangential cells in the lobula plate. All DSMD neurons of the lobula plate tested so far exhibit similar response characteristics: The response does not depend on pattern velocity, but rather on contrast frequency; the response range covers about 3 log units of contrast frequency (0.01–0.05 Hz to 20–50 Hz); the response amplitudes increase from lower threshold to peak and fall off sharply above the peak; the response peaks are consistently found at 1 Hz to 5 Hz [see, e.g., Eckert (18) and Zaagman et al. (19)].

Dependence on Spatial Frequency

Equation (8) predicts a sinusoidal mean steady-state response with respect to the spatial frequency of a moving grating. Since the period of this sinusoid, with respect to spatial frequency f_s, is equal to 1/Δs (cycles/deg), responses in the range 1/(2Δs) < f_s < 1/Δs are equal and opposite in sign to responses in the range f_s < 1/(2Δs). This is due to the limitations on spatial resolution set by the sampling theorem. That is, the SIMD can best resolve a grating whose spatial wavelength λ is at least twice as long as Δs, the distance separating the two sampling channels (i.e., 2Δs < λ). For a spatial wavelength such that Δs < λ < 2Δs, the sign of sin(2πμf_sΔs) becomes opposite that of μ. Direction selectivity then reverses sign and the detector signals the wrong direction of motion. This phenomenon, known as spatial aliasing, is well known for insect visual systems. Eckert (18) found, by extracellular recordings from the H1 neuron, that when the spatial wavelength of the moving pattern is smaller than twice the interommatidial angle, the response properties with regard to the direction of pattern movement are reversed: Regressive motion causes inhibition and progressive motion causes excitation. However, when Buchner (20) measured behavioral responses of flies, they were not completely periodic; the negative responses measured for Δφ < λ < 2Δφ, where Δφ represents the effective interommatidial angle, were smaller in magnitude than the positive responses measured for λ > 2Δφ. So far we have always assumed that the receptors have a needle-shaped spatial sensitivity distribution (i.e., each receptor samples the visual field at a single point). However, real photoreceptors have instead an approximately Gaussian (bell-shaped) spatial or angular sensitivity distribution. They get their input by spatially integrating the luminance distribution located within their range and hence act as a spatial low-pass filter. The cutoff frequency of the low-pass filter is determined by the width of the spatial distribution at half maximum, or the acceptance angle, Δρ. If a sine-wave grating is
presented to the photoreceptors, its contrast, c, is attenuated by a factor

$$\frac{c_r}{c} = e^{-\frac{\pi^2}{4\ln 2}\,(\Delta\rho/\lambda)^2} = e^{-3.56\,(\Delta\rho f_s)^2} \qquad (10)$$
where c_r is the effective contrast in the receptors (see p. 89 in Ref. 20). Since low-pass filtering severely limits the transfer of high frequencies, we can expect the response to lose its periodicity with respect to spatial frequency. The effect of contrast attenuation on the SIMD mean steady-state response is presented in Fig. 3(c). The curve in this figure has been plotted using a contrast transfer parameter Δρ = 1. This response resembles behavioral responses obtained from flies by Buchner (see Ref. 20 or pp. 561–621 in Ref. 14).

Dependence on Mean Luminance

In addition to the dependence on spatial and temporal frequencies, the SIMD mean steady-state response M_r depends strongly on the mean luminance of the moving pattern. Figure 3(d) depicts M_r as a function of mean luminance L_0. The variations of the curve in this figure agree well with those of the H1 neuron response. They are characterized by slow increase at low levels, saturation at high levels, and a rapid increase spanning about 2 log units of mean luminance at intermediate levels. The range of the response of H1 does also cover about 2 log units of mean luminance from threshold to saturation [see Eckert (18) and pp. 523–559 in Ref. 14]. This is different from the saturation at the photoreceptor level, which spans a range of 5 log units of mean luminance. However, since the saturation phenomenon may occur at all levels of the complex architecture of the fly visual system, we cannot know for sure if the saturation of the DSMD neurons with respect to mean luminance is caused by saturation of EMDs feeding them, as suggested here.

Adaptation of Contrast Sensitivity Function

In spatial vision, sine-wave gratings are frequently used to describe the perceptual spatiotemporal frequency response, which is commonly known as the contrast sensitivity function (CSF). The CSFs of visual systems are obtained by determining the inverse of the threshold contrast (i.e., the contrast sensitivity) at a set of points in the spatial frequency domain. Dvorak et al. (21) have measured at different mean luminance levels the spatial CSF for the H1 DSMD neuron in the fly lobula plate. They found that the form of the CSF varies markedly with mean luminance; in particular, the CSF increases as the mean luminance level of the stimulus is raised. At high mean luminance levels, the CSF peaks at a certain spatial frequency and falls off at higher and lower frequencies. Moreover, the high-frequency roll-off becomes less steep and the peak frequency shifts toward lower frequencies as mean luminance decreases. The adaptation, or change, of CSF upon change of mean luminance level can be accounted for by considering the response of a SIMD to a moving sinusoidal grating and the light adaptation phenomena that occur at the photoreceptor level. Equation (8), with a = b = 15 and k = 10, was used to compute normalized CSFs at different values of L_0. The CSF curves in Fig. 4 were obtained by including the effect of spatial filtering that takes place at the receptor level, that is, by
multiplying the contrast with the term e^(−3.56(Δρ f_s)²). Figure 4 clearly demonstrates that the CSF of the SIMD adapts to mean luminance changes in the same way the CSF of the DSMD neuron (the H1 unit) does. Using an effective contrast transfer parameter (the acceptance angle) Δρ that depends on mean luminance results in pushing the peak frequency of the CSF to a lower value. It is well known that in insect compound eyes the effective contrast transfer parameter, Δρ, increases upon lowering the mean luminance level L_0. This increase of Δρ is due to a mechanism of adaptation to low light levels that results in widening the angle subtended by the photoreceptive waveguides, hence increasing absolute sensitivity by sacrificing spatial acuity (for more details on adaptational mechanisms in compound eyes, see, e.g., pp. 30–73 in Ref. 13 and pp. 391–421 in Ref. 10).

Figure 4. CSF of a SIMD as computed from Eq. (8) for different mean luminance levels: L_0 = 5, 2, 1, 0.5, and 0.25 (from top to bottom).

Figure 5. Response of the neural network model (Fig. 1) to stimulation of a pair of adjacent receptors with a sequence of flashes mimicking motion in the preferred direction. (a) ON-OFF response: The first flash is turned on at t = 100 ms and turned off at t = 200 ms, followed by the second flash, which is turned on at t = 200 ms and turned off at t = 300 ms. (b) ON response: The onset of the first flash (t = 100 ms) is followed by the onset of the second flash (t = 200 ms), and both flashes are turned off at t = 300 ms. (c) OFF response: Both flashes are turned on at t = 0 ms, but the first flash is turned off at t = 100 ms and the second is turned off at t = 200 ms.
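The effect of the receptor attenuation, Eq. (10), on the spatial tuning of Eq. (8) can also be illustrated numerically. The sketch below is an added illustration under assumed values (k = 10 as quoted for Fig. 4, the remaining constants nominal); it only shows how increasing the acceptance angle Δρ pulls the optimum spatial frequency downward, which is the qualitative effect described above.

```python
import numpy as np

# Apply the receptor low-pass attenuation, Eq. (10), to the contrast entering Eq. (8)
# and locate the best spatial frequency for two acceptance angles.
a, b, k, c, L0, mu, ds = 15.0, 15.0, 10.0, 0.1, 1.0, 1.0, 1.0
ft = 2.0                                   # fixed contrast frequency (Hz)
w = 2.0 * np.pi * ft
alpha = a * (1.0 + k * L0 / b)             # linear activation, x0 = L0 / b

def Mr(fs, d_rho):
    atten = np.exp(-3.56 * (d_rho * fs) ** 2)      # Eq. (10): effective contrast / c
    ceff = c * atten
    return (a * k * ceff**2 * L0**2 * (alpha - b) * w * np.sin(2*np.pi*mu*fs*ds)
            / (alpha * (b**2 + w**2) * (alpha**2 + w**2)))

fs = np.linspace(0.01, 0.5, 500)
for d_rho in (1.0, 2.0):                   # bright-adapted vs dim-adapted acceptance angle
    best = fs[np.argmax(Mr(fs, d_rho))]
    print(f"acceptance angle {d_rho:.1f} deg -> peak spatial frequency {best:.3f} cycles/deg")
```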
Figure 6. Response of the neural network model (Fig. 1) to stimulation of a pair of adjacent receptors with a sequence of flashes mimicking motion in the null direction. (a) ON-OFF response: The first flash is turned on at t = 100 ms and turned off at t = 200 ms, followed by the second flash, which is turned on at t = 200 ms and turned off at t = 300 ms. (b) ON response: The onset of the first flash (t = 100 ms) is followed by the onset of the second flash (t = 200 ms); both flashes are turned off at t = 300 ms. (c) OFF response: Both flashes are turned on at t = 0 ms, but the first flash is turned off at t = 100 ms and the second is turned off at t = 200 ms.
Figure 7. Response of the neural network to a jumping edge. At t = 0 the edge, whose orientation (black-white or white-black) is indicated above the plot, appears over the pair of adjacent receptors. After 200 ms, it jumps by one receptor to the right, (a) and (b), or to the left, (c) and (d).
Response to Sequential Flashes and Jumps

In this subsection simulation results are presented that show that a DSMD architecture (Fig. 1) based on the SIMD can account for the recorded responses of the H1 neuron to a variety of moving stimuli. In the simulations, the input signal is passed through a log transformation, representing the transformation at the photoreceptor level. A laminar unit (L-Unit, Fig. 1) is simulated as a high-pass filter (see, e.g., pp. 213–234 in Ref. 13). After the signal is magnified, it is rectified to produce transient responses of ON and OFF nature; there is strong evidence that, in the insect visual system, the motion signals are carried through separate ON and OFF channels [see pp. 360–390 in Ref. 13 and also Horridge and Marcelja (22)]. The outputs of the ON and OFF channels are then low-pass filtered and passed laterally to interact, respectively, with the outputs of the ON and OFF channels in the adjacent
columns. Here, the interaction used at the EMD level is a SIMD, Eq. (2), with parameter values a = 50, τ = 40 ms, and k = 20. The spatial integration of local movement signals at the level of the wide-field DSMD neurons is, in principle, almost linear if the activation of single input channels produces only minute voltage changes at the output sites of the dendrites. If we assume it to be linear, then the effects of the excitatory and inhibitory synaptic contacts of the individual EMDs with the DSMD neurons are, respectively, additive and subtractive. Thus, if we denote by m_Ej(t) the signal mediated by the jth excitatory synapse and by m_Ij(t) the signal mediated by the jth inhibitory synapse, then, to first order, the response of the DSMD neuron is given by

$$R(t) = \sum_j \big[m_{Ej}(t) - m_{Ij}(t)\big] \qquad (11)$$

where the summation operation is carried over all j indices for both ON and OFF channels, and the rates of change of m_Ej(t) and m_Ij(t) are given by Eq. (2). Here the response R(t) represents the actual membrane voltage, or deviation of the membrane voltage from the resting potential, rather than the firing rate of the neuron. To obtain the response as firing rate, the output of the DSMD neuron should be passed through a rectifying nonlinearity.
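The combination of Eq. (2) and Eq. (11) can be sketched directly in code. The listing below is an added illustration rather than the authors' simulation: it omits the logarithmic transduction, the lamina high-pass filtering, and the separate ON/OFF channels described above, drives a small array of shunting EMDs, Eq. (2), with a stepping bar, and sums excitatory minus inhibitory EMD outputs as in Eq. (11). All stimulus details are assumptions, and which sign counts as the preferred direction depends on the wiring convention.

```python
import numpy as np

def simd(L1, L2, dt=1e-3, a=50.0, k=20.0, tau=0.04):
    """One shunting EMD, Eq. (2): output driven by L2 and shunted by the
    low-pass-filtered (delayed) signal of the neighbouring channel L1."""
    v = e = 0.0
    out = np.empty(len(L1))
    for i, (l1, l2) in enumerate(zip(L1, L2)):
        v += dt * (l1 - v / tau)
        e += dt * (l2 - a * e * (1.0 + k * v))
        out[i] = e
    return out

def dsmd_response(stimulus):
    """Eq. (11): sum over the EMD array of excitatory minus inhibitory signals,
    the two being mirror EMDs built on each pair of adjacent channels."""
    n_ch = stimulus.shape[1]
    R = np.zeros(stimulus.shape[0])
    for j in range(n_ch - 1):
        mE = simd(stimulus[:, j], stimulus[:, j + 1])      # EMD of one polarity
        mI = simd(stimulus[:, j + 1], stimulus[:, j])      # mirror EMD (opposite polarity)
        R += mE - mI
    return R

# A bright bar stepping across 8 receptor channels, 50 ms per channel.
dt, n_ch, dwell = 1e-3, 8, 50
T = n_ch * dwell
stim_fwd = np.zeros((T, n_ch))
for j in range(n_ch):
    stim_fwd[j * dwell:(j + 1) * dwell, j] = 1.0
stim_bwd = stim_fwd[:, ::-1]                               # same bar, opposite direction

print("integrated DSMD response, forward :", dsmd_response(stim_fwd).sum() * dt)
print("integrated DSMD response, backward:", dsmd_response(stim_bwd).sum() * dt)
```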
Figure 8. Response of the neural network to a jumping thin bar. At t = 150 ms, a bright or dark bar appears over one receptor and disappears at t = 175 ms. Then at t = 200 ms, the same bar reappears over a neighboring receptor to the right, (a) and (b), or to the left, (c) and (d). The responses are directional regardless of bar contrast.
Response to Sequential Flashing. Simulations of the neural network responses to light flashes showed that stimulating a pair of receptors singly or synchronously does not evoke any response in the DSMD neuron (results not shown). However, stimulating the two receptors with a sequence mimicking motion in the preferred direction induces an excitatory response (Fig. 5). Note that the response of the network is always time locked to the onset or offset of the second flash. Note also that the response to a sequence of nonoverlapping light flashes, with a short time lag between their onsets, consists of two prominent peaks [Fig. 5(a)]; the first peak is caused by the onset sequence [Fig. 5(b)] and the second one by the offset sequence [Fig. 5(c)]. The responses of the network to sequences mimicking motion in the null direction are shown in
Fig. 6. These responses are equal but of opposite polarity to those shown in Fig. 5; they are inhibitory responses. The responses in Figs. 5 and 6 are similar to those recorded by Franceschini and his colleagues from the H1 neuron (pp. 360–390 in Ref. 13).

Response to Jumps. The responses of the neural network to an object (an edge or a bar) jumping over a distance equal to the distance between neighboring receptors are presented in Figs. 7 and 8. Figure 7 shows that, regardless of its orientation, an edge jumping in the preferred direction induces excitation [Figs. 7(a) and 7(b)], while an edge jumping in the null direction causes inhibition of the DSMD neuron [Figs. 7(c) and 7(d)]. The dependence of directionality upon contrast was tested by jumping a thin light or dark bar in the preferred and null directions. The results are presented in Fig. 8, which shows that the preferred direction of the neural network model does not depend on the sign of contrast. In other words, both a bright and a dark bar evoke an excitatory response
when jumping in the preferred direction [Figs. 8(a) and 8(b)], and an inhibitory response when jumping in the null direction [Figs. 8(c) and 8(d)]. Notice, though, that the dark bar elicits a stronger response than the bright bar does. This phenomenon has also been observed in the recorded responses of the H1 neuron of Calliphora stygia by Horridge and Marcelja (22), who also found that the directionality of the H1 neuron does not change with edge orientation or bar contrast (Figs. 2 and 5 in Ref. 22). However, they found that the H1 neuron may lose its directionality by reversing the contrast of the jumping bar. More specifically, when there is a time lag during the jump the H1 neuron preserves its directionality (Fig. 5 in Ref. 22), but when there is no time lag (i.e., the second bar appears, contrast reversed, simultaneously with the disappearance of the first one), the H1 neuron seems to lose its directionality (Fig. 6 in Ref. 22). The responses of our neural network model to bars that reverse contrast at the jump are depicted in Figs. 9 and 10. Despite contrast reversal, the network preserves its directionality when there is a time lag between the disappearance and reappearance of the bar (Fig. 9). Yet the network may lose its directionality if there is no time lag during the jump (Fig. 10). Although the onset of the dark bar followed by the offset of
the light bar constitutes a preferred OFF sequence in Fig. 10(b), the response is inhibitory. The reason for reversal of directionality in Figs. 10(b) and 10(d) is that the sequence caused by the onset and offset of the black and white bars, respectively, evokes only a weak excitatory [Fig. 10(b)] or inhibitory [Fig. 10(d)] response in the DSMD neuron, for the time lag of the sequence is too long (150 ms). This weak response is dominated by an opposite ON response induced by the simultaneous appearance and disappearance of the white and black bars, respectively. Since the ON response caused by the offset of the black bar is not exactly the same as the ON response caused by the onset of the white bar, there is an imbalance between the excitatory and inhibitory signals fed to the DSMD neuron (two adjacent EMDs are inhibited simultaneously, but their two immediate neighbors are not), which gives rise to an excitatory or inhibitory ON response. The opposite happens when reversing the contrast of the bars [Fig. 10(c)].

ELECTRONIC ANALOGS OF CYBERNETIC MODELS

Since cybernetics is the parallel study of living organisms and machines, these parallels can both inspire and guide development of the latter based on the former.
Figure 9. Response of the neural network model to contrast reversal. The stimulus conditions are the same as those of Fig. 8, except for reversal of contrast at the jump (i.e., a black bar becomes white and vice versa). The responses are directional in spite of contrast reversal.
Figure 10. Response of the neural network to contrast reversal. At t = 50 ms, a bar appears over one receptor. Then, at t = 200 ms, the bar reappears, with its contrast reversed, over a neighboring receptor. Here, a jump in the preferred direction can cause a negative response (b), and a jump in the null direction can cause a positive response (d).
Nerve cells' conductance variation leads to simple analog electronic circuits that possess rich signal processing capabilities. Similar to the developments of the previous section, simple components are combined to demonstrate increasingly more complicated processing, such as motion detection. Parallelism, fault tolerance, simplicity of each processing unit, and repeatability of the circuits make hardware implementation of the nerve cell models feasible. One important aim of such implementations is to integrate sensing and processing units on the same substrate, thus increasing the speed of operation and reducing bandwidth necessary to transmit the sensed information to higher levels of processing, much as the retina performs this function for higher processing in the cerebral cortex. The communications bottleneck is avoided by performing much of the signal processing in the sensor itself. Networks of the kind described by Eq. (1) contain multiplicative terms, which arise from the control of conductance in nerve membrane (23). It is natural to utilize control of shunting paths of current in electronic implementations, such as the field-effect transistor (FET) and complementary metal-oxide semiconductor (CMOS) devices, in their "triode" or sub-pinch-off regions. If the gate voltages are received from the outputs of other net-
work devices, a network of feedforward (literally as written in Eq. 1) interconnection is synthesized. If the other network devices, in turn, receive their gate voltages from the outputs of the first set, a feedback interconnection occurs. In general, the design is more direct using the feedforward strategy, but of course the feedback strategy carries with it certain robustness, insensitivity to parameter change, and fault tolerance. A further point is that biological nerve networks may themselves be either feedback or feedforward. In many experimental studies the determination of which alternative is actually "wired" is a goal, but there is no clear dominance of one strategy, nor evidence for optimization based on cost functions in the short term. Perhaps more important than these considerations are the nonlinearities of the networks. These are primarily the multiplicative terms such as those in Eq. (1), which accomplish fractional-power automatic gain control, an approximation to the Weber–Fechner law originally established in the psychophysics of human observers and shown to be in the visual system primarily due to automatic gain control by the photoreceptors (23). The multiplicative terms in electronic implementations clearly optimize the use of the limited dynamic range of devices in comparison with linear implementations (Fig. 6, p. 469 in Ref. 10). But further important adaptation
mediated by these multiplicative terms is found in the temporal and spatial contrast enhancement and "tuning," which changes with mean light level in a systematic and near-optimal manner (23). This is related to the relatively higher amplification of higher temporal and spatial frequencies by the visual systems at higher light levels (as seen in the adaptation of the CSF in Fig. 4), where more photons are available to be processed and not simply counted (23). In the early work of David Marr on early vision, an underlying center-inhibitory surround spatial impulse response was invoked (24). However, these biological, mathematical, and electronic analyses show that such spatial and temporal impulse responses must adapt and change their configuration to become more differentiating in brighter light (25,26). This is simply a strategy to make use of the available optical information in order to see better (27,28).

BIBLIOGRAPHY

1. N. Wiener, Cybernetics, 2nd ed., Cambridge, MA: MIT Press, 1961.
2. N. Wiener, The Human Use of Human Beings: Cybernetics and Society, rev. ed., Garden City, NY: Doubleday, 1954.
3. S. J. Heims, The Cybernetics Group, Cambridge, MA: MIT Press, 1991.
4. W. H. Calvin, The Cerebral Code, Cambridge, MA: MIT Press, 1996.
5. W. J. Rugh, Nonlinear System Theory: The Volterra/Wiener Approach, Baltimore: Johns Hopkins University Press, 1981.
6. P. Z. Marmarelis and V. Z. Marmarelis, Analysis of Physiological Systems, New York: Plenum, 1978.
7. M. Schetzen, The Volterra and Wiener Theories of Nonlinear Systems, Malabar: Krieger, 1989.
8. H.-W. Chen et al., Nonlinear analysis of biological systems using short M-sequences and sparse-stimulation techniques, Ann. Biomed. Eng., 24: 513–536, 1996.
9. M. J. Korenberg and I. W. Hunter, The identification of nonlinear biological systems: Volterra kernel approaches, Ann. Biomed. Eng., 24: 250–268, 1996 [and reprised in 24 (4)].
10. R. B. Pinter and B. Nabet (eds.), Nonlinear Vision: Determination of Neural Receptive Fields, Function, and Networks, Boca Raton, FL: CRC Press, 1992.
11. W. Reichardt, T. Poggio, and K. Hausen, Figure-ground discrimination by relative movement in the visual system of the fly, Part II: Towards the neural circuitry, Biol. Cybern., 46 (Suppl): 1–30, 1983.
12. N. J. Strausfeld, Atlas of an Insect Brain, New York: Springer-Verlag, 1976.
13. D. G. Stavenga and R. C. Hardie (eds.), Facets of Vision, Berlin: Springer-Verlag, 1989.
14. M. A. Ali (ed.), Photoreception and Vision in Invertebrates, New York: Plenum, 1984.
15. K. Kirschfeld, The visual system of Musca: Studies on optics, structure and fusion, in R. Wehner (ed.), Information Processing in the Visual Systems of Arthropods, New York: Springer-Verlag, 1972, pp. 61–74.
16. H. B. Barlow and W. R. Levick, The mechanism of directionally selective units in the rabbit's retina, J. Physiol., 178: 477–504, 1965.
17. A. Bouzerdoum, The elementary movement detection mechanism in insect vision, Proc. R. Soc. Lond., B-339: 375–384, 1993.
18. H. Eckert, Functional properties of the H1-neurone in the third optic ganglion of the blowfly, Phaenicia, J. Comp. Physiol., A-135: 29–39, 1980.
19. W. H. Zaagman, H. A. K. Mastebroek, and J. W. Kuiper, On the correlation model: Performance of a movement detecting neural element in the fly visual system, Biol. Cybern., 31: 163–168, 1978.
20. E. Buchner, Elementary movement detectors in an insect visual system, Biol. Cybern., 24: 85–101, 1976.
21. D. Dvorak, M. V. Srinivasan, and A. S. French, The contrast sensitivity of fly movement-detection neurons, Vision Res., 20: 397–407, 1980.
22. G. A. Horridge and L. Marcelja, Responses of the H1 neuron of the fly to jumped edges, Phil. Trans. R. Soc. Lond., B-329: 65–73, 1990.
23. B. Nabet and R. B. Pinter, Sensory Neural Networks: Lateral Inhibition, Boca Raton, FL: CRC Press, 1991.
24. D. C. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, San Francisco: W. H. Freeman, 1982.
25. M. V. Srinivasan, S. B. Laughlin, and A. Dubs, Predictive coding: A fresh view of inhibition in the retina, Proc. R. Soc. Lond., B-216: 427–459, 1982.
26. M. V. Srinivasan, R. B. Pinter, and D. Osorio, Matched filtering in the visual system of the fly: Large monopolar cells of the lamina are optimized to detect moving edges and blobs, Proc. R. Soc. Lond., B-240: 279–293, 1990.
27. J. J. Atick and A. N. Redlich, What does the retina know about natural scenes?, Neural Computation, 4: 196–210, 1992.
28. J. H. van Hateren, Theoretical predictions of spatio-temporal receptive fields of fly LMCs, and experimental validation, J. Comp. Physiol., A-171: 157–170, 1992.
ROBERT B. PINTER University of Washington
ABDESSELEM BOUZERDOUM Edith Cowan University
BAHRAM NABET Drexel University
CYCLIC CONTROL. See PERIODIC CONTROL. CYCLOCONVERTERS. See AC-AC POWER CONVERTERS. CYCLOTRONS. See SUPERCONDUCTING CYCLOTRONS AND COMPACT SYNCHROTRON LIGHT SOURCES.
Wiley Encyclopedia of Electrical and Electronics Engineering
Data Fusion, Standard Article
Dennis M. Buede, George Mason University, Fairfax, VA; Edward L. Waltz, Environmental Research Institute of Michigan, Ann Arbor, MI
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7103
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Overview; Major Issues and Algorithms; Advanced Applications; Areas of Further Study.
DATA FUSION

Data fusion consists of a set of quantitative and qualitative modeling techniques for integrating the reports of multiple, diverse sensors for the purpose of modeling targets in a domain of interest. These techniques must address the aggregation of selected sensor reports into "tracks" of hypothesized targets, estimate the current and predict the future positions of these tracks, infer the identification of the track in terms relevant to the domain, and begin higher level inferences related to the current status of the situation comprising the entire set of tracks and then consider future possibilities given the preceding predictions and inferences.

OVERVIEW

The basis of all fusion processes is the synergistic use of redundancy and diversity of information contained in multiple, overlapping observations of a domain to achieve a combined view that is better than any of the individual observations. The fusion process may improve a number of performance metrics over that which is achievable by any single source or sensor, for example, accuracy, resolution, timeliness, or state estimates (of individual entities or events). The fusion process may also expand the understanding of relationships between entities or events and the overall comprehension (of the entire domain). Finally, the fusion process may expand the spatial domain covered beyond that available to any single sensor (1–5). The data fusion process has also been referred to as multisensor fusion or sensor fusion (used for real-time sensor system applications), multisource fusion (referring to intelligence and law enforcement applications that combine intelligence sources), sensor blending, or information fusion. Data fusion generally refers to automated processes. However, manual data fusion processes have long been performed by humans to process volumes of data in numerous applications, for example, detective work in law enforcement, weather analysis and prediction, statistical estimation, intelligence analysis, and air traffic control, to name a few. Animals perform neurological data fusion processes by combining sensory stimuli and applying cognitive processes to perceive the environment about them. By combining sight, sound, smell, and touch, humans routinely reason about their local environment, identifying objects, detecting dangerous situations, while planning a route. The U.S. DoD Joint Directors of Laboratories established a data fusion subpanel in the mid-1980s to establish a reference process model and a common set of terms for the functions of data fusion. This model (Fig. 1) defines four levels, or stages, of functions that are oriented toward intelligence and military applications. The first level is object refinement, where sensor or source reports that contain observations (detections of objects and measurements about those objects) are aligned, associated, and combined to refine the estimate of state (location and kinematic derivatives) of detected objects using all available data on each object. The sequence of operations within object refinement is as follows.
1. Alignment. All observations must be aligned to a common spatial reference frame and a common time frame. For moving objects, observed at different times, the trajectory must be estimated (tracked) and observations propagated forward (or backward) in time to a common observation time.

2. Association. Once in a common time-space reference, a correlation function is applied to observations to determine which observations have their source in the same objects. Correlation metrics generally include spatial (same location), spectral (similar observed characteristics) and temporal (same time of appearance) parameters. If the correlation metric for a pair of observations is sufficiently high, the observations are assigned to a common source object.

3. State Estimation. The state of the object (the location if the object is stationary, or a dynamic track if the object is moving) is updated using all associated observations.

4. Object Identification. The identity of the object is also estimated using all available measurements. If the sensors measure diverse characteristics of the object (e.g.,
color and shape) automatic classification techniques are applied to resolve the identity.

Figure 1. Four levels of functions oriented toward intelligence and military applications: Level 1, object refinement; Level 2, situation refinement; Level 3, threat refinement; Level 4, resource refinement.

The inputs to object refinement are reports containing raw data, and the output product is organized and refined data, or information. The next levels of processing attempt to understand the information, that is, to create knowledge about the observed objects and their behavior as groups in their context. The second level is situation refinement, which has the goal of understanding the meaning of the assembly of objects by detecting relationships between objects, detecting aggregate sets of objects, and identifying patterns of behavior to create a model of the current situation (a scene). The third level is threat refinement, which looks to the future to predict potential courses of action (COAs) of objects and groups within the situation scene that may pose a threat (defensive focus) or opportunity (offensive focus) for action by the user of the fusion system. The fourth level is process refinement, which governs the overall fusion process to optimize the use of data to achieve the knowledge objectives of the system. Sensors are managed, and internal processes are adapted to optimize the information and knowledge products produced by the process.

MAJOR ISSUES AND ALGORITHMS

This section addresses the major topics of data fusion in more detail. For object refinement, the three most algorithmically intensive processes are association, state estimation, and object identification. Alignment is a measurement-intensive process. Situation and threat refinement employ higher level reasoning techniques from Artificial Intelligence. The section below on situation and threat refinement addresses many of the techniques relevant to these two phases. Resource refinement
is relatively new, and a cohesive body of literature is still developing, so this area is not discussed below. Techniques commonly discussed for resource refinement are knowledge-based systems, maximizing entropy, and decision analysis.

Association of Sensor Reports

The purpose of data association is to aggregate individual sensor reports from a common target into a group and to develop a dynamic model that represents a "track" based on the sequence of reports in the group. Generally, an initial set of sensor reports are collected in a "batch," and "report to report" association is used to establish the first set of tracks. Report to report association must continue to examine the possibility of finding new targets and to create the associated tracks. Once tracks are established, it is common to perform "report to track" association, that is, to determine if new sensor reports can be associated with existing tracks to assume that these reports are being received from sensed emanations of the hypothesized target. Multisensor architectures can either have one centralized track to which all sensor reports are associated or can form tracks for each sensor and then perform "track to track" association. The centralized track structure is generally considered the optimal approach. The association decision process is based on the notion of applying a correlation test to determine if pairs of reports, or a report and a track, are "close enough" to be associated. The optimum solution to the data association problem is a Bayesian solution because the problem is characterized by the randomness of target motion and sensor observations. Unfortunately, the Bayesian solution has a number of implementation problems that inhibit its success. First, the Bayesian solution requires prior probabilities about the number of targets, their locations, and their identity. Information for the formation of these prior probabilities is often available but is substantially more subjective than the uncertainties concern-
ing sensor capabilities and, to some degree, target motion. The second requirement for the Bayesian solution are the relative likelihoods of sensor reports given targets; this is the one input that is generally available and used in all solution approaches. Finally the Bayesian solution requires maintaining a multiple hypothesis inventory over many sensor reporting periods of an exponentially growing number of hypothetical associations and therefore hypothetical tracks. Data association techniques can be characterized as deferred logic (or multiple hypothesis) techniques and sequential assignment techniques. The deferred logic techniques attempt to address the complete solution space addressed by the optimal Bayesian method but usually opt for a maximum likelihood criterion that is consistent with assuming the prior probability distributions over spatial location and number of the targets are uniform. Deferred logic techniques emphasize pruning relatively low probability hypotheses and merging hypotheses that are similar in space and identity in order to handle the exponential growth of hypotheses (4,6,7). Sequential assignment techniques either formulate the problem as a hard assignment of each report to a report or to a track (or to a false alarm), which is the generalized assignment problem, or as a virtual report assignment, which involves the probabilistic computation of relative likelihoods (6–8). In the hard assignment approach (the most commonly used approach), the goal is to assign recently arrived sensor reports to existing tracks or other recent reports or discard them as false alarms such that the combined likelihood of these assignments is maximized. The likelihood of each assignment is often measured by an inverse function of the ‘‘closeness’’ between the report and the track or report under consideration for assignment; maximum likelihood is therefore also minimum distance. Solution approaches to the assignment problem consist of Lagrangian relaxation, relaxation for network flows, a generalization of the Signature method, and an auction algorithm for the transportation problem (8). A number of good, suboptimal assignment algorithms have been developed: the Munkres, Ford-Fulkerson, and the Hungarian methods (6). The most often implemented approach is called the ‘‘greedy’’ or ‘‘row-column heuristic’’ assignment algorithm in which a track is picked at random, the reports from each sensor that are closest to it are assigned to it, a second track is picked at random, the remaining reports from each sensor that are closest to it are assigned it, and so on. This assignment approach is particularly suited to the difficult problem of report to report association for passive sensors. Because passive sensors do not provide any range information, the problem of an exponentially growing number of ghost targets arises as the number of sensors and reports increases (8). This passive sensor association problem is NP (‘‘nondeterministic polynomial’’) hard (i.e., in all likelihood cannot be solved by an algorithm of polynomial time complexity) when the number of sensors is three or greater; the deferred logic approach becomes completely impractical for this problem when there are 10 or more targets. The other sequential assignment technique is called Joint Probabilistic Data Association (JPDA) (4,6,7). JPDA creates virtual reports by combining the likelihoods that all assignable reports belong to an existing track or should be used to create a new track. 
This approach has been used successfully when the number of targets is known and is small compared to the clutter (false alarm) environment.
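The greedy row-column heuristic described above can be written down in a few lines. The following sketch is an added illustration, not a reference implementation: it gates candidate pairings with a plain squared Euclidean distance and an arbitrary threshold, and the track and report coordinates are made-up numbers.

```python
import numpy as np

def greedy_row_column(tracks, reports, gate=9.0, rng=None):
    """Row-column heuristic sketch: visit tracks in random order and give each one
    the closest still-unassigned report, provided it falls inside a distance gate."""
    rng = rng or np.random.default_rng()
    unassigned = set(range(len(reports)))
    assignment = {}                                   # track index -> report index
    for t in rng.permutation(len(tracks)).tolist():
        if not unassigned:
            break
        d2 = {r: float(np.sum((tracks[t] - reports[r]) ** 2)) for r in unassigned}
        best = min(d2, key=d2.get)
        if d2[best] <= gate:                          # otherwise treat as false alarm / new track
            assignment[t] = best
            unassigned.remove(best)
    return assignment, unassigned

tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])           # predicted positions
reports = np.array([[0.5, -0.2], [9.2, 10.5], [19.4, 0.3], [40.0, 40.0]])
assignment, leftovers = greedy_row_column(tracks, reports, rng=np.random.default_rng(0))
print("track -> report:", assignment)
print("unassigned reports (false alarms or new tracks):", leftovers)
```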
Table 1. Common Distance Measures Between Vectors

City Block:          |x1 − x2|
Euclidean:           [(x1 − x2)^2]^(1/2)
Minkowski:           [(x1 − x2)^m]^(1/m)
Weighted Euclidean:  [(x1 − x2)^T W (x1 − x2)]^(1/2)
Mahalanobis:         (x1 − x2)^T R^(−1) (x1 − x2)
Bhattacharyya:       2 (x1 − x2)^T (R1 + R2)^(−1) (x1 − x2) + 4 ln[ |(R1 + R2)/2| / (|R1|^(1/2) |R2|^(1/2)) ]
Chernoff:            (α(1 − α)/2) (x1 − x2)^T [(αR1 + (1 − α)R2)/2]^(−1) (x1 − x2) + 0.5 ln[ |(αR1 + (1 − α)R2)/2| / (|R1|^α |R2|^(1−α)) ]
Divergence:          0.5 [ (x1 − x2)^T (R1 + R2)^(−1) (x1 − x2) + tr(R1^(−1) R2 + R2^(−1) R1 − 2I) ]
Product:             2 (x1 − x2)^T (R1 + R2)^(−1) (x1 − x2) + 2 ln[ 4π^2 |(R1 + R2)| ]
There are many possible dimensions on which to measure the closeness of reports to each other or the closeness of reports to tracks: spatial, velocity, and acceleration dimensions; target characteristics such as size, shape, and area; target emanations such as radar, ultraviolet, and infrared. Unfortunately, reports from different sensors will not contain the same dimensions. For example, a radar may provide two spatial and velocity dimensions (range and azimuth), while an infrared sensor provides the spatial dimensions of azimuth and elevation angles. The data association must be performed on the overlap of dimensions; this overlap may be very limiting. The more sensors there are contributing to the data association problem, the greater the overall pool of overlapping dimensions. Nine of the most common distance measures (correlation metrics) used in association are shown in Table 1. These measures define the distance between two vectors x1 and x2 that represent the overlap of dimensions available from a sensor report and an existing track (or other sensor report). The city block distance measures distance along the sides of the rectangle formed by the measurements. The Euclidean distance is the shortest distance between two points. The Minkowski distance is the generalized Euclidean distance, where m is a number greater than 1 and not equal to 2. The Euclidean distance can also be generalized by weighting the dimensions on some basis to reflect the fact that the dimensions might not be commensurate, for example, spatial distance and velocity. The Mahalanobis distance is a special weighted generalization in which the weighting matrix is the covariance matrix (R) for the vectors representing the uncertainty associated with the measurement process; both vectors are assumed to have equal covariance matrices, which works for report to report association when the reports are from the same sensor. Bhattacharyya further generalized the Mahalanobis distance by allowing unequal covariance matrices for the two vectors, a covariance for the measurement process error of each sensor or for the sensor and the track. Chernoff's definition of distance is a further generalization of the Bhattacharyya distance. The Divergence and Product measures have also been proposed.
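Two of the Table 1 measures can be evaluated directly, as a quick illustration; the vectors and covariance matrices below are made-up numbers, and the expressions follow the table entries as reconstructed above.

```python
import numpy as np

# A report/track pair that overlaps in two dimensions (say range and azimuth),
# with measurement covariances R1 and R2 for the two sources.
x1 = np.array([100.0, 0.20])            # report
x2 = np.array([103.0, 0.23])            # track prediction
R1 = np.diag([4.0, 0.01])
R2 = np.diag([9.0, 0.02])
d = x1 - x2

mahalanobis = float(d @ np.linalg.inv(R1) @ d)     # Table 1 form, equal covariances assumed
bhatt = (2.0 * float(d @ np.linalg.inv(R1 + R2) @ d)
         + 4.0 * np.log(np.linalg.det((R1 + R2) / 2.0)
                        / np.sqrt(np.linalg.det(R1) * np.linalg.det(R2))))
print("Mahalanobis distance  :", round(mahalanobis, 3))
print("Bhattacharyya distance:", round(bhatt, 3))
```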
State Estimation

In data fusion, the kinematic state of the track is typically estimated by a tracking algorithm known as the Kalman filter. The filter estimates the kinematic state of the target object using a sequence of all reports associated with that target (6,7). The development of the Kalman filter assumes a motion model for the track and a measurement model for the sensors. The typical form of the motion model is

$$x(k + 1) = \Phi\,x(k) + q(k)$$

where x is the n-dimensional state vector of the target (spatial position, velocity, etc.); k is the discrete time increment; Φ is the transition matrix from one state to another; q is the n-dimensional noise vector for target motion, called plant noise; and Q is the n by n covariance matrix of the plant noise. The measurement model usually takes the form

$$y(k) = H\,x(k) + v(k)$$

where y is the m-dimensional measurement vector of x; H is the m by n measurement matrix; v is the measurement noise; and R is the covariance matrix of the measurement noise. Note, for multisensor systems, there will be a measurement model for each sensor. Also, note that the covariance matrices associated with plant and measurement noise may vary with time. The Kalman filter equations become

$$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,[y(k) - H\,\hat{x}(k|k-1)]$$
$$K(k) = P(k|k-1)\,H^T\,[H\,P(k|k-1)\,H^T + R]^{-1}$$
$$P(k|k) = [I - K(k)\,H]\,P(k|k-1)$$
$$\hat{x}(k+1|k) = \Phi\,\hat{x}(k|k)$$
$$P(k+1|k) = \Phi\,P(k|k)\,\Phi^T + Q$$

where x̂(k|k) is the estimate of x at time k given the sensor data through time k. A major implementation issue associated with the Kalman filter is partitioning the state vector into independent subvectors so that multiple, simpler Kalman filters can be used to compute the estimated state at each point in time. Significant work has taken place on multiple sensor Kalman filtering and the simultaneous use of multiple Kalman filters based on different motion models to derive the best estimate of the target state in situations in which the target may turn, dive, or climb, and accelerate or decelerate. The Kalman filter assumes there is one, linear motion model of the target's motion and a linear measurement model. Enhancements of the Kalman filter are called the extended Kalman filter (EKF) and the interacting multiple model (IMM) Kalman filter. The EKF is a suboptimal estimation algorithm that assumes nonlinear dynamic or measurement models. The IMM posits several motion models for the target. The IMM mixes hypotheses across the filter results from the previous time period in developing an overall estimate in the current time period, which is used to initialize the several filters for the next time period (7).
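The filter equations above translate directly into code. The sketch below is an added illustration for a one-dimensional, constant-velocity target with position-only reports; the noise covariances and the report values are assumed numbers.

```python
import numpy as np

def kalman_step(x, P, y, Phi, Q, H, R):
    """One cycle of the filter equations above: update with report y, then predict."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # gain K(k)
    x = x + K @ (y - H @ x)                               # x^(k|k)
    P = (np.eye(len(x)) - K @ H) @ P                      # P(k|k)
    x_pred = Phi @ x                                      # x^(k+1|k)
    P_pred = Phi @ P @ Phi.T + Q                          # P(k+1|k)
    return x, P, x_pred, P_pred

dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])                   # constant-velocity motion model
Q = 0.01 * np.eye(2)                                      # plant noise covariance
H = np.array([[1.0, 0.0]])                                # position-only measurements
R = np.array([[1.0]])                                     # measurement noise covariance

x, P = np.array([0.0, 0.0]), 10.0 * np.eye(2)             # initial state and covariance
x_pred, P_pred = x, P
for k, y in enumerate([1.1, 2.0, 2.8, 4.2, 5.1]):         # simulated position reports
    x, P, x_pred, P_pred = kalman_step(x_pred, P_pred, np.array([y]), Phi, Q, H, R)
    print(f"k={k}  position={x[0]:6.2f}  velocity={x[1]:6.2f}")
```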
Object Identification

The identification of objects that are being tracked on the basis of multiple sensor reports must deal with the fact that the sensors are again observing the same object using different phenomenology, and possibly from different vantage points. Most targets are not completely symmetrical along either vertical or horizontal axes. The advantage of being able to "see" a target through different "lenses" and different perspectives also introduces a number of complications. These complications are ultimately reflected in uncertainty about what was seen and what it means to have seen that. As a result, most methods for solving the object identification problem are based upon modeling and updating uncertainty using new information contained in the sensor reports. There are hard sensor integration techniques that rely on each sensor declaring what type of target the sensor believes the object to be. These techniques then use rule-based algorithms that address the combination of the different sensor declarations with the ability of the sensors to make accurate identification declarations. In complex environments with many types of objects, these techniques tend to fail often due to a brittleness in their logic structure and the limited resolution of their representation of uncertainty. The three most commonly proposed methods for dealing with uncertainty explicitly are probability theory, evidence theory, and fuzzy sets.

Probability Theory. Probability theory is the common method for representing uncertainty for object identification (1,9,10). Within probability theory, Bayes's rule addresses how one should optimally update one's uncertainty as new information becomes available:

$$p(\mathrm{ID}_k \mid \mathrm{rpts},\ \mathrm{prior\ info}) = \frac{p(\mathrm{rpts} \mid \mathrm{ID}_k,\ \mathrm{prior\ info})\; p(\mathrm{ID}_k \mid \mathrm{prior\ info})}{p(\mathrm{rpts} \mid \mathrm{prior\ info})}$$
Bayes’s rule says the posterior probability of the kth ID being correct given a set of sensor reports and the prior information about the situation equals the likelihood of receiving those reports given the kth ID being correct and the prior information times the probability of the kth ID being correct given the prior information, divided by the probability of the reports being received given the prior information. A common criticism of the Bayesian approach is that the prior information is only available from expert judgments; yet these experts commonly have a great deal of information about the objects in the environment and the characteristics of these objects. The likelihood of specific reports given the various possible identifications typically requires much of the same information, and these likelihoods are crucial for the hard sensor integration techniques as well as many of the competing approaches. Evidence Theory. Evidence theory operates on a frame of discernment, ⌰, for which there are a finite number of focal elements. The power set of ⌰, 2⌰, is the set of all subsets of ⌰. Evidence theory allows one to attach a probability to any member of the power set of the frame of discernment. Evidence theory (1) uses Dempster’s rule to combine multiple belief functions, say from different sensors, and then (2) computes the supportability and plausibility measures for each
element of the power set of Θ. Dempster's rule and the measures called supportability and plausibility are defined as follows (11–12). A valid belief function assigns a probability measure to each element of 2^Θ. It is easily shown that for a frame of discernment with n focal elements, there will be 2^n elements in the power set, including the null set and Θ itself. Evidence theorists interpret a probability greater than 0 that has been assigned to Θ as the probability that could be assigned to any of the other 2^n − 1 elements of the power set. This probability is said to be "uncommitted" and is described as a measure of ignorance. This is the concept that is truly unique to evidence theory and, as shown later, causes the two algorithms to diverge. One major effect of uncommitted belief is: b(A) + b(A′) ≤ 1, where b(A) is the probability assigned to A, which is an element of 2^Θ, and b(A′) is the probability assigned to all sets of 2^Θ whose intersection with A is null. Since Θ has a non-null intersection with A and A′, any time the belief in Θ > 0, the above equation will be a strict inequality. Dempster's rule is defined as follows for any two belief functions b1(A) and b2(B), where Θ has been divided into two possibly different representations {An} and {Bm}:

b(Ai ∩ Bj) = b1(Ai) b2(Bj) / (1 − Q)

where Q = Σr Σs b1(Ar) b2(Bs), such that Ar ∩ Bs are all instances of the null event. It is easily shown that Dempster's rule is equivalent to Bayes's rule when there is no uncommitted belief. It can also be shown that the uncommitted belief will decrease with the addition of each new belief function having some committed belief. Since evidence theory allows some belief to be uncommitted, it is possible to develop lower and upper measures of uncertainty for any element of 2^Θ. Supportability, the lower measure, is defined by

s(Aj) = Σr b(Ar), ∀ Ar ⊆ Aj

The upper measure, called plausibility, is

pl(Aj) = 1 − s(Aj′), where Aj′ = ∪s As, ∀ As such that As ∩ Aj = ϕ

Table 2. TWS Sensor Reports

Report States             TWS Report 1   TWS Report 2
SAM-X (any state)         0.0            0.2
SAM-Y (any state)         0.3            0.2
SAM-Y.ttr                 0.0            0.4
SAM-Y.acq or SAM-Y.ttr    0.2            0.0
SAM-Y.ttr or SAM-Y.ml     0.4            0.0
Unknown (uncommitted)     0.1            0.2
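Dempster's rule, supportability, and plausibility can be implemented compactly by representing each belief function as a mapping from subsets of the frame of discernment to mass. The sketch below is one such implementation, shown applied to the two TWS reports of Table 2; the set and variable names are illustrative choices, not part of the original example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two belief functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the null event (Q)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def supportability(m, a):
    """s(A): total mass assigned to subsets of A."""
    return sum(w for s, w in m.items() if s <= a)

def plausibility(m, a):
    """pl(A): one minus the mass assigned to sets disjoint from A."""
    return 1.0 - sum(w for s, w in m.items() if not (s & a))

# Belief functions for the two TWS reports of Table 2
THETA = frozenset({"X.acq", "X.ttr", "X.ml", "Y.acq", "Y.ttr", "Y.ml"})
SAM_X = frozenset({"X.acq", "X.ttr", "X.ml"})
SAM_Y = frozenset({"Y.acq", "Y.ttr", "Y.ml"})
rpt1 = {SAM_Y: 0.3, frozenset({"Y.acq", "Y.ttr"}): 0.2,
        frozenset({"Y.ttr", "Y.ml"}): 0.4, THETA: 0.1}
rpt2 = {SAM_X: 0.2, SAM_Y: 0.2, frozenset({"Y.ttr"}): 0.4, THETA: 0.2}

fused = dempster_combine(rpt1, rpt2)
# fused[frozenset({"Y.ttr"})] ~ 0.49 and plausibility(fused, frozenset({"Y.ttr"})) ~ 0.98,
# matching the combined results quoted in the text and in Table 5.
```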
Fuzzy Sets. Another approach to integrating various statements associated with ambiguous measures of uncertainty is fuzzy sets (13). In fuzzy sets, the membership function of a set is allowed to take on any value in the closed interval from 0 to 1:

zΨ(x): Ω → [0, 1]

where z is the membership (or characteristic) function, Ψ is the fuzzy set, x is a focal element of the fuzzy set, and Ω is the universal set. From this definition and a number of axioms (e.g., commutative and associative), the most common fuzzy operators on fuzzy set operations are

zA′(x) = 1 − zA(x), ∀x such that zA(x) > 0
zA∪B(x) = max[zA(x), zB(x)], ∀x ∈ Ω
zA∩B(x) = min[zA(x), zB(x)], ∀x ∈ Ω

A very important concept is the cardinality of a fuzzy set. The cardinality of a fuzzy set, |Ψ|, is computed with the same equation as the cardinality of a crisp set:

|Ψ| = Σx∈Ω zΨ(x)

The notion of subsethood, the degree to which A is a subset of B, is the fuzzy set approach often proposed for sensor fusion (14). The subsethood theorem states that the degree to which A is a subset of B is the cardinality of A intersected with B divided by the cardinality of A:

S(A, B) = |A ∩ B| / |A|

The power set of a fuzzy set B, PB, is the set of sets such that

A ∈ PB ⇔ zA(x) ≤ zB(x), ∀x

So we can see that if A is an element of the power set of B, the subsethood of A with respect to B will be 1.0. The subsethood equation above suggests an analogy between subsethood and probability. If we interpret cardinality to be analogous to probability, the numerator of the right-hand side is analogous to the joint probability of "A and B" and the denominator to the marginal probability of A, leading to the interpretation that S(A, B) is analogous to the conditional probability of B given A, p(B|A). It is easy to show that

S(A, B) = S(B, A)|B| / |A|

Table 3. Likelihoods for TWS Reports

Focal States   p(rpt 1 | state)   p(rpt 2 | state)
SAM-X.acq      0.1                0.4
SAM-X.ttr      0.1                0.4
SAM-X.ml       0.1                0.4
SAM-Y.acq      0.6                0.4
SAM-Y.ttr      1.0                0.8
SAM-Y.ml       0.8                0.4
Table 4. Posteriors for First and Both Reports

Focal States   p(state | rpt 1)   p(state | rpts 1 & 2)
SAM-X.acq      0.037              0.027
SAM-X.ttr      0.037              0.027
SAM-X.ml       0.037              0.027
SAM-Y.acq      0.22               0.16
SAM-Y.ttr      0.37               0.54
SAM-Y.ml       0.30               0.22

Table 5. Supportability and Plausibility After Both Reports

Focal States   s(state after rpts 1 & 2)   pl(state after rpts 1 & 2)
SAM-X          0.02                        0.04
SAM-Y          0.96                        0.98
SAM-X.acq      0.00                        0.04
SAM-X.ttr      0.00                        0.04
SAM-X.ml       0.00                        0.04
SAM-Y.acq      0.00                        0.29
SAM-Y.ttr      0.49                        0.98
SAM-Y.ml       0.00                        0.39
This furthers the analogy between subsethood and probability, because this equation is clearly reminiscent of Bayes's rule. The following results, in which the fuzzy set E represents evidence and Ai represents the identification states of interest, are also easy to show:

0 ≤ S(E, Ai) ≤ 1
S(E, Ai) = 1, if E ⊂ Ai (E ∈ PAi)
S(E, Ai ∪ Aj) = S(E, Ai) + S(E, Aj) − S(E, Ai ∩ Aj)
S(E, Ai ∩ Aj) = S(E, Aj) S(Aj ∩ E, Ai)

These results generalize to multiple items of evidence:

S(E1 ∩ E2, Ai) = S(E1, Ai ∩ E2) / S(E1, E2) = |E1 ∩ E2 ∩ Ai| / |E1 ∩ E2|

Example. For a sample comparison of the Bayesian, evidence theory, and fuzzy set approaches consider the following Threat Warning example that has been used to illustrate evidence theory. There are two types of command guided surface-to-air missile systems about which friendly aircraft are concerned, SAM-X and SAM-Y. The radar of each missile system has three operational states: acquisition (acq), target track (ttr), and missile launch (ml). A radar may be in only one state at a point in time. The friendly aircraft has a threat warning system (TWS) with sensors that monitor such radar parameters as radar frequency (RF) and pulse repetition frequency (PRF) and attempt to determine which SAM radar, and its associated operational state, is painting the aircraft. Suppose that the TWS of the friendly aircraft provides two independent sensor reports (Table 2) about the same SAM site within a relatively short period of time.

There may be several ways to convert this information into likelihoods for application in Bayes's theorem; our approach here is to add the values that do not conflict with the likelihood being calculated. For example,

p(report 1 | Y.acq) = 0.3 + 0.2 + 0.1 = 0.6

Table 3 shows the likelihoods for both TWS reports. Assuming uniform priors over the six focal states, Bayes's theorem yields the results shown in Table 4 under the assumption that we compute a posterior after receiving the first report and then again after both reports. The evidence theory solution begins by assuming that all prior information is uncommitted. Therefore, the updated uncertainty after the first report will be the first report. The results after the second report are: SAM-X is .02, SAM-Y is .17, SAM-Y.ttr is .49, SAM-Y.acq or SAM-Y.ttr is .10, SAM-Y.ttr or SAM-Y.ml is .20, and uncommitted is .02. The supportability and plausibility after the second report are shown in Table 5. For the fuzzy solution to this problem we assume that each report element, for example, SAM-Y (any state), is a fuzzy set. Using the extension principle (4), which is a max(min(. . .)) operation, we can compute the fuzzy set associated with each TWS report. Table 6 shows how the fuzzy set for the first report ("Rpt 1") is computed and then shows the results for "Rpt 2" and the combination (intersection) of reports 1 and 2. The fuzzy results, using the subsethood theorem, for report 1 and then for both reports are shown in Table 7. The results of Bayes's rule, evidence theory, and the subsethood theorem are shown in Table 8. These results are not dissimilar. However, there are situations in which the results will be significantly different.
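For readers who want to reproduce the Bayesian column of the comparison, the calculation is a direct application of Bayes's rule with the Table 3 likelihoods and uniform priors. A minimal sketch:

```python
# Likelihoods from Table 3, ordered [X.acq, X.ttr, X.ml, Y.acq, Y.ttr, Y.ml]
rpt1 = [0.1, 0.1, 0.1, 0.6, 1.0, 0.8]
rpt2 = [0.4, 0.4, 0.4, 0.4, 0.8, 0.4]
prior = [1.0 / 6] * 6                    # uniform prior over the six focal states

def bayes_update(prior, likelihood):
    """Posterior proportional to prior times likelihood, renormalized."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

post1 = bayes_update(prior, rpt1)   # ~[0.037, 0.037, 0.037, 0.22, 0.37, 0.30], Table 4
post2 = bayes_update(post1, rpt2)   # ~[0.027, 0.027, 0.027, 0.16, 0.54, 0.22], Table 4
```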
Table 6. Fuzzy Sets for TWS Reports (focal states of the universal set)

Report Elements & Reports   X.acq   X.ttr   X.ml   Y.acq   Y.ttr   Y.ml
"Y"                         0.0     0.0     0.0    0.3     0.3     0.3
"Y.ttr or Y.ml"             0.0     0.0     0.0    0.0     0.4     0.4
"Y.acq or Y.ttr"            0.0     0.0     0.0    0.2     0.2     0.0
"Uncommitted"               0.1     0.1     0.1    0.1     0.1     0.1
"Rpt 1"                     0.1     0.1     0.1    0.3     0.4     0.4
"Rpt 2"                     0.2     0.2     0.2    0.2     0.4     0.2
"Rpt 1 & Rpt 2"             0.1     0.1     0.1    0.2     0.4     0.2
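The subsethood values in Table 7 follow directly from the fuzzy sets in Table 6, using min as the intersection operator and the sum of memberships as cardinality. A short sketch, with each focal state treated as a crisp singleton fuzzy set:

```python
def cardinality(f):
    """Fuzzy cardinality: the sum of the membership values."""
    return sum(f)

def subsethood(a, b):
    """S(A, B) = |A intersect B| / |A|, with min as the intersection operator."""
    intersection = [min(x, y) for x, y in zip(a, b)]
    return cardinality(intersection) / cardinality(a)

# Membership values over [X.acq, X.ttr, X.ml, Y.acq, Y.ttr, Y.ml], from Table 6
rpt1 = [0.1, 0.1, 0.1, 0.3, 0.4, 0.4]
both = [0.1, 0.1, 0.1, 0.2, 0.4, 0.2]        # "Rpt 1 & Rpt 2"
sam_y_ttr = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]   # crisp singleton for SAM-Y.ttr

print(round(subsethood(rpt1, sam_y_ttr), 2))  # 0.29, as in Table 7
print(round(subsethood(both, sam_y_ttr), 2))  # 0.36, as in Table 7
```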
Table 7. Subsethood for First and Both Reports

Focal States   S(rpt 1, state)   S(rpts 1 & 2, state)
SAM-X.acq      0.072             0.091
SAM-X.ttr      0.072             0.091
SAM-X.ml       0.072             0.091
SAM-Y.acq      0.21              0.18
SAM-Y.ttr      0.29              0.36
SAM-Y.ml       0.29              0.18

Table 8. Posterior, Supportability, and Subsethood After Second Report

Focal States   p(state | rpts 1 & 2)   s–pl(state after 2 reports)   S(rpts 1 & 2, state)
SAM-X.acq      0.027                   0.00–0.04                     0.091
SAM-X.ttr      0.027                   0.00–0.04                     0.091
SAM-X.ml       0.027                   0.00–0.04                     0.091
SAM-Y.acq      0.16                    0.00–0.29                     0.18
SAM-Y.ttr      0.54                    0.49–0.98                     0.36
SAM-Y.ml       0.22                    0.00–0.39                     0.18

Situation and Threat Assessment

Situation and threat refinement are processes of reasoning about aggregations of objects and projecting objects and aggregates of objects forward in time. The locations, identities, activities, and time for which the objects are expected to remain doing those activities are the major characteristics associated with the objects and their aggregates that are of interest. Reasoning systems are often considered to have three major elements: a knowledge representation scheme, an inference or evaluation process, and a control structure for searching and computation. Table 9 presents many alternate approaches for representing knowledge, inferring or evaluating, and controlling the reasoning process. There is no suggested linkage among items on the same row. Rather, this table presents many options for addressing each of the three main elements of a reasoning system. To build a reasoning system, one must select one or more options for representing knowledge, one or more options for conducting inference and evaluation, and one or more methods for controlling the reasoning process.

Table 9. Element Option Table for Reasoning Systems

Knowledge Representation Scheme: Rule; Frame; Hierarchical Classification; Semantic Net; Neural Net; Nodal Graphs; Options, Goals, Criteria, Constraints; Script; Time Map; Spatial Relationships; Analytical Model.

Inference or Evaluation Process: Deduction; Induction; Abduction; Analogy; Classical Statistics; Bayesian Probability; Evidence Theory; Polya's Plausible Inference; Fuzzy Sets and Fuzzy Logic; Confidence Factors; Decision Theory (Analysis); Circumscription.

Control Structure: Search; Reason Maintenance System; Assumption-based Truth Maintenance; Hierarchical Decomposition; Control Theory; Opportunistic Reasoning; Blackboard Architecture.

ADVANCED APPLICATIONS

While the primary research in data fusion has focused on military applications for detecting and tracking military targets, data fusion processes are being applied in a broad range of civil and commercial applications as well. Image data fusion applications combine multiple images of a common scene or object by registering the imagery to produce enhanced (spatial or spectral) composite imagery, detect changes over time, or to merge multiple video sources. Medical applications include the fusion of magnetic resonance (MR) and computer tomography (CT) images into full 3-D models of a human body for diagnosis and treatment planning. The fusion of geospatial data for mapping, charting, and geodetic applications includes registering and linking imagery, maps, thematic maps, and spatially-encoded text data in a common data base into a geographic information system (GIS). Robotic applications require the registration and combination of imaging, tactile, and other sensors for inspection and manipulation of parts. Financial applications, similar to intelligence uses, require the association of vast amounts of global financial data to model and predict market behaviors for decision analysis.

AREAS OF FURTHER STUDY

Data fusion technology will always be faced with demands to accept higher data rates and volumes as sensing technologies provide higher fidelity data, and data base technologies provide greater capacity to store information. Advanced applications of data fusion will also include integrated sensors, robust processing, learning capabilities, robust spatial data structures, and spatial reasoning. Key areas of further study in data fusion include:

Optimal Sensor and Process Management. The management of complex networks of sensors to achieve optimum information-based performance and operational effectiveness will require advances in the application of optimal search and programming methods. Similarly, as data fusion processing networks grow in complexity, advanced management methods must be developed to allocate diverse networked fusion resources to acquired data sets.

Uncertain Data Management. The ability to quantify uncertainty, combine multiple uncertain data elements,
and infer uncertain information and knowledge requires advances in methods to (1) combine, manage, and represent uncertainty, (2) create and maintain multiple hypotheses, and (3) provide traceability to source data through the use of "pedigree" data.

Dynamic Databases and Information Representation. As the volumes of data to be fused increase, means for mediating between heterogeneous databases must be developed, and more flexible methods of representing information models (text, hypertext, spatial data, imagery, and video, etc.) must be developed.

Knowledge Prediction. High level and adaptive, learning models of complex processes must be developed to predict the behavior of groups and complex relationships beyond the level of simple level 1 target tracks.

Visualization. The results of most data fusion systems must ultimately be presented to human decision-makers, requiring advances in the methods to efficiently display high volumes of complex information and derived knowledge, and to provide the ability to "drill-down" to the underlying data fusion processes and sources of data.

BIBLIOGRAPHY

1. E. L. Waltz and J. Llinas, Data Fusion with Military Applications, Norwood, MA: Artech House, 1990.
2. D. L. Hall, Mathematical Techniques in Multisensor Data Fusion, Norwood, MA: Artech House, 1992.
3. B. V. Dasarathy, Decision Fusion, Washington, DC: IEEE Computer Society Press, 1994.
4. R. C. Luo and M. G. Kay, Multisensor Integration and Fusion for Intelligent Machines and Systems, Norwood, NJ: Ablex, 1995.
5. D. L. Hall and J. Llinas, An Introduction to Multisensor Data Fusion, Proc. IEEE, 85: 6–23, 1997.
6. S. S. Blackman, Multiple-Target Tracking with Radar Applications, Norwood, MA: Artech House, 1986.
7. Y. Bar-Shalom and X. Li, Multitarget Multisensor Tracking Principles and Techniques, Storrs, CT: YBS Publishing, 1995.
8. K. R. Pattipati, et al., A New Relaxation Algorithm and Passive Sensor Data Association, IEEE Trans. Autom. Control, 37: 198–213, 1992.
9. D. M. Buede and P. Girardi, A Target Identification Comparison of Bayesian and Dempster-Shafer Multisensor Fusion, IEEE Trans. Syst. Man Cybern., 27A: 569–577, 1997.
10. J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann, 1988.
11. A. P. Dempster, A Generalization of Bayesian Inference, J. Roy. Statistical Soc., Series B, 30 (2): 205–247, 1968.
12. G. Shafer, A Mathematical Theory of Evidence, Princeton, NJ: Princeton Univ. Press, 1976.
13. L. A. Zadeh, Fuzzy Sets, Inf. Control, 8: 338–353, 1965.
14. B. Kosko, Neural Networks and Fuzzy Systems, Englewood Cliffs, NJ: Prentice-Hall, 1992.
DENNIS M. BUEDE George Mason University
EDWARD L. WALTZ Environmental Research Institute of Michigan
DATA MANAGEMENT. See DATABASES. DATA MART. See DATAWAREHOUSING. DATA MINING. See DATA REDUCTION; MACHINE LEARNING.
DATA MODELS, OBJECT-ORIENTED. See OBJECTORIENTED DATABASES.
Wiley Encyclopedia of Electrical and Electronics Engineering
Decision Analysis, Standard Article
Kathryn B. Laskey, George Mason University, Fairfax, VA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7104
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Introduction; Preference Modeling; Option Generation; Uncertainty Modeling; Maximization of Subjective Expected Utility; Decision Trees and Influence Diagrams; Value of Information; Sensitivity Analysis and Model Criticism; Additional Issues in Decision Analysis.
DECISION ANALYSIS
INTRODUCTION Decision analysis is engineering applied to decision making. The term decision analysis is typically used to refer to a set of analytical tools applied by decision analysts to arrive at recommendations for action. While these tools are important, identifying them with decision analysis is like identifying carpentry with hammers and saws. More important than the tools is the structured process of decomposition, analysis and synthesis that decision analysts apply to decision problems. Fundamentally, decision analysis is a way of thinking. Like all engineering disciplines, decision analysis is based on application of the scientific method and rational analysis. But decision analysis makes explicit what other engineering disciplines often leave implicit, that rationality and the scientific process are tools for helping to devise policies and construct solutions that serve our values. Decision analysis forms a bridge between the rational world of structured, analytic thinking and the esthetic world of values and feelings. Properly applied, the decision analysis process improves individual, organizational and societal decision making by helping to construct decision policies that serve our core values. The decision analysis process decomposes a decision problem into components, analyzes the components, synthesizes them into a decision model, and uses the model to evaluate options and recommend actions. Decomposition focuses on three basic questions: 1. What do I value? The decision maker considers the fundamental objectives served by the decision, develops ways to measure how well these objectives are achieved, and organizes the information into a preference model for evaluating policy options. 2. What can I do? The decision maker identifies a set of policy options under consideration. 3. What might happen? The decision maker considers the consequences of the options under consideration and evaluates how well the consequences meet the fundamental objectives. When consequences are uncertain and the uncertainty has an impact on which option is preferred, an uncertainty model is constructed. Options for gathering information to reduce uncertainty are identified and considered. Decision analysis follows the standard phases of the engineering process: problem definition, analysis, design of a solution, criticism and refinement of the proposed solution, and implementation. Problem definition involves setting the decision context: who are the actors, what are their roles, whose objectives (individual or organizational) are to be served by the decision, how broad or narrow is the scope of options to be considered, what constraints must be taken into account, what is the time frame for decision making and policy implementation, what sources of information are available. The analysis phase constructs a preference model and/or an uncertainty model. In the solution design
phase the model is applied to evaluate candidate policy options and select a preferred option. Solution criticism and refinement involves examining the results of the model, performing sensitivity analysis on key inputs, evaluating how well the model captures the essentials of the problem for the purpose at hand, and making a final policy selection. The final step of a decision analysis is implementation of the chosen option. Although often omitted from texts on decision analysis, strategies for implementation and execution monitoring are often as important to achieving the decision maker's objectives as identifying the best policy.

PREFERENCE MODELING

In decision analysis, preferences are modeled by a utility function that measures the decision maker's relative degrees of preference for different consequences. Preference modeling is the process of constructing a utility function that captures the decision maker's values. The first step in preference modeling is identifying the fundamental objectives for the decision context. It is important to distinguish fundamental objectives, which are intrinsically important to the decision maker, from means objectives, which are important only to the degree to which they support fundamental objectives. Fundamental objectives are decomposed hierarchically. A good final set of objectives is complete, concise, non-redundant, separable, measurable, operational, and controllable. Next, attributes of value are defined to measure how well outcomes satisfy each of the fundamental objectives. The set of attributes should cover the fundamental objectives completely but without redundancy. Once attributes have been defined, a single attribute utility function is constructed for each attribute, and these are aggregated into a multiattribute utility function. The model is constructed by following a structured elicitation process. The preference elicitation process is based on constraints imposed by the mathematics of utility theory, insights and methods from experimental psychology and psychometrics, and the distilled wisdom of years of decision analysis practice. The decision maker provides judgments that enable the modeler to select an appropriate functional form for the utility function and determine parameters of the function (most notably, the shape of the single-attribute utility curves and the relative weight to be given to different attributes). The most commonly applied functional form for the multiattribute utility function is a linear weighted average of single-attribute utility functions:

u(x) = Σi wi ui(xi)    (1)
In this expression, ui(xi) is the single-attribute utility function for the ith attribute of value. It is commonly specified to range on a scale between 0 (the utility of a "reasonable worst" option) and 1 (the utility for a "reasonable best" option). The weights are positive numbers that sum to 1. The weights measure the relative value to the decision maker of "swings" from reasonable worst to reasonable best on the respective attributes. When the problem involves uncertainty, the utility function reflects not just ordinal preferences but also attitude toward risk. A concave utility func-
tion on a numerical attribute reflects aversion to risk, i.e., the decision maker prefers a certain option to a gamble with the same expected value. A linear utility function is risk neutral; a convex utility function is risk-seeking. OPTION GENERATION Some decision problems involve selecting from among a discrete, denumerable set of options. For such problems, option generation means developing the list from which an option is to be selected. In a portfolio problem, the options under consideration are subsets of a set of elementary options. In other problems, the option space is defined implicitly by constraints defining a feasible region of solutions in some solution space, or by operators applied to options to generate other options. Option generation and preference modeling support each other. An explicit focus on values and fundamental objectives helps to spur creative generation of new options for meeting fundamental objectives, avoiding a narrow focus on a few salient options. Attention to fundamental objectives helps to mitigate the tendency to underweight or ignore options that meet fundamental objectives but score poorly on salient means objectives. Comparing options to see how they differ can help to identify missed objectives. Examining objectives on which an otherwise good option scores poorly can help to generate ideas for modifying the option to address its shortcomings. UNCERTAINTY MODELING Uncertainty over consequences is measured by a probability distribution that quantifies the decision maker’s degree of belief in different consequences. Decision analysis embraces the subjectivist view of probability, in which probability measures degrees of belief in propositions about which the decision maker is uncertain. Uncertainty modeling is useful because it provides a structured and theoretically sound approach to sort through the implications of different contingencies, organize information about their impact on the decision, and summarize them to arrive at an overall evaluation of an option. To build an uncertainty model, the decision maker first identifies the key uncertain contingencies that affect the decision. A qualitative assessment is made about how much the uncertainties affect the choice of which option is preferred. Uncertainties that matter are selected for further modeling. Uncertain contingencies are modeled as random variables. A joint probability distribution is defined to express the decision maker’s beliefs about the uncertain contingencies. As with preference modeling, a structured elicitation process is used to specify the uncertainty model. Subjectivist Bayesian theory provides a sound methodology for integrating empirical data with informed expert judgment to form a model that accurately reflects available knowledge about the uncertain contingencies. The uncertainty model may include contingencies whose outcome is not known at the time the model is constructed but will be known at the time the choice is made.
The model is specified so that the recommended decision policy may be contingent on these outcomes. Suppose X is an uncertain contingency affecting value to the decision maker and Y is a related observable contingency whose outcome depends probabilistically on X. If Y becomes known before the decision is made, then the optimal decision is based on the conditional distribution of X given Y, which is computed from the prior distribution using Bayes rule:

P(X|Y) = P(Y|X) P(X) / P(Y)    (2)
The optimal policy may depend on which outcome occurs for Y. MAXIMIZATION OF SUBJECTIVE EXPECTED UTILITY Once preference and uncertainty models have been specified, the model is solved for the optimal policy, which is the policy that maximizes the expected value of the utility function, where the expectation is taken over the uncertain contingencies. The principle of maximizing subjective expected utility can be derived mathematically from various systems of axioms reflecting principles of rational choice under uncertainty. Axiomatic justifications of expected utility maximization are compelling because of the guarantee that a decision theoretically sound model is internally consistent and the chosen policy is optimal given the modeling assumptions. From an engineering perspective, theoretical soundness is attractive but not sufficient. More important is the judgment of experienced practitioners that the structured decomposition, analysis, and synthesis process improves decision making by providing a framework for organizing, analyzing, and integrating the many factors involved in complex decisions. DECISION TREES AND INFLUENCE DIAGRAMS Two commonly applied visual and analytic tools for constructing a decision model are the decision tree and the influence diagram. Figures 1 and 2 show an influence dia-
gram and a decision tree for a decision of whether to satisfy a client's requirement with an off-the-shelf solution or to design and build a custom solution. The two tools show complementary views of the same problem.

Figure 1. Influence Diagram for Build or Buy Decision.

Figure 2. Decision Tree for Build or Buy Decision.

The influence diagram displays independence relationships between uncertain contingencies and the decomposition of utility into attributes of value. For example, Figure 1 shows a decomposition of value into cost and performance. Labor costs are modeled as independent of technical risk. The decision tree shows how the contingencies affect the decision options. For example, Figure 2 shows that labor cost and technical risk affect the decision maker's value for the build option but not for the buy option. Decision trees serve both as graphical aids for specifying a model and as computational architectures for solving the model. The decision tree of Figure 2 is shown in schematic form. A full decision tree would have ten branches: a branch for each combination of values of cost and performance under the build option, and another branch for the buy option. A number of commercial software packages exist for specifying and solving decision models using decision trees and influence diagrams.

VALUE OF INFORMATION

Decision analysis provides a sound basis for evaluating options for collecting information to resolve uncertainty. When the optimal decision differs for different outcomes of an uncertain contingency, resolving the uncertainty before the decision is made may increase expected value to the decision maker. The expected value of perfect information (EVPI) is the increase in expected utility if costless information is provided prior to the decision that completely resolves the uncertainty in question. The EVPI determines an upper bound on the price the decision maker would be willing to pay for information. The expected value of sample information (EVSI) is the difference in expected utility if a realistically achievable information collection option is implemented.

SENSITIVITY ANALYSIS AND MODEL CRITICISM

It is a key tenet of decision analysis that the value of modeling derives as much from the insight the decision maker gains into the problem as from the answer provided by the model. The modeling exercise helps the decision maker
to ensure that all relevant factors have been considered, to integrate all available information in a consistent and sound way, to reflect on how to trade off different components of value, and to justify the decision to him/herself and others. An important support to this process is sensitivity analysis, in which inputs to the model are systematically varied to observe the impact on the model’s recommendations. Sensitivity analysis and the structured modeling process help the decision maker to understand the reasons for the model’s recommendations and to adjust the model to make sure it incorporates all important concerns. While the model and the modeling process provide inputs to the decision, the ultimate responsibility for the decision lies with the decision maker.
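To make the expected-utility calculation, the value of perfect information, and the role of sensitivity analysis concrete, the sketch below scores a build-versus-buy choice like the one in Figures 1 and 2 with the additive utility model of Eq. (1) and sweeps the probability of technical trouble to see where the recommendation flips. All weights, utilities, and probabilities here are hypothetical placeholders chosen for illustration, not values from the article.

```python
# Additive multiattribute utility, as in Eq. (1): u(x) = sum_i w_i u_i(x_i)
def utility(single_attribute_utilities, weights):
    return sum(w * u for w, u in zip(weights, single_attribute_utilities))

WEIGHTS = [0.6, 0.4]                       # hypothetical swing weights: cost, performance

# Hypothetical (cost utility, performance utility) consequences for each option
BUY = utility([0.7, 0.5], WEIGHTS)         # known off-the-shelf outcome
BUILD_GOOD = utility([0.8, 0.9], WEIGHTS)  # custom build succeeds
BUILD_BAD = utility([0.2, 0.6], WEIGHTS)   # custom build hits technical trouble

def expected_utility_build(p_trouble):
    return p_trouble * BUILD_BAD + (1.0 - p_trouble) * BUILD_GOOD

# Sensitivity analysis: vary the probability of technical trouble and
# observe where the preferred option changes.
for p in [i / 10 for i in range(11)]:
    eu_build = expected_utility_build(p)
    best = "build" if eu_build > BUY else "buy"
    print(f"p(trouble)={p:.1f}  EU(build)={eu_build:.2f}  EU(buy)={BUY:.2f}  -> {best}")

# Expected value of perfect information at a nominal p(trouble) = 0.3:
# with perfect foreknowledge the decision maker picks the better option in each case.
p = 0.3
eu_without_info = max(expected_utility_build(p), BUY)
eu_with_info = p * max(BUILD_BAD, BUY) + (1.0 - p) * max(BUILD_GOOD, BUY)
evpi = eu_with_info - eu_without_info
```

Under these made-up numbers the preferred option switches from build to buy once the probability of trouble exceeds roughly 0.46, which is exactly the kind of threshold a sensitivity analysis is meant to expose, and the EVPI at p = 0.3 works out to about 0.08 utility units.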
ADDITIONAL ISSUES IN DECISION ANALYSIS The prototypical application of decision analysis is a situation in which a trained decision analyst works with a single decision maker to build a model for a major, onetime decision problem. As training in decision analysis becomes more widespread and as software tools become more accessible, decision analysis can be practiced without the intervention of the analyst, and becomes cost-effective for more routine decision problems. An active area of research is the development of technology for specifying reusable template models and model components for commonly recurring problem types. While subjective expected utility theory is a single-actor theory of optimal decision-making, decision analysis is often applied in situations involving more than one actor. Structured elicitation methods have been developed for eliciting a consensus decision model from a group of stakeholders. Ideas and methods from decision analysis have been applied in the field of artificial intelligence to develop inference, prediction, diagnosis, and planning systems.
BIBLIOGRAPHY

Bunn, D. (1984) Applied Decision Analysis. New York: McGraw-Hill.
Clemen, R. T. (1997) Making Hard Decisions: An Introduction to Decision Analysis, 2nd edition. Pacific Grove, CA: Duxbury Press.
Howard, R. A. (1988) Decision analysis: practice and promise. Management Science 34, 679–685.
Keeney, R. (1992) Value-Focused Thinking: A Path to Creative Decision Making. Cambridge, MA: Harvard University Press.
Keeney, R. and Raiffa, H. (1976) Decisions with Multiple Objectives. New York: Wiley.
Morgan, G. and Henrion, M. (1990) Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press.
Plous, S. (1993) The Psychology of Judgment and Decision Making. New York: McGraw-Hill.
Raiffa, H. (1968) Decision Analysis. Reading, MA: Addison-Wesley.
Raiffa, H. and Schlaifer, R. (1961) Applied Statistical Decision Theory. Cambridge, MA: Harvard University Press.
von Winterfeldt, D. and Edwards, W. (1986) Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press.
Watson, S. R. and Buede, D. M. (1987) Decision Synthesis: The Principles and Practice of Decision Analysis. Cambridge: Cambridge University Press.
KATHRYN B. LASKEY George Mason University, Fairfax, VA
Wiley Encyclopedia of Electrical and Electronics Engineering
Human Centered Design, Standard Article
William B. Rouse, Enterprise Support Systems, Norcross, GA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7118
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Design Objectives; Design Issues; Design Methodology; Naturalist Phase; Marketing Phase; Engineering Phase; Sales and Service Phase; Conclusions.
Keywords: design methods; design for usability; human factors; human interaction; measurement; product planning; stakeholders; system design; system development; systems engineering
HUMAN CENTERED DESIGN This article is concerned with designing products and systems using a methodology called human-centered design (1,2). Human-centered design is a process of ensuring that the concerns, values, and perceptions of all stakeholders in a design effort are considered and balanced. Stakeholders include users, customers, evaluators, regulators, service personnel, and so on. Human-centered design can be contrasted with user-centered design (3,4,5,6). The user is a very important stakeholder in design, often the primary stakeholder. However, the success of a product or system is usually strongly influenced by other players in the process of design, development, fielding, and ongoing use of products and systems. Human-centered design is concerned with the full range of stakeholders. Considering and balancing the concerns, values, and perceptions of such a broad range of people presents difficult challenges. Ad hoc approaches do not consistently work—too much drops through the cracks. A systematic framework, which is comprehensive but also relatively easy to employ, is necessary for human-centered design to be practical. This article presents such a framework.
Design Objectives

There are three primary objectives within human-centered design. These objectives should drive much of designers' thinking, particularly in the earlier stages of design. Discussions in later sections illustrate the substantial impact of focusing on these three objectives. The first objective of human-centered design is that it should enhance human abilities. This dictates that humans' abilities in the roles of interest be identified, understood, and cultivated. For example, people tend to have excellent pattern recognition abilities. Design should take advantage of these abilities, for instance, by using displays of information that enable users to respond on a pattern recognition basis rather than requiring more analytical evaluation of the information. The second objective is that human-centered design should help overcome human limitations. This requires that limitations be identified and appropriate compensatory mechanisms be devised. A good illustration of a human limitation is the proclivity to make errors. Humans are fairly flexible information processors—an important ability—but this flexibility can lead to "innovations" that are erroneous in the sense that undesirable consequences are likely to occur. One way of dealing with this problem is to eliminate innovations, perhaps via interlocks and rigid procedures. However, this is akin to throwing out the baby with the bath water. Instead, mechanisms are needed to compensate for undesirable consequences without precluding innovations. Such mechanisms represent a human-centered approach to overcoming the human limitation of occasional erroneous performance. The third objective of human-centered design is that it should foster human acceptance. This dictates that stakeholders' preferences and concerns be explicitly considered in the design process. While users are certainly key stakeholders, there are other people who are central to the process of designing, developing, and
operating a system. For example, purchasers or customers are important stakeholders who often are not users. The interests of these stakeholders also have to be considered to foster acceptance by all the humans involved.
Design Issues

This article presents an overall framework and systematic methodology for pursuing the above three objectives of human-centered design. There are four design issues of particular concern within this framework. The first concern is formulating the right problem—making sure that system objectives and requirements are right. All too often, these issues are dealt with much too quickly. There is a natural tendency to "get on with it," which can have enormous negative consequences when requirements are later found to be inadequate or inappropriate. The second issue is designing an appropriate solution. All well-engineered solutions are not necessarily appropriate. Considering the three objectives of human-centered design, as well as the broader context within which systems typically operate, it is apparent that the excellence of the technical attributes of a design is necessary but not sufficient to ensure that the system design is appropriate and successful. Given the right problem and appropriate solution, the next concern is developing it to perform well. Performance attributes should include operability, maintainability, and supportability—that is, using it, fixing it, and supplying it. Supportability includes spare parts, fuel, and, most importantly, trained personnel. The fourth design concern is ensuring human satisfaction. Success depends on people using the system and achieving the benefits for which it was designed. However, before a system can be used, it must be purchased, usually by other stakeholders, which in turn depends on it being technically approved by yet other stakeholders. Thus, a variety of types of humans have to be satisfied.
Design Methodology

Concepts such as user-centered design, user-friendly systems, and ergonomically designed systems have been around for quite some time. Virtually everybody endorses these ideas, but very few people know what to do in order to realize the potential of these concepts. What is needed, and what this article presents, is a methodological framework within which human-centered design objectives can be systematically and naturally pursued. Design and Measurement. What do successful products and systems have in common? The fact that people buy and use them is certainly a common attribute. However, sales is not a very useful measure for designers. In particular, using lack of sales as a way to uncover poor design choices is akin to using airplane crashes as a method of identifying design flaws: This method works, but the feedback provided is a bit late. The question, therefore, is one of determining what can be measured early that is indicative of subsequent poor sales. In other words, what can be measured early to find out if the product or system is unlikely to fly? If this can be done early, it should be possible to change the characteristics of the product or system so as to avoid the predicted failure. This section focuses on the issues that must be addressed and resolved for the design of a new product or system to be successful. Seven fundamental measurement issues are discussed, and a framework for systematically addressing these issues is presented. This framework provides the structure within which the remainder of this article is organized and presented. Measurement Issues. Figure 1 presents seven measurement issues that underlie successful design (7). The "natural" ordering of these issues depends on one's perspective. From a nuts and bolts engineering point of view, one might first worry about testing (i.e., getting the system to work) and save issues such as
Fig. 1. Measurement issues that must be addressed to ensure success in design.
viability until much later. In contrast, most stakeholders are usually first concerned with viability and only worry about issues such as testing if problems emerge. A central element of human-centered design is that designers should address the issues in Fig. 1 in the same order that stakeholders address these issues. Thus, the last concern is, “Does it run?” The first concern is, “What matters?” or “What constitutes benefits and costs?” Viability. Are the benefits of system use sufficiently greater than the costs? While this question cannot be answered empirically prior to having a design, one can determine how the question is likely to be answered. How do stakeholders characterize benefits? Are they looking for speed, throughput, an easier job, or appealing surroundings? What influences their perceptions of these characteristics? How do stakeholders characterize costs? Is it simply purchase price? Or, do costs include the costs of maintenance and perhaps training? Are all the costs monetary? Acceptability. Do organizations/individuals use the system? This is also a question that cannot be answered definitively prior to having the results of design. However, one can determine in advance the factors that are likely to influence the answer. Most of these factors relate to the extent to which a product or system fits into an organization’s philosophy, technology, and so on. Validity. Does the product or system solve the problem? This, of course, leads to the question, What is the problem? How would you know if the problem was solved, or not solved? The nature of this question was discussed earlier in this article. Evaluation. Does the system meet requirements? Formulation of the design problem should result in specification of requirements that must be satisfied for a design solution to be successful. Examples include speed, accuracy, throughput, and manufacturing costs. Demonstration. How do observers react to the system? It is very useful to get the reactions of potential stakeholders long before the product or system is ready for evaluation. It is important, however, to pursue demonstration in a way that does not create a negative first impression. Verification. Is the system put together as planned? This question can be contrasted with a paraphrase of the validation question: Is the plan any good? Thus, verification is the process of determining that the system was built as intended, but does not include the process of assessing whether or not it is a good design. Testing. Does the system run, compute, and so on? This is a standard engineering question. It involves issues of (a) physical measurement and instrumentation for hardware and (b) runtime inspection and debugging tools for software.
Fig. 2. A framework for measurement—four phases of the design process and the ways in which technology affects this process.
A Framework for Measurement. The discussion thus far has served to emphasize the diversity of measurement issues from the perspectives of both designers and stakeholders. If each of these issues were pursued independently, as if they were ends in themselves, the costs of measurement would be untenable. Yet, each issue is important and should not be neglected. What is needed, therefore, is an overall approach to measurement that balances the allocation of resources among the issues of concern at each stage of design. Such an approach should also integrate intermediate measurement results in a way that provides maximal benefit to the evolution of the design product. These goals can be accomplished by viewing measurement as a process involving the four phases shown in Fig. 2. Naturalist Phase. This phase involves understanding the domains and tasks of stakeholders from the perspective of individuals, the organization, and the environment. This understanding includes not only people's activities, but also prevalent values and attitudes relative to productivity, technology, and change in general. Evaluative assessments of interest include identification of difficult and easy aspects of tasks, barriers to and potential avenues of improvement, and the relative leverage of the various stakeholders in the organization. Marketing Phase. Once one understands the domain and tasks of current and potential stakeholders, one is in a position to conceptualize alternative products or systems to support these people. Product concepts can be used for initial marketing in the sense of determining how users react to the concepts. Stakeholders' reactions are needed relative to validity, acceptability, and viability. In other words, one wants to determine whether or not people perceive a product concept as solving an important problem, solving it in an acceptable way, and solving it at a reasonable cost. Engineering Phase. One now is in a position to begin tradeoffs between desired conceptual functionality and technological reality. As indicated in Fig. 2, technology development will usually have been pursued prior to and in parallel with the naturalist and marketing phases. This will have at least partially ensured that the product concepts shown to stakeholders were not technologically or economically ridiculous. However, one now
must be very specific about how desired functionality is to be provided, what performance is possible, and the time and dollars necessary to provide it. Sales and Service Phase. As this phase begins, the product should have successfully been tested, verified, demonstrated, and evaluated. From a measurement point of view, the focus is now on validity, acceptability, and viability. It is also at this point that one ensures that implementation conditions are consistent with the assumptions underlying the design basis of the product or system. The Role of Technology. It is important to note the role of technology in the human-centered design process. As depicted in Fig. 2, technology is pursued in parallel with the four phases of the design process. In fact, technology feasibility, development, and refinement usually consume the lion’s share of the resources in a product or system design effort. However, technology should not drive the design process. Human-centered design objectives should drive the process, and technology should support these objectives. Organization for Measurement. Table 1 illustrates how the seven measurement issues should be organized, or sequenced, in the four phases. Framing an issue denotes the process of determining what an issue means within a particular context and defining the variables to be measured. Planning is concerned with devising a sequence of steps and schedule for making measurements. Refining involves using initial results to modify the plan, or perhaps even rethink issues and variables. Finally, completing is the process of making outcome measurements and interpreting results. Table 1 provides a useful context in which to discuss typical measurement problems. There are two classes of problems of interest. The first class is planning too late where, for example, failure to plan for assessing acceptance can preclude measurement prior to putting a product into use. The second class of problems is executing too early where, for instance, demonstrations are executed prior to resolving test and verification issues and potentially lead to negative initial impressions of a product or system.
Naturalist Phase The purpose of the naturalist phase is gaining an understanding of stakeholders’ domains and tasks. This includes assessing the roles of individuals, their organizations, and the environment. Also of interest is identifying barriers to change and avenues for change.
The result of the naturalist phase is a formal description of stakeholders, their tasks, and their needs. This description can take many forms, ranging from text to graphics and ranging from straightforward descriptions to theories and hypotheses regarding stakeholders' behaviors. This section elaborates and illustrates the process of developing descriptions of stakeholders, tasks, and needs. The descriptions resulting from the naturalist phase are the starting point for the marketing phase. Identifying Stakeholders. Who are the stakeholders? This is the central question with which a human-centered design effort should be initiated. The answer to this question is not sufficient for success. However, the answer to this question is certainly necessary. Stakeholder Populations. The key issue is identifying a set of people whose tasks, abilities, limitations, attitudes, and values are representative of the total population of interest. It is often necessary to sample multiple organizations to gain this understanding of the overall population. An exception to the guideline occurs when the total population of stakeholders resides in a single organization. Designers as Stakeholder Surrogates. Rather than explicitly identifying stakeholders, it is common for designers to think, perhaps only tacitly, that they understand stakeholders and, therefore, can act as their surrogates. To the extent that designers are former stakeholders, this approach has some merit. However, it is inherently limited from capturing the abilities, attitudes, and aspirations of current or potential stakeholders, as well as the current or potential impact of their organizations. Elusive Stakeholders. It is often argued, particularly for advanced technology efforts, that the eventual stakeholders for the product or system of interest do not yet exist—there are no incumbent stakeholders. This is very seldom true because there are actually extremely few products and systems that are designed "from scratch." Even, for example, when designing the initial spacecraft, much was drawn from previous experiences in aircraft and submarines. Methods and Tools for Measurement. How does one identify stakeholders and, in particular, how does one determine their needs, preferences, values, and so on? Observation is, of course, the necessary means. Initially, unstructured direct observations may be appropriate. Eventually, however, more formal means should be employed to ensure unbiased, convincing results. Table 2 lists the methods and tools appropriate for answering these types of questions. Magazines and Newspapers. To gain an initial perspective on what is important to a particular class of stakeholders or a particular industry, one should read what they read. Trade magazines and industry
newspapers publish what interests their readers. One can capitalize on publishers' insights and knowledge by studying articles for issues and concerns. For example, is cost or performance more important? Is risk assessment, or equivalent, mentioned frequently? One should pay particular attention to advertisements, because advertisers invest heavily in trying to understand customers' needs, worries, and preferences. One can capitalize on advertisers' investments by studying the underlying messages and appeal in advertisements. It is useful to create a file of articles, advertisements, brochures, catalogs, and so on, that appear to characterize the stakeholders of interest. The contents of this file can be slowly accumulated over a period of many months before it is needed. This accumulation might be initiated in light of long-term plans to move in new directions. When these long-term plans become short-term plans, this file can be accessed, the various items juxtaposed, and an initial impression formed relatively quickly. Databases. There are many relatively inexpensive sources of information about stakeholders available via online databases. With these sources, a wide variety of questions can be answered. How large is the population of stakeholders? How are they distributed, organizationally and geographically? What is the size of their incomes? How do they spend it? Such databases are also likely to have information on the companies whose advertisements were identified in magazines and newspapers. What are their sales and profits? What are the patterns of growth? By pursuing these questions, one may be able to find characteristics of the advertisements of interest that discriminate good versus poor sales growth and profits. Such characteristics might include leading-edge technology, low cost, and/or good service. Questionnaires. Once magazines, newspapers, and databases are exhausted as sources of information, attention should shift to seeking more specific and targeted information. An inexpensive approach is to mail, or otherwise distribute, questionnaires to potential stakeholders to assess how they spend their time, what they perceive as their needs and preferences, and what factors influence their decisions. Questions should be brief, have easily understandable responses, and be straightforward to answer. Multiple choice questions or answers in terms of rating scales are much easier to answer than open-ended, essay-like questions, even though the latter may provide more information. Low return rate can be a problem with questionnaires. Incentives can help. For example, those who respond can be promised a complete set of the results. In one effort, an excellent response rate was obtained when a few randomly selected respondents were given tickets to Disney World. Results with questionnaires can sometimes be frustrating. Not infrequently, analysis of the results leads to new questions which one wishes had been on the original questionnaire. These new questions can, however, provide the basis of a follow-up agenda. Interviews. Talking with stakeholders directly is a rich source of information. This can be accomplished via telephone, but face-to-face is much better. The use of two interviewers can be invaluable to enable one person to maintain eye contact and the other to take notes. The use of two interviewers also later provides two interpretations of what was said. Usually, interviewees thoroughly enjoy talking about their jobs and what types of products and systems would be useful.
Often, one is surprised by the degree of candor people exhibit. Consequently, interviewees usually do not like their comments tape-recorded. It is helpful if interviewees have first filled out questionnaires, which can provide structure to the interview as they explain and discuss their answers. Questionnaires also ensure that they will have thought about the issues of concern prior to the interview. In the absence of a prior questionnaire, the interview should be carefully structured to avoid unproductive tangents. This structure should be explained to interviewees prior to beginning the interview.

Experts. People with specialized expertise in the domain of interest, the technology, and/or the market niche can be quite valuable. People who were formerly stakeholders within the population of interest tend to
be particularly useful. These people can be accessed via the Internet or informal telephone calls (which are surprisingly successful), gathered together in invited workshops, and/or hired as consultants. While experts’ knowledge can be essential, it is very important that the limitations of experts be realized. Contrary to the demeanor of many experts, very few experts know everything! Listen and filter carefully. Furthermore, it is very unlikely that one expert can cover a wide range of needs. Consider multiple experts. This is not due to a need to get a good average opinion. It is due to the necessity to cover multiple domains of knowledge. Summary. The success of all of the above methods and tools depends on one particular ability of designers—the ability to listen. During the naturalist phase, the goal is understanding stakeholders rather than convincing them of the merits of particular ideas or the cleverness of the designers. Designers will get plenty of time to talk and expound in later phases of the design process. At this point, however, success depends on listening.
Marketing Phase

The purpose of the marketing phase is to introduce product concepts to potential customers, users, and other stakeholders. In addition, the purpose of this phase includes planning for measurements of viability, acceptability, and validity. Furthermore, initial measurements should be made to test plans, as opposed to the product, to uncover any problems before proceeding. It is important to keep in mind that the product and system concepts developed in this phase are primarily for the purpose of addressing viability, acceptability, and validity. Beyond that which is sufficient to serve this purpose, minimal engineering effort should be invested in these concepts. Beyond preserving resources, this minimalist approach avoids, or at least lessens, “ego investments” in concepts prior to knowing whether or not the concepts will be perceived to be viable, acceptable, and valid. These types of problems can also be avoided by pursuing more than one product concept. Potential stakeholders can be asked to react to these multiple concepts in terms of whether or not each product concept is perceived as solving an important problem, solving it in an acceptable way, and solving it at a reasonable cost. Each person queried can react to all concepts, or the population of potential stakeholders can be partitioned into multiple groups, with each group reacting to only one concept. The marketing phase results in an assessment of the relative merits of the multiple product concepts that have emerged up to this point. Also derived is a preview of any particular difficulties that are likely to later emerge. Concepts can be modified, both technically and in terms of presentation and packaging, to decrease the likelihood of these problems.

Methods and Tools for Measurement. How does one measure the perceptions of stakeholders relative to the viability, acceptability, and validity of alternative product and system concepts? Table 3 lists the appropriate methods and tools for answering this question, as well as their advantages and disadvantages.

Questionnaires. This method can be used to obtain the reactions of a large number of stakeholders to alternative functions and features of a product or system concept. Typically, people are asked to rate the desirability and perceived feasibility of functions and features using, for example, scales of 1 to 10. Alternatively, people can be asked to rank order functions and features. As noted when questionnaires were discussed earlier, low return rate can be a problem. Furthermore, one typically cannot have respondents clarify their answers, unless telephone or in-person follow-ups are pursued. This tends to be quite difficult when the sample population is large. Questionnaires can present problems if they are the only methods employed in the marketing phase. The difficulty is that responses may not discriminate among functions and features. For example, respondents may rate as 10 the desirability of all functions and features.
This sounds great—one has discovered exactly what people want! However, an alternative interpretation is that the alternatives were not sufficiently understood for people to perceive different levels of desirability among them. Asking people to rank order items can eliminate this problem, at least on the surface (a small numerical illustration of this point follows the discussion of scenarios below). However, questionnaires usually are not sufficiently rich to provide people with real feelings for the functionality of the product or system.

Interviews. Interviews are a good way to follow up questionnaires, perhaps for a subset of the population sampled if the sample was large. As noted earlier, questionnaires are a good precursor to interviews in that they cause interviewees to have organized their thoughts prior to the interviews. In-person interviews are more useful than telephone interviews because it is much easier to iteratively uncover perceptions and preferences during face-to-face interaction. Interviews are a good means for determining people’s a priori perceptions of the functionality envisioned for the product or system. It is useful to assess these a priori perceptions independently of the perceptions that one may subsequently attempt to create. This assessment is important because it can provide an early warning of any natural tendencies of potential stakeholders to perceive things in ways other than intended in the new product or system. If problems are apparent, one may decide to change the presentation or packaging of the product to avoid misperceptions.

Scenarios. At some point, one has to move beyond the list of words and phrases that describe the functions and features envisioned for the product or system. An interesting way to move in this direction is by using stories or scenarios that embody the functionality of interest and depict how these functions might be utilized. These stories and scenarios can be accompanied by a questionnaire within which respondents are asked to rate the realism of the depiction. Furthermore, they can be asked to explicitly consider, and perhaps rate, the validity, acceptability, and viability of the product functionality illustrated. It is not necessary, however, to explicitly use the words “validity,” “acceptability,” and “viability” in the questionnaire. Words should be chosen that are appropriate for the domain being studied; for example, viability may be an issue of cost in some domains and not in others. It is very useful to follow up these questionnaires with interviews to clarify respondents’ comments and ratings. Often the explanations and clarifications are more interesting and valuable than the ratings.
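The discrimination problem with rating scales can be seen in a tiny numerical example. In the Python sketch below the feature names and the responses are invented for illustration; the only point is that ceiling-level ratings from three respondents yield identical means, while forced rank orders from the same respondents still separate the alternatives.

# Hypothetical responses from three stakeholders to three candidate features.
# Ratings saturate near the top of the 1-10 scale, so their means cannot
# discriminate; forced rank orders (1 = most desired) still reveal an ordering.
ratings = {
    "auto-routing":  [10, 10, 9],
    "voice control": [10, 9, 10],
    "trip logging":  [9, 10, 10],
}
rankings = {
    "auto-routing":  [1, 1, 2],
    "voice control": [2, 3, 1],
    "trip logging":  [3, 2, 3],
}

def mean(xs):
    return sum(xs) / len(xs)

for feature in ratings:
    print(f"{feature:13s}  mean rating = {mean(ratings[feature]):.2f}  "
          f"mean rank = {mean(rankings[feature]):.2f}")

All three mean ratings come out equal (9.67), whereas the mean ranks (1.33, 2.00, 2.67) still discriminate, which is precisely the benefit, and the limitation, of rank ordering noted above.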
Mock-Ups. Mock-ups are particularly useful when the form and appearance of a product or system are central to stakeholders’ perceptions. For products such as automobiles and furniture, form and appearance are obviously central. However, mock-ups can also be useful for products and systems where appearance does not seem to be crucial. For example, computer-based systems obviously tend to look quite similar. The only degree of freedom is what is on the display. One can exploit this degree of freedom by producing mock-ups of displays using photographs or even viewgraphs for use with an overhead projector. One word of caution, however. Even such low-budget presentations can produce lasting impressions. One should make sure that the impression created is such that one wants it to last. Otherwise, as noted earlier, one may not get an opportunity to make a second impression.

Prototypes. Prototypes are a very popular approach and, depending on the level of functionality provided, can give stakeholders hands-on experience with the product or system. For computer-based products, rapid prototyping methods and tools have become quite popular because these methods and tools enable the creation of a functioning prototype in a matter of hours. Thus, prototyping has two important advantages: prototypes can be created rapidly, and they enable hands-on interaction. With these advantages, however, come two important disadvantages. One disadvantage is the tendency to produce ad hoc prototypes, typically with the motivation of having something to show stakeholders. It is very important that the purpose of the prototype be kept in mind. It is a device with which to obtain initial measurements of validity, acceptability, and viability. Thus, one should make sure that the functions and features depicted are those for which these measurements are needed. One should not, therefore, put something on a display simply because it is intuitively appealing. This can be a difficult impulse to avoid. The second disadvantage is the tendency to become attached to one’s prototypes. At first, a prototype is merely a device for measurement, to be discarded after the appropriate measurements are made. However, once the prototype is operational, there is a tendency for people, including the creators of the prototype, to begin to think that the prototype is actually very close to what the final product or system should be like. In such situations, it is common to hear someone say, “Maybe with just a few small changes here and there . . .” Prototypes can be very important. However, one must keep their purpose in mind and avoid “rabid” prototyping! Also, care must be taken to avoid premature ego investments in prototypes. The framework for design presented in this article can provide the means for avoiding these pitfalls.

Summary. During the naturalist phase, the goal was to listen. In the marketing phase, one can move beyond just listening. Various methods and tools can be used to (a) test hypotheses that emerged from the naturalist phase and (b) obtain potential stakeholders’ reactions to initial product and system concepts. Beyond presenting hypotheses and concepts, one also obtains initial measurements of validity, acceptability, and viability. These measurements are in terms of quantitative ratings and rankings of functions and features, as well as more free-flowing comments and dialogue.

Engineering Phase

The purpose of the engineering phase is to develop a final design of the product or system.
Much of the effort in this phase involves using various design methods and tools in the process of evolving a conceptual design into a final design. In addition to synthesis of a final design, planning and execution of measurements associated with evaluation, demonstration, verification, and testing are pursued.

Four-Step Process. In this section a four-step process for directing the activities of the engineering phase and documenting the results of these activities is discussed. The essence of this process is a structured approach to producing a series of design documents. Beyond the value of this approach to creating a human-centered design, documentation produced in this manner can be particularly valuable for tracing back from
design decisions to the requirements and objectives that motivated the decisions. For example, suggested design changes are much easier to evaluate and integrate into an existing design when one can efficiently determine why the existing design is as it is. It is important to note that the results of the naturalist and marketing phases should provide a strong head start on this documentation process. In particular, much of the objectives document can be based on the results of these phases. Furthermore, and equally important, the naturalist and marketing phases will have identified the stakeholders in the design effort and are likely to have initiated relationships with many of them.

Objectives Document. The first step in the process is developing the Objectives Document. This document contains three attributes of the product or system to be designed: goals, functions, and objectives. Goals are characteristics of the product or system that designers, users, and customers would like the product or system to have. Goals are often philosophical choices, frequently very qualitative in nature. There are usually multiple ways of achieving goals. Goals are particularly useful for providing guidance for later choices. Functions define what the product or system should do, but not how it should be done. Consequently, there are usually multiple ways to provide each function. The definition of functions subsequently leads to analysis of objectives. Objectives define the activities that must be accomplished by the product or system in order to provide functions. Each function has at least one, and often five to ten, objectives associated with it. Objectives are typically phrased as imperative sentences beginning with a verb.

There are two purposes for writing a formal document listing goals, functions, and objectives. First, as noted earlier, written documents provide an audit trail from initial analyses to the “as-built” product or system. The Objectives Document provides the foundation for all subsequent documents in the audit trail for the engineering phase. The second purpose of the Objectives Document is that it provides the framework—in fact, the outline—for the Requirements Document.

All stakeholders should be involved in the development of the Objectives Document. This includes at least one representative from each type of stakeholder group. This is important because this document defines what the eventual product or system will and will not do. All subsequent development assumes that the functions and objectives in the Objectives Document form a necessary and complete set. The contents of the Objectives Document can be based on interviews with subject-matter experts, including operators, maintainers, managers, and trainers. Baseline and analogous systems can also be valuable, particularly for determining objectives that have proven to be necessary for providing specific functions. Much of the needed information will have emerged from the marketing phase. At the very least, one should have learned from the marketing phase what questions to ask and who to ask. All the stakeholders in the process should have been identified and their views and preferences assessed. The level of detail in the Objectives Document should be such that generality is emphasized and specifics are avoided. The activities and resulting document should concentrate on what is desirable. Discussion of constraints should be delayed—budgets, schedules, people, and technology can be considered later.

Requirements Document.
Once all the stakeholders agree that the Objectives Document accurately describes the desired functions and objectives for the product or system, the next step is to develop the Requirements Document. The purpose of this document is to identify all information and control requirements associated with each objective in the Objectives Document. For evolutionary designs, baseline and analogous systems can be studied to determine requirements. However, if the product or system being designed has no antecedent, subject-matter expertise can be very difficult to find. In this case, answers to these requirements questions have to come from engineering analysis and, if necessary, be validated empirically. The Requirements Document should be reviewed and approved by all stakeholders in the design effort. This approval should occur prior to beginning development of the conceptual design. This document can also be very useful for determining the functional significance of future design changes. In fact, the Requirements
Document is often used to answer downstream questions that arise concerning why particular system features exist at all.

Conceptual Design Document. The conceptual design of a product or system should accommodate all information and control requirements as parsimoniously as feasible within the state of the art. The conceptual design, as embodied in the Conceptual Design Document, is the first step in defining how the final system will meet the requirements of the Requirements Document. The Conceptual Design Document should describe a complete, workable system that meets all design objectives. Realistically, one should expect considerable disagreement as the conceptual design evolves. However, the Conceptual Design Document should not reflect these disagreements. Instead, this document should be iteratively revised until a consensus is reached. At that point, all stakeholders should agree that the resulting conceptual design is a desirable and appropriate product or system.

Detailed Design Document. The fourth and final step in the design process involves synthesizing a detailed design. Associated with the detailed design is the Detailed Design Document. This document describes the “production” version of the product or system, including block diagrams, engineering drawings, parts lists, and manufacturing processes. The Detailed Design Document links elements of the detailed design to the functionality within the Conceptual Design Document, which is in turn linked to the information and control requirements in the Requirements Document, which are in turn linked to the objectives within the Objectives Document. These linkages provide powerful means for efficiently revising the design when, as is quite often the case, one or more stakeholders in the design process do not like the implications of their earlier choices. With the audit trail provided by the four-step design process, evaluating and integrating changes are much more straightforward. As a result, good changes are readily and appropriately incorporated, and bad changes are expeditiously rejected.

Summary. In this section the engineering phase has been described in terms of a documentation process, including the relationships among documents. Obviously, much of the engineering phase concerns creating the contents of these documents. Many of the other articles in this encyclopedia provide detailed guidance on these engineering activities.
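The linkage structure described above lends itself to a very simple representation. The Python sketch below uses invented names and records only the back-pointers needed to trace a design element to the requirement and objective that motivated it; an actual documentation system would, of course, carry far more content per record (rationale, approvals, revision history).

from dataclasses import dataclass, field

@dataclass
class Objective:
    text: str              # imperative statement from the Objectives Document
    function: str          # the function this objective helps provide

@dataclass
class Requirement:
    text: str              # information or control requirement
    objective: Objective   # link back into the Objectives Document

@dataclass
class DesignElement:
    name: str
    requirements: list = field(default_factory=list)  # links into the Requirements Document

# A tiny audit trail: objective -> requirement -> design element (names invented).
obj = Objective("Display predicted arrival time", function="Trip planning")
req = Requirement("Provide current position and route data to the display", obj)
elem = DesignElement("Moving-map display module", [req])

# Tracing back from a proposed change to the motivating objective:
for r in elem.requirements:
    print(f"{elem.name} <- {r.text} <- {r.objective.text} [{r.objective.function}]")

Answering "why does this feature exist?" then amounts to following these links upward, which is exactly the downstream use of the Requirements Document described above.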
Sales and Service Phase

Initiation of the sales and service phase signals the accomplishment of several important objectives. The product or system will have been successfully tested, verified, demonstrated, and evaluated. In addition, the issues of viability, acceptability, and validity will have been framed, measurements planned, and initial measurements executed. These initial measurements, beyond the framing and planning, will have exerted a strong influence on the nature of the product or system.

Sales and Service Issues. In this phase, one is in a position to gain closure on viability, acceptability, and validity. One can make the measurements necessary for determining if the product or system really solves the problem that motivated the design effort, solves it in an acceptable way, and provides benefits that are greater than the costs of acquisition and use. This is accomplished using the measurement plan that was framed in the naturalist phase, developed in the marketing phase, and refined in the engineering phase. These measurements should be performed even if the product or system is “pre-sold”—for example, when a design effort is the consequence of a winning proposal. In this case, even though the “purchase” is ensured, one should pursue closure on viability, acceptability, and validity in order to gain future projects. There are several other activities in this phase beyond measurement. One should ensure that the implementation conditions for the product or system are consistent with the assumed conditions upon which the design is based. This is also the point at which the later steps of stakeholder acceptance plans are executed,
typically with a broader set of people than those who participated in the early steps of the plan. This phase also often involves technology-transition considerations in general. The sales and service phase is also where problems are identified and remediated. To the greatest extent possible, designers should work with stakeholders to understand the nature of problems and alternative solutions. Some problems may provide new opportunities rather than indicating shortcomings of the current product or system. It is important to recognize when problems go beyond the scope of the original design effort. The emphasis then becomes one of identifying mechanisms for defining and initiating new design efforts to address these problems. The sales and service phase also provides an excellent means for maintaining relationships. One can keep track of changes in stakeholders that occur because of promotions, retirements, resignations, and reorganizations. Furthermore, one can lay the groundwork and make initial progress on the naturalist phase, and perhaps the marketing phase, for the next project, product, or system.

Methods and Tools for Measurement. How does one make the final assessments of viability, acceptability, and validity? Furthermore, how does one recognize new opportunities? Unstructured direct observation can provide important information. However, more formal methods are likely to yield more definitive results and insights. Table 4 lists the methods and tools appropriate for answering these types of questions.

Sales Reports. Sales are an excellent measure of success and a good indicator of high viability, acceptability, and validity. However, sales reports are a poor way of discovering a major design inadequacy. Furthermore, when a major problem is detected in this manner, it is quite likely that one may not know what the problem is or why it occurred.

Service Reports. Service reports can be designed, and service personnel trained, to provide much more than simply a record of service activities. Additional information of interest concerns the specific nature of problems, their likely causes, and how stakeholders perceive and react to the problems. Stakeholders’ suggestions for how to avoid or solve the problems can also be invaluable. Individuals’ names, addresses, and telephone numbers can also be recorded so that they subsequently can be contacted.

Questionnaires. Questionnaires can be quite useful for discovering problems that are not sufficient to prompt service calls. They also can be useful for uncovering problems with the service itself. If a record is maintained of all stakeholders, this population can regularly be sampled and queried regarding problems, as well as ideas for solutions, product upgrades, and so on. As noted before, however, a primary disadvantage of questionnaires is the typical low return rate.

Interviews. Interviews can be a rich source of information. Stakeholders can be queried in depth regarding their experiences with the product or system, what they would like to see changed, and new products and
systems they would like. This can also be an opportunity to learn how their organizations make purchasing decisions, both in terms of decision criteria and budget cycles. While sales representatives and service personnel can potentially perform interviews, there is great value in having designers venture out to the sites where their products and systems are used. Such sorties should have clear measurement goals, questions to be answered, an interview protocol, and so on, much in the way that is described in earlier sections. Summary. The sales and service phase brings the measurement process full circle. An important aspect of this phase is using the above tools and methods to initiate the next iteration of naturalist and marketing phases. To this end, as was emphasized earlier, a primary prerequisite at this point is the ability to listen.
Conclusions

This article has presented a framework for human-centered design. Use of this framework will ensure a successful product or system in terms of viability, acceptability, validity, and so on. In this way, human-centered design provides the basis for translating technology opportunities into market innovations.
WILLIAM B. ROUSE Enterprise Support Systems
Wiley Encyclopedia of Electrical and Electronics Engineering
Human Machine Systems, Standard Article
Thomas B. Sheridan, Massachusetts Institute of Technology, Cambridge, MA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7105
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Problems of Human-Machine Systems; Human Control: Emerging New Roles; Examples of Salient Human Machine Systems That Are Undergoing Active Change; Methods of Human-Machine Systems Engineering; Workload and Error; Social Implications of Modern Human-Machine Interaction.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
HUMAN MACHINE SYSTEMS

The phrase human-machine system refers to a technological system operated by or otherwise interacting with one or more human beings. Interpreted broadly, this includes practically all human interactions with the physical environment. Interpreted narrowly, it usually refers to humans operating vehicles such as aircraft, automobiles, trains, or ships; humans operating computers in home or business settings; and humans operating industrial machinery or other complex equipment in factories, stores, hospitals, homes, etc. More than anything, the term refers to a system point of view regarding human interaction with the physical environment, wherein specific variables of information or energy can be identified in a two-way, cause-effect relation between human and machine, as illustrated in Fig. 1. Specifically, a set of signals from the machine is observed by human senses, and another set of signals from the human muscles affects the machine in defined ways.
Problems of Human-Machine Systems Usually, the problems of interest to human-machine systems engineers have to do with meeting goals of system performance and safety. Performance includes communication, decision-making, feedback control, reliability, and the negatives of error and cost. Safety means minimization of death and injury to personnel and minimization of property damage, and includes consideration of both probability and consequences of accidents. This, of course, includes the design and operation of systems to achieve desired performance and safety. The human-machine systems engineer, therefore, assumes responsibility for design of displays and controls for the human operator, design of procedures for normal and emergency operation and for maintenance and repair, and design of instructions and training programs. These days, the computer figures in some form into most sophisticated human-machine systems, even though the purpose of the system is not operation of the computer per se but rather the operation of some vehicle or machine or industrial process in real time. Computers acquire information from sensors and data bases when needed, generate displays for the human senses, render advice for human consideration and decision, transform human commands into machine actions, and store data. If requested, computers also close low level automatic decision and control loops under human supervision. For these reasons, the human-machine system engineer must be familiar with the use and abuse of computer hardware and software.
Human Control: Emerging New Roles

Control Theory Applied to Direct Manual Control. During the 1940s, aircraft designers needed to characterize the transfer function of the human pilot mathematically. This is necessary for any vehicle or controlled physical process for which the human is the controller (see Fig. 2). Here, both the human operator H and the physical process P lie in the closed loop (where H and P are Laplace transforms of the component
Fig. 1. Human-machine system.
Fig. 2. Direct manual control.
transfer functions), and the HP combination determines whether the closed loop is inherently stable (i.e., whether all roots of the closed-loop characteristic equation 1 + HP = 0 have negative real parts). H must be chosen carefully by the human for any given P not only to ensure stability but also to provide good transient and steady-state response. Efforts to characterize the pilot in these terms resulted in the discovery that the human adapts to a wide variety of physical processes so as to make HP = K(1/s)e^(−sT). In other words, the human adjusts H so that the combined open-loop transfer function HP takes this same form regardless of P. The term K is an overall amplitude or gain, 1/s is the Laplace transform of an integrator, and e^(−sT) is a pure time delay of duration T (the latter being an unavoidable property of the nervous system). Parameters K and T vary modestly in a predictable way as a function of the physical process and the input to the control system. This so-called “simple crossover” model (1) is now widely accepted and used, not only in engineering aircraft control systems but also in designing automobiles, ships, nuclear and chemical plants, and a host of other dynamic systems (a brief numerical illustration of this model appears at the end of this section).

The New Form: Supervisory Control. Supervisory control may be understood in terms of the analogy between a supervisor of subordinate staff in an organization of people and the human overseer of a modern, computer-mediated, semi-automatic control system. The human supervisor gives human subordinates general instructions which they, in turn, may translate into action. The human supervisor of a computer-controlled system does the same. Strictly, supervisory control means that one or more human operators are setting initial conditions for, intermittently adjusting, and receiving high-level information from a computer that itself closes a control loop in a well-defined process through artificial sensors and effectors. For some time period, the computer controls the process automatically. More generally, supervisory control is used when a computer transforms human operator commands to generate detailed control actions or makes significant transformations of measured data to produce integrated summary displays. In this latter case, the computer need not have the capability to commit actions based
Fig. 3. Supervisory control.
upon new information from the environment, whereas in the first, it necessarily must. The two situations may appear similar to the human supervisor since the computer mediates both human outputs and human inputs, and the supervisor is thus removed from detailed events at the low level. Figure 3 shows a supervisory control system. Here, the human operator issues commands to a humaninteractive computer which can understand high level language and provide integrated summary displays of process state information back to the operator. This computer, typically located in a control room or cockpit or office near to the supervisor, in turn communicates with at least one, and probably many (hence the dotted lines), task-interactive computers, located with the equipment they are controlling. The task-interactive computers thus receive subgoal and conditional branching information from the human-interactive computer. Such information serves as reference inputs, and then, the task-interactive computers close low-level control loops between artificial sensors and mechanical actuators, thereby accomplishing the low level automatic control. The process being controlled is usually at some physical distance from the human operator and his human-friendly, display-control computer. Therefore, the communication channels between computers may be constrained by multiplexing, time delay, or limited bandwidth. The task-interactive computer, in this case, sends analog control signals to and receives analog feedback signals from the controlled process. The controlled process does the same with the environment as it operates (vehicles moving relative to air, sea, or earth; robots manipulating objects; process plants modifying products; etc.). Command and feedback channels for process state information are shown in Fig. 3 to pass through the left side of the human-interactive computer. Represented on the right side are decision-aiding functions with requests of the computer for advice and displayed output of advice (from a data base, expert system, or simulation) to the operator. Many new developments in computer-based decision aids for planning, editing, monitoring, and failure detection are forming an auxiliary part of operating dynamic systems. The nervous system of higher animals reveals a similar kind of supervisory control wherein commands are sent from the brain to local ganglia, and peripheral motor control loops are then closed locally through receptors in the muscles, tendons, or skin. The brain, presumably, does higher level planning based on its own
stored data and “mental models,” an internalized expert system available to provide advice and permit trial responses before commitment to actual response. Supervisory control ideas grew from the realization that many systems were becoming automated, and that the human was being replaced by the computer for direct control responsibility and was moving to a new role of monitor and goal-constraint setter. The space program posed another form of supervisory control problem: how a human operator on earth could control a manipulator arm or vehicle on the moon through a three-second communication round-trip time delay. The solution which most easily avoided instability was to make the operator a supervisor communicating intermittently with a computer on the moon which, in turn, closed the control loop there. The rapid development of microcomputers has forced a transition from manual control to supervisory control in a variety of applications (2,3). The next section provides some examples of human-machine interaction which constitute supervisory control. First, we consider three forms of vehicle control, namely, control of modern aircraft, intelligent highway vehicles, and high speed trains. All three have both human operators in the vehicles as well as humans in centralized traffic control centers. Then, we consider remotely operated manipulators and vehicles for distant and/or hazardous environments.
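Before turning to those examples, here is the numerical illustration of the crossover model promised above. It is a minimal Python sketch: the gain K and effective delay T are assumed, illustrative values rather than measured pilot parameters, and the only point is that for HP = K(1/s)e^(−sT) the gain-crossover frequency equals K while the phase margin shrinks as the delay grows.

import numpy as np

# Crossover-model open-loop response HP(jw) = K * exp(-jwT) / (jw).
# K and T below are illustrative assumptions, not measured pilot values.
K = 4.0    # open-loop gain, rad/s
T = 0.2    # effective human time delay, s

w = np.logspace(-1, 2, 2000)                 # frequency grid, rad/s
HP = K * np.exp(-1j * w * T) / (1j * w)

i_c = np.argmin(np.abs(np.abs(HP) - 1.0))    # index of gain crossover, |HP| = 1
w_c = w[i_c]
phase_margin = 180.0 + np.degrees(np.angle(HP[i_c]))

print(f"gain-crossover frequency ~ {w_c:.2f} rad/s (theory: K = {K})")
print(f"phase margin ~ {phase_margin:.1f} deg "
      f"(theory: 90 - K*T in degrees = {90 - np.degrees(K * T):.1f})")

At this gain, a delay much beyond about 0.4 s would drive the phase margin toward zero, which is one way of seeing why added transmission delays push system designers toward the supervisory control structure just described.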
Examples of Salient Human Machine Systems That Are Undergoing Active Change

Advanced Control of Commercial Aircraft: Flight Management Systems. The aircraft industry has understood the importance of human-machine interaction from its beginning and today exemplifies the most sophisticated forms of human-machine interaction. The flight management system is an excellent example of supervisory control, where the pilot flies the aircraft by communicating in high-level language through a computer intermediary. The flight management system is a centralized computer which interacts with a great variety of sensors and communication from the ground, as well as many displays and controls within the aircraft. It includes many functions and mediates most of the pilot information requirements. In former days, each sensor had its own display, operating independently of all other sensor-display circuits. The flight management system brings together all of the various autopilot modes, from long-standing, low-level control modes, wherein the aircraft is commanded to go to and hold a commanded altitude, heading, and speed, to more sophisticated modes, where the aircraft is instructed to fly a given course consisting of a sequence of waypoints (latitudes and longitudes) at various altitudes, and even to land automatically at a given airport on a given runway (a toy illustration of such a waypoint course appears later in this section).

One type of display mediated by the flight management system, shown in Fig. 4, integrates many formerly separate components of information. It is a multicolor, plan-view map showing position and orientation of important objects relative to one’s own aircraft (the triangle at the bottom). It shows heading (compass arc at top, present heading 175°), groundspeed plus windspeed and wind direction (upper left), actual altitude relative to desired altitude (vertical scale on right side), programmed course connecting various waypoints (OPH and FLT), salient radio navigation beacons to the right and left of present position/direction with their codes and frequencies (lower left and right corners), the location of key navigation beacons along the course (three-cornered symbols), the location of weather to be avoided (two gray blobs), and a predicted trajectory based on present turn rate, showing that the right turn is appropriately getting back on course.

The flight management system is programmed through a specialized keyboard and text display unit (Fig. 5) including all the alphanumeric keys plus a number of special function keys. The displays in this case are specialized to the different phases of a flight (taxi, takeoff, departure, en route, approach, land, etc.), each phase having up to three levels of pages.

Designing displays and controls is no longer a matter of what can be built; the computer and the flight management system allow essentially any conceivable display/control to be realized. The computer can also provide a great deal of real-time advice, especially in emergencies, based on its many sensors and stored knowledge about how the aircraft operates. But pilots are not sure they need all the information which aircraft
Fig. 4. Integrated aircraft map display [from Billings (4)].
designers would like to give them, and they have an expression, “killing us with kindness,” to refer to this plethora of available information. The question is what should be designed based on the needs and capabilities of the pilot.

The aircraft companies Boeing, McDonnell Douglas, and Airbus have different philosophies for designing the flight management system. Airbus has been the most aggressive in automating, intending to make piloting easier and safer for pilots from countries with less well-established pilot training. Unfortunately, it is these most automated aircraft which have had the most accidents among the modern commercial jets—a fact which has precipitated vigorous debate about how far to automate.

The Evolution of Air Traffic Control. Demands for air travel continue to increase, and so do demands on air traffic control. Based on what is currently regarded as safe separation criteria, air space over major urban areas is already saturated. Simply adding more airports is not acceptable (in addition, residents do not want more airports, with their noise and surface traffic). The hope is to reduce separations in the air without compromising safety and to land aircraft closer together or on parallel runways simultaneously. The result is greater demands on air traffic controllers, particularly at the terminal area radar control centers (TRACON), where trained operators stare at blips on radar screens and verbally guide pilots entering the terminal airspace from various directions and altitudes into orderly descent and landing patterns with proper separation between aircraft.

Many changes are now being introduced into air traffic control which have profound implications for human-machine interaction (5). Previously, communication between pilots and air traffic controllers was entirely by voice, but now, digital communication between aircraft and ground (a system called datalink) allows both more frequent and more reliable two-way communication. Through datalink, weather, runway, and wind information, clearances, etc. can be displayed to pilots visually. However, pilots are not sure they want this additional technology. The demise of the party line of voice communications with which they are so familiar, and which permits all pilots in an area to listen in on each other’s conversations, is a threatening prospect.
Fig. 5. Flight management system control and display unit [from Billings (4)].
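To make the notion of a programmed course concrete, the Python fragment below represents a flight plan as an ordered list of waypoints and sums the great-circle leg lengths. The waypoint names echo those visible in Fig. 4, but the coordinates and altitudes are invented for illustration; a real flight management system stores far richer leg data (speeds, altitude constraints, procedures).

import math

# Hypothetical waypoint list: (name, latitude deg, longitude deg, altitude ft).
# Names echo Fig. 4; the numbers are illustrative only.
flight_plan = [
    ("DEP", 42.36, -71.01, 0),
    ("OPH", 41.50, -74.10, 18000),
    ("FLT", 40.20, -76.80, 33000),
    ("DST", 38.85, -77.04, 0),
]

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    r_nm = 3440.1  # mean Earth radius, nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

total = 0.0
for (n1, la1, lo1, _), (n2, la2, lo2, _) in zip(flight_plan, flight_plan[1:]):
    leg = great_circle_nm(la1, lo1, la2, lo2)
    total += leg
    print(f"{n1} -> {n2}: {leg:6.1f} nm")
print(f"programmed course length: {total:.1f} nm")

The same great-circle computation underlies the route-flexibility debate discussed next, since straight great-circle routing is what airlines would like pilots to be free to fly.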
Now, there are aircraft-borne radars which allow pilots to detect air traffic in their own vicinity. There are improved ground-based radars which detect microbursts or windshear, which can easily put an aircraft out of control. But with both types of radar, there are questions as to how best to warn the pilot and provide guidance as to how to respond. These new systems also pose a cultural change in air traffic control, since until now, pilots have been dependent upon air traffic controllers to advise them of weather conditions and other air traffic. In addition, because of the new weather and collision-avoidance technology, there are current plans for radically altering the rules whereby high-altitude commercial aircraft must stick to well-defined traffic lanes. Airlines want pilots to have more flexibility as to altitude (to find the most favorable winds and therefore save fuel) and to be able to take great-circle routes straight to their destinations (also saving fuel). However, air traffic controllers are not sure they want to give up the power they have had, becoming passive observers and monitors and functioning only in emergencies.

Guidance and Navigation Systems for Highway Vehicles. GPS (global positioning system) satellites, high-density computer storage of map data, electronic compasses, synthetic speech synthesis, and
computer-graphic displays permit cars and trucks to know where they are located on the earth to within 100 meters or less. New navigation systems can guide a driver to a programmed destination by a combination of a map display and speech. There are human factors challenges in deciding how to configure the maps (how much detail to present, whether to make the map north-up with a moving dot representing one’s own vehicle position, or current-heading-up and rapidly changing with every turn). Computer graphics can also be used to show what turns to anticipate and when to get in which lane. Computer-generated speech can reinforce these turn anticipations, can caution the driver if he is perceived to be headed in the wrong direction or off-course, and can even guide him or her in how to get back on course. One experimental question is what the computer should say in each situation to get the driver’s attention, to be understood quickly and unambiguously, but without being an annoyance. Another question is whether such systems will distract the driver’s attention from the primary tasks, thereby reducing safety. The various vehicle manufacturers have developed and evaluated such systems for reliability and human use, and they are beginning to market them in the United States, Europe, and Japan.

Intelligent Cruise Control. Conventional cruise control has a major shortcoming: it knows nothing about vehicles ahead, and one car can easily collide with the rear end of another car if the driver is not careful. Intelligent cruise control means that a microwave or optical radar detects the presence of a vehicle ahead and measures that distance. But then there is the question of what to do with this information: just warn the driver with some visual or auditory alarm, or automatically brake? A warning sound is better than a visual display because the driver does not have to be looking in the right place. But can a warning come too late to elicit braking, or surprise the driver so that he brakes too suddenly and causes a rear-end accident to his own vehicle? Should the computer automatically apply the brakes by some function of distance to the obstacle ahead, speed, and closing deceleration? If the computer did all the braking, would the driver become complacent and not pay attention, to the point where a serious accident would occur if the radar failed to detect an obstacle, say a pedestrian or bicycle, or the computer failed to brake? Should braking be some combination of human and computer braking, and if so, by what algorithm? These are human factors questions which are currently being researched. It is worth noting that current developmental systems only decelerate and downshift, mostly because if the vehicle manufacturers sold vehicles that claimed to perform braking, they would be open to a new and worrisome area of tort litigation. It should also be mentioned that the same radar technology that can warn the driver or help control the vehicle can also be applied to cars overtaking from one side or the other. Another set of questions then arises as to how and what to communicate to the driver and whether to trigger some automatic control maneuver in certain cases.

Control of High Speed Passenger Trains. Railroad technology has lagged behind that of aircraft and highway vehicles with respect to new electronic technology for information sensing, storage, and processing, but currently it is catching up.
The proper role of the human operator in future rail systems is being debated since for some limited right-of-way trains (e.g., in airports), one can argue that fully automatic control systems now perform safely and efficiently. The principal job of the train driver is speed control (although there are many other monitoring duties he must perform). In a train, this task is much more difficult than in an automobile because of the huge inertia of the train, since it takes 2 to 3 km to stop a high-speed train. Speed limits are fixed at reduced levels for curves, bridges, grade-crossings, and densely populated areas, while wayside signals temporarily command lower speeds if there is maintenance being performed on the track, there are poor environmental conditions such as rock slides or deep snow, or especially if there is another train ahead. It is mandatory that the driver obey all speed limits and get to the next station on time. It can take months to learn to maneuver the train with its long time constants, given that for the speed control task, the driver’s only input currently is an indication of current speed.
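The difficulty of the speed-control task can be made concrete with elementary kinematics. The Python sketch below is not the MIT display described next (which is based on a full dynamic model of the train); it simply predicts speed and position ahead under a constant assumed braking deceleration, with illustrative numbers chosen so that the stopping distance falls in the 2 to 3 km range quoted above.

# Illustrative constant-deceleration prediction for a high-speed train.
v0 = 270 / 3.6     # current speed: 270 km/h expressed in m/s
a = -1.1           # assumed full-braking deceleration, m/s^2 (illustrative value)

def predict(v0, a, t):
    """Speed (m/s) and distance travelled (m) t seconds ahead, holding deceleration a."""
    t_stop = -v0 / a if a < 0 else float("inf")
    t = min(t, t_stop)               # once stopped, the train stays stopped
    return v0 + a * t, v0 * t + 0.5 * a * t * t

stop_distance = v0 ** 2 / (2 * -a)   # v^2 / (2|a|)
print(f"stopping distance from 270 km/h: {stop_distance / 1000:.2f} km")
for t in (30, 60, 120):
    v, x = predict(v0, a, t)
    print(f"{t:3d} s ahead: {v * 3.6:5.1f} km/h, {x / 1000:.2f} km travelled")

Even under hard braking the train covers well over 2 km before stopping, which is why a predictive display of future position and speed, rather than an indication of current speed alone, is so valuable to the driver.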
A new computer-based display which helps the driver anticipate the future effects of current throttle and brake actions has been developed in the Human-Machine Systems Laboratory at MIT. It is based on a dynamic model of the train and gives an instantaneous prediction of future train position and speed based on current acceleration. It allows speed to be plotted on the display assuming the operator holds to the current brake-throttle settings, and it also plots trajectories for maximum emergency braking and maximum service braking. The computer generates an optimal speed trajectory which adheres to all (known) future speed limits, gets to the next station on time, and minimizes fuel/energy.

Remote Manipulators for Hazardous Radiation, Space, and Other Environments. The development of master-slave remote manipulators started when nuclear power was first adopted in the late 1940s. Using such devices, a human operator at one location could position and orient a device attached to his hand, and a servomechanism-controlled gripper would move in correspondence and handle objects at another location which was too hazardous for a human. Such manipulators remotely controlled by humans are called teleoperators. Teleoperator technology got a big boost from industrial robot technology, which came along a decade or so later and provided improved vision, force and touch sensors, actuators, and control software. Large teleoperators were developed for rugged mining and undersea tasks, and small teleoperators were developed for delicate tasks such as eye surgery. Eventually, teleoperators came to be equipped with sensitive force feedback, so that the human operator can not only see the objects in the remote environment but also feel them in his grasp.

The space program naturally stimulated the desire to control lunar manipulators from earth. However, the unavoidable round-trip time delay of three seconds (speed of light from earth to moon and back) would not permit simple closed-loop control. Supervisory control provided an answer. The human on earth could communicate to the moon a subgoal to be reached and a procedure for getting there, and the teleoperator would be turned loose for some short period to perform automatically. Such a teleoperator is called a telerobot. The Flight Telerobotic Servicer (FTS) developed by Martin Marietta for the US Space Station Freedom is pictured in Fig. 6. It has two seven-degree-of-freedom (DOF) arms (including gripper) and one five-DOF leg for stabilizing itself while the arms work. It has two video eyes to present a stereo image to its human operator. It can be configured either as a master-slave teleoperator (under direct human control) or as a telerobot (able to execute small programmed tasks using its own eyes and force sensors).

Remotely Operated Submarines and Planetary Rovers. The same principles of supervisory control that apply to telemanipulators can also be applied to submarines and planetary roving vehicles. Shown in Fig. 7 is the remotely operated submersible Jason, developed by the Woods Hole Oceanographic Institution. It is the big brother of Jason Junior, which swam into the interior of the ship Titanic and made a widely viewed video record when the latter was first discovered. Jason has a single manipulator arm, sonar and photo sensors, and four thrusters which can be oriented within a limited range and which enable it to move in any direction. It was designed for the severe pressures at depths up to 6000 m.
It can be operated either in direct teleoperator mode or as a telerobot. The Mars Pathfinder roving vehicle is yet another example of supervisory control, in this case operating over one-way time delays of more than 10 min.
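The supervisory pattern common to these telerobots can be sketched in a few lines of Python. In this toy illustration the class and variable names are invented and the dynamics are reduced to a single first-order position loop; the operator sends a subgoal across a delayed link, the remote task-interactive computer closes the low-level loop locally, and only a summary result travels back. The same structure applies whether the round-trip delay is a few seconds (the Moon) or tens of minutes (Mars).

# Toy sketch of supervisory control over a delayed link (all names invented).
ROUND_TRIP_DELAY_S = 3.0    # Earth-Moon round trip quoted in the text; Mars would be far longer

class TaskInteractiveComputer:
    """Remote computer that closes a simple position loop without operator help."""
    def __init__(self, position=0.0, gain=0.5, dt=0.1):
        self.position, self.gain, self.dt = position, gain, dt

    def execute_subgoal(self, target, tolerance=0.01, max_steps=2000):
        for step in range(max_steps):
            error = target - self.position
            if abs(error) < tolerance:
                return self.position, step * self.dt      # local loop finished
            self.position += self.gain * error * self.dt  # proportional correction
        return self.position, max_steps * self.dt

robot = TaskInteractiveComputer()
subgoal = 1.0   # e.g., "move the gripper to x = 1.0 m"
final_pos, local_time = robot.execute_subgoal(subgoal)   # runs autonomously at the remote site
elapsed = ROUND_TRIP_DELAY_S + local_time                # command out, local work, telemetry back
print(f"reached x = {final_pos:.3f} m; operator sees the result after about {elapsed:.1f} s")

Direct closed-loop manual control over the same link would have to tolerate the full communication delay inside the loop on every correction, which is precisely what the supervisory structure avoids.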
Methods of Human-Machine Systems Engineering

The following four steps are typically carried out in designing new human-machine systems or making improvements to existing ones.

(1) Function and Task Analysis. This is a detailed paper-and-pencil exercise wherein system operation is characterized as a sequence of steps, and for each step, requirements are specified for:
Fig. 6. Flight telerobotic servicer prototype design (courtesy NASA).
(a) what variables must be controlled and to what criterion (what constitutes satisfactory completion of that step). Each step can be accomplished by a machine, a human, or a combination of these. A notation is made of what decisions must be made at that step and what the tradeoffs are among resources (such as time, energy, and raw materials used) for that decision/control. It is also noted what control devices must or could be actuated. (b) what information is needed by the human or machine entity performing that step. The current source of that information, whether direct observation of the process, viewing of a display, reading of text or printed illustration, or verbal communication, is noted. The probabilistic aspects of inputs and disturbances are also noted, as well as how the operator or machine can determine whether the system is failing in some mode. (c) Tentative allocations are made at this point as to whether a human or a machine ought to perform each task or function in that step, and what improvements are suggested in displays, controls, or procedures.

(2) Design Based on Common Criteria for Human-Machine Interfacing. Design of human-system interactions poses the same types of problems no matter what the context. The displays must show the important variables unambiguously to whatever accuracy is required but, more than that, must show the variables in relation to one another so as to clearly portray the current situation (situation awareness is currently a popular test of the human operator in complex systems). Alarms must get the operator’s attention; indicate by text, symbol, or location on a graphic display what is abnormal, where in the system the failure occurred, and what the urgency is; and, if response is urgent, even suggest what action to take. (For example, the ground proximity warning in an aircraft gives a loud “Whoop, whoop!” followed by a distinct spoken command, “Pull up, pull up!”)
Fig. 7. Deep ocean submersible Jason (courtesy Woods Hole Oceanographic Institution).
Controls—whether analogic joysticks, master-arms, or knobs, or symbolic special-purpose buttons or general-purpose keyboards—must be natural and easy to use and require little memory of special procedures (computer icons and windows do well here). The placement of controls and instruments and their mode and direction of operation must correspond to the desired direction and magnitude of system response. Human engineering design handbooks are available but require experience to interpret, for the same reason that medicine cannot be practiced from a handbook: humans are not as simple as machines, and we do not have a tool kit of well-codified engineering principles.

(3) Mathematical Modeling. With the task analysis in hand to rough out the topology of the process and a preliminary design, appropriate mathematical tools are often brought to bear. The most useful ones for human-automation interactions are listed below.

Control models are normally essential to characterize the automation itself (robot or vehicle movement, etc.); however, we are most concerned with characterizing control at a higher level, where computer or human decision-making adjusts parameters or makes changes. Fuzzy control techniques are particularly useful to characterize those aspects of control which are not amenable to linearization and precise quantification, particularly where adaptation to environmental variables or variations in raw material requires adjustment. Control decisions by the human operator may also be characterized as fuzzy control in terms of several key variables (which need not be continuous or even numerical).

Information models are useful to characterize the complexity (entropy) of a process and the effort required to narrow down to a decision and action. Any input-output array can be characterized by information
transmission measures to show how consistent the process is. Information measures are always averages over a set of events.

Bayesian decision-making models are appropriate to provide a normative model of inferring truth about the state of the system from various forms of evidence. Conventional decision models specify what action should be taken to maximize the subjectively expected utility (value or relative worth) of consequences, given the prior probabilities of events which couple to those consequences. Signal detection models are appropriate to human/machine detection of signals in noise, determination of failures, and so on, where there is some continuous variable which correlates with the degree of decision confidence, and probability density functions on that variable can be derived conditional on true positive (actual failure) versus false positive (false alarm). (A small numerical illustration of Bayesian failure inference appears at the end of this section.)

(4) Human-in-the-Loop Simulation. In parallel with mathematical analysis, it is generally appropriate to develop a real-time simulation of the process, or of some aspect of the process that needs to be changed. Simulating the physics of the process is not necessary; instead, only the dynamics of the interactions of key variables need be captured, to the extent that the displays and controls can be simulated and appear to allow for realistic human interaction. The simulation can perhaps be implemented on a PC, but a graphics workstation is often more appropriate. Experiments are then carried out on the simulation with trained human subjects, while parametric changes are made which simulate design changes in the process, the automation, the displays and controls, or the procedures. Sufficient trial runs are made to allow simple statistical comparisons of system performance under the various parameter treatments. System performance includes objective measures of product quality, time, cost, human error, and mental workload, as well as subjective appraisals. This gives evidence of which design alternatives are best and by how much.

Refinement, Optimization, and Pilot Testing. All the evidence gleaned from the preceding steps is codified in the form of specific design-improvement recommendations. Insofar as economically feasible, these are implemented on a pilot automation system and evaluated in conventional ways.
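As the small numerical illustration of the Bayesian models promised above, the Python fragment below infers the probability that the system has actually failed, given that an alarm has sounded. All three probabilities are assumed values chosen only to make the arithmetic concrete.

# Assumed values for illustration only.
p_fail = 0.001               # prior probability of failure
p_alarm_given_fail = 0.99    # alarm hit rate (true-positive rate)
p_alarm_given_ok = 0.02      # alarm false-positive rate

p_alarm = p_alarm_given_fail * p_fail + p_alarm_given_ok * (1 - p_fail)
p_fail_given_alarm = p_alarm_given_fail * p_fail / p_alarm   # Bayes' rule
print(f"P(failure | alarm) = {p_fail_given_alarm:.3f}")      # about 0.047

Even with a sensitive detector, rare failures mean that most alarms are false, a result that bears directly on the alarm-design and operator-trust issues raised in the next section.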
Workload and Error

New technology allows combination, integration, and simplification of displays compared to the intolerable plethora of separate instruments in older aircraft cockpits, plant control rooms, and instrument display/control panels. The computer has taken over more and more functions from the human operator. Potentially, these changes make the operator's task easier. However, they also allow much more information to be presented, more extensive advice to be given, and more complexity in operation, particularly when the automation fails. These changes add many cognitive functions that were not present at an earlier time. They make the operator into a monitor of the automation who is supposed to step in when required to set things straight. Unfortunately, people are not always reliable monitors.

Mental Workload. It is imperative to know whether the mental workload of the operator is too great for safety (physical workload is of less and less concern). Human–machine systems engineers have sought to develop measures of mental workload, the idea being that as mental load increases, the risk of error in performance increases, but presumably, measurable mental load comes before actual lapse into error. Thus mental workload is a more sensitive predictor of performance deterioration than performance itself. Mental workload may be measured by three techniques:

(1) The subjective rating scale, typically a ten-level category scale with descriptors for each category from no load to unbearable load. This is the most reliable measure, though it is necessarily subjective.

(2) Physiological indices which correlate with subjective scales, including heart rate and the variability of heart rate, certain changes in the frequency spectrum of the voice, electrical resistance of the skin, diameter of the pupil of the eye, and certain changes in the evoked brain wave response to sudden sound or light stimuli.
These techniques would be attractive were it not for the wide variability between human subjects and the noise in measurements.

(3) The so-called secondary task, an easily measurable additional task which consumes all of the operator's attention remaining after the requirements of the primary task are satisfied. This technique has been used successfully in the laboratory but has shortcomings in practice in that operators may refuse to cooperate.

These techniques are now routinely applied to critical tasks such as aircraft landing, air traffic control, certain planned tasks for astronauts, and emergency procedures in nuclear power plants. The evidence suggests that supervisory control relieves mental load when things are going normally, but when automation fails, the human operator is subjected to rapidly increased mental load.

Human Error. While human error has long been of interest to psychologists and industrial engineers, only in recent decades has there been serious effort to understand human error in terms of categories, causation, and remedy (7). Human error may be classified in several ways. One is according to whether an error is one of omission (something not done which was supposed to have been done) or commission (something done which was not supposed to have been done). Another is slip (a correct intention for some reason not fulfilled) versus mistake (an incorrect intention which was fulfilled). Errors may also be classified according to whether they occur in sensing, perceiving, remembering, deciding, or acting.

There are some special categories of error worth noting which are associated with following procedures in the operation of systems. One, for example, is called a capture error, wherein the operator, being very accustomed to a series of steps, say A, B, C, and D, intends at another time to perform E, B, C, F. But he is "captured" by the familiar sequence B, C and does E, B, C, D.

With regard to effective therapies for human error, proper design to make operation easy, natural, and unambiguous is surely the most important. It is always best if system design allows for error correction before the consequences become serious. Active warnings and alarms are necessary when the system can detect incipient failures in time to take such corrective action. Training is probably next most important after design, but no amount of training can compensate for an error-prone design. Preventing exposure to error by guards, locks, or an additional "execute" step can help make sure that the most critical actions are not taken without sufficient forethought. Least effective are written warnings such as posted decals or warning statements in instruction manuals, although many tort lawyers would like us to believe the opposite.
Social Implications of Modern Human-Machine Interaction

Trust. Trust is a term not often treated as an engineering variable, but it is rapidly taking on such a connotation. When an operator does not trust his sensors and displays, expert advisory system, or automatic control system, he will not use it or will avoid using it if possible. On the other hand, if an operator comes to place too much trust in such systems, he will let down his guard, become complacent, and, when the system fails, not be prepared. The question of operator trust in the automation is an important current issue in human-machine interface design. It is desirable that operators trust their systems, but it is also desirable that they maintain alertness, situation awareness, and readiness to take over, so there can be too much trust.

Alienation. There is a set of broader social concerns raised by the new human-machine interaction, which can be discussed under the rubric of alienation.

(1) People worry that computers can do some tasks much better than they themselves can, such as memory and calculation. Surely, people should not try to compete in this arena.

(2) Supervisory control tends to make people remote from the ultimate operations they are supposed to be overseeing—remote in space, desynchronized in time, and interacting with a computer instead of the end product or service itself.
(3) People lose the perceptual-motor skills which in many cases gave them their identity. They become deskilled, and if ever called upon to use their previous well-honed skills, they could not.

(4) Increasingly, people who use computers in supervisory control or in other ways, whether intentionally or not, are denied access to the knowledge to understand what is going on inside the computer.

(5) Partly as a result of factor 4, the computer becomes mysterious, and the untutored user comes to attribute to the computer more capability, wisdom, or blame than is appropriate.

(6) Because computer-based systems are growing more complex, and people are being elevated to roles of supervising larger and larger aggregates of hardware and software, the stakes naturally become higher. Where a human error before might have gone unnoticed and been easily corrected, now such an error could precipitate a disaster.

(7) The last factor in alienation is similar to the first, but more all-encompassing: namely, the fear that a "race" of machines is becoming more powerful than the human race.

These seven factors, and the fears they engender whether justified or not, must be managed. Computers must be made to be not only human friendly but also not alienating with respect to these broader factors. Operators and users must become computer-literate at whatever level of sophistication they can.
How Far to Go With Automation. The trend toward supervisory control is surely changing the role of the human operator, posing fewer requirements on continuous sensory-motor skill and more on planning, monitoring, and supervising the computer. As computers take over more and more of the sensory-motor skill functions, new questions are being raised regarding how the interface should be designed to provide the best cooperation between human and machine. Among these questions are: To what degree should the system be automated? How much help from the computer is desirable? Are there points of diminishing returns? Table 1 lists ten levels of automation, from 0 to 100% computer control. Obviously, there are few tasks which have achieved 100% computer control, but new technology pushes relentlessly in that direction. It is instructive to consider the various intermediate levels of Table 1 in terms not only of how capable and reliable the technology is but also of what is desirable in terms of safety and the satisfaction of the human operators and the general public.
BIBLIOGRAPHY

1. D. T. McRuer and H. R. Jex, A review of quasi-linear pilot models, IEEE Trans. Hum. Factors Electron., HFE-4: 231–249, 1967.
2. T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, Cambridge, MA: MIT Press, 1992.
3. T. B. Sheridan, Supervisory control, in G. Salvendy (ed.), Handbook of Human Factors/Ergonomics, New York: Wiley, 1987.
4. C. E. Billings, Human-Centered Aircraft Automation: A Concept and Guidelines, Moffett Field, CA: NASA Ames Research Center, 1991.
5. C. Wickens et al. (eds.), The Future of Air Traffic Control, Washington, D.C.: National Academy Press, 1998.
6. S. Y. Askey, Design and Evaluation of Decision Aids for Control of High Speed Trains: Experiments and a Model, PhD Thesis, Cambridge, MA: Massachusetts Institute of Technology, June 1995.
7. J. Reason, Human Error, Cambridge: Cambridge University Press, 1990.
THOMAS B. SHERIDAN Massachusetts Institute of Technology
Information Technology
Standard Article
Wiley Encyclopedia of Electrical and Electronics Engineering
Andrew P. Sage1 and William B. Rouse2 1George Mason University, Fairfax, VA 2Georgia Institute of Technology, Atlanta, GA Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7106 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (82K)
Abstract The sections in this article are Historical Evolution of Information Technology Information Technology and Information Technology Trends Information Technology Challenges Conclusion
Wiley Encyclopedia of Electrical and Electronics Engineering Intelligent Transportation Systems Standard Article Kan Chen1 1Kan Chen Incorporated, Hillsborough, CA Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7712 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (373K)
Abstract The sections in this article are Functions Technical Concepts User Services and Market Packages History Programs Around the World Selected Technologies Traffic and Road Surveillance Distribution of Traffic and Related Information Vehicle Location Related Functions Linkage Between Vehicle and Road Through Beacons Vehicle Control Related Functions Human Side of ITS: Travelers and Operators Evolutionary Deployment System Architecture Standards Market and Evaluation
INTELLIGENT TRANSPORTATION SYSTEMS

Intelligent transportation systems (ITS) are transportation systems that apply information and control technologies to help their operations. Given this broad definition, ITS is an umbrella that covers a wide range of transportation systems,
some of which have been implemented for many years while others are just getting deployed or are still under research and development for future applications (1). For example, time-proven adaptive traffic signal controls, using information about traffic flows at or near the traffic signal lights to provide coordinated signal controls, are an early form of ITS. Intelligent cruise control, which automatically adjusts vehicle speed to maintain safe headway from the car in front, is another example of ITS that is on the verge of deployment. Dynamic route guidance, which advises drivers of the optimum route to take for a given destination, taking into account current and predicted road and traffic conditions, is yet another example of ITS that has been tested but may take more years to develop for wide deployment due to its complexity in information collection and communications. Thus, a popular definition of ITS is the application of computer, control, and communication technologies to help drivers and operators make smart decisions while driving smart vehicles or controlling traffic on smart road networks. Although not very rigorous, this definition brings forth the notion that ITS keeps the human driver and operator at the center. Therefore, ITS is by no means synonymous with automation, even though automated driving is an option for the distant future under the ITS umbrella. The technical and institutional descriptions of ITS will appear after the overview.

FUNCTIONS

The overarching ITS function is to improve transportation system operations, which in turn support the transportation objectives of increasing efficiency, safety, productivity, energy savings, environmental quality, and trip quality. These objectives are common to all regions around the world even though their relative priority may vary from one region to another. Relatively high on the priority lists of practically all countries is the increase of efficiency through ITS-assisted operations, to the extent that the capacity of existing road systems can be increased substantially. In many countries, the current traffic congestion would only get worse since construction of new roads has little hope of catching up with increasing traffic demand due to financial and environmental constraints. In other words, they can no longer build their way out of congestion, and ITS offers a new approach to help reduce or postpone construction needs. This is particularly true in industrially mature countries such as the United States, Japan, and most of Western Europe. Even in countries where major road construction programs are still ahead of them, such as in many developing countries and economies in transition, ITS offers the possibility of increasing capacity per lane, thus reducing the need for scarce capital. The same can be said on the vehicle side. For example, ITS-assisted transit or commercial vehicle operations would reduce the number of vehicles needed to handle the same passenger or cargo load, thereby saving capital and operational costs of these vehicles.

TECHNICAL CONCEPTS

The technical core of ITS is the application of information and control technologies to transportation system operations. These technologies include surveillance, communications,
automatic control, and computer hardware and software. The adaptation of these technologies to transportation requires knowledge from many engineering fields—civil, electrical, mechanical, industrial—and their related disciplines: for example, traffic engineering, vehicle dynamics, computer science, operations research, and human factors. The amalgamation of these technologies to perform ITS functions is based on the principles of systems engineering. From a system perspective, the major components of transportation systems are the transportation infrastructure, the vehicle, and the people in the system, including the system operator (for example, in the traffic or transportation management center) as well as the traveler who may ride, drive, or just walk. All of these people make decisions based on available information, and their decisions often affect one another. Many transportation problems arise from the lack of timely and accurate information and from the lack of appropriate coordination among the decisions made by the people in the system. The contribution of information technology is to provide better information to assist people involved in the system to make better coordinated decisions in order to meet the ITS objectives of increasing efficiency, safety, productivity, and air quality. There is a plethora of existing technologies which can be or have been applied to ITS. As the capabilities of "high tech" continue to increase and its costs continue to decrease in the future, so will the capabilities of ITS functions increase and their costs decrease. In addition, these technologies will build on top of each other to produce synergism. For example, the same information used for electronic toll collection may also be used to provide vehicle probe data for traffic management. (Vehicle probe means the use of a vehicle to sense traffic conditions as it reports the actual traveling time experienced in real traffic.) However, it is not necessary for any traffic agency to master all the high tech electronics to begin applying ITS to solve some of their most urgent problems. As mentioned previously, traffic adaptive signal controls have been developed and applied to smooth traffic at intersections for decades.
USER SERVICES AND MARKET PACKAGES

Countries which have established ITS programs in recent years share similar views on the range of possible ITS functions, as represented by the corresponding ITS user services. User services are the functions performed by ITS technologies and organizations for the direct benefit of the transportation users, which include the traveler, the driver, the operator, the manager, and the regulator in the transportation systems. A composite taxonomy of these user services is given in Table 1, based on program information from many countries around the world. Note that the ITS programs in various countries may put emphasis on different subsets of the user services. For example, most ITS programs in the United States do not include vulnerable traveler services for pedestrians and bicyclists, which may be very important in some other countries. Moreover, the provision of some user services presumes certain policy decisions. For example, demand management and operations through road pricing and policing/enforcing traffic regulations can certainly be facilitated by ITS technologies, but their implementation would require strong policy support in some countries. (Road pricing is the charging of user fees as a means to reduce traffic congestion and/or to finance road construction and operations.)
Table 1. User Services Provided by Intelligent Transportation Systems (Source: International Standards Organization)

Traveler information (ATIS): Pretrip information; On-trip driver information; On-trip public transport information; Personal information services; Route guidance and navigation
Traffic management (ATMS): Transportation planning support; Traffic control; Incident management; Demand management; Policing/enforcing traffic regulations; Infrastructure maintenance management
Vehicle (AVCS): Vision enhancement; Automated vehicle operation; Longitudinal collision avoidance; Lateral collision avoidance; Safety readiness; Precrash restraint deployment
Commercial vehicle (CVO): Commercial vehicle preclearance; Commercial vehicle administrative processes; Automated roadside safety inspection; Commercial vehicle on-board safety monitoring; Commercial vehicle fleet management
Public transport (APTS): Public transport management; Demand responsive transport management; Shared transport management
Emergency (EM): Emergency notification and personal security; Emergency vehicle management; Hazardous materials and incident notification
Electronic payment: Electronic financial transactions
Safety: Public travel security; Safety enhancement for vulnerable road users; Intelligent junctions

Table 2. Market Packages for Intelligent Transportation Systems (for United States)

ATMS01 Network Surveillance
ATMS02 Probe Surveillance
ATMS03 Surface Street Control
ATMS04 Freeway Control
ATMS05 HOV and Reversible Lane Management
ATMS06 Traffic Information Dissemination
ATMS07 Regional Traffic Control
ATMS08 Incident Management System
ATMS09 Traffic Network Performance Evaluation
ATMS10 Dynamic Toll/Parking Fee Management
ATMS11 Emissions and Environmental Hazards Sensing
ATMS12 Virtual TMC and Smart Probe Data
ATMS13 Standard Railroad Grade Crossing
ATMS14 Advanced Railroad Grade Crossing
ATMS15 Railroad Operations Coordination
APTS1 Transit Vehicle Tracking
APTS2 Transit Fixed-Route Operations
APTS3 Demand Response Transit Operations
APTS4 Transit Passenger and Fare Management
APTS5 Transit Security
APTS6 Transit Maintenance
APTS7 Multimodal Coordination
ATIS1 Broadcast Traveler Information
ATIS2 Interactive Traveler Information
ATIS3 Autonomous Route Guidance
ATIS4 Dynamic Route Guidance
ATIS5 ISP-Based Route Guidance
ATIS6 Integrated Transportation Management/Route Guidance
ATIS7 Yellow Pages and Reservation
ATIS8 Dynamic Ridesharing
ATIS9 In-Vehicle Signing
AVSS01 Vehicle Safety Monitoring
AVSS02 Driver Safety Monitoring
AVSS03 Longitudinal Safety Warning
AVSS04 Lateral Safety Warning
AVSS05 Intersection Safety Warning
AVSS06 Precrash Restraint Deployment
AVSS07 Driver Visibility Improvement
AVSS08 Advanced Vehicle Longitudinal Control
AVSS09 Advanced Vehicle Lateral Control
AVSS10 Intersection Collision Avoidance
AVSS11 Automated Highway System
CVO01 Fleet Administration
CVO02 Freight Administration
CVO03 Electronic Clearance
CVO04 CV Administrative Processes
CVO05 International Border Electronic Clearance
CVO06 Weigh-In-Motion
CVO07 Roadside CVO Safety
CVO08 On-Board CVO Safety
CVO09 CVO Fleet Maintenance
CVO10 HAZMAT Management
EM1 Emergency Response
EM2 Emergency Routing
EM3 Mayday Support
ITS1 ITS Planning
Appropriate institutional arrangements are also prerequisites for effective ITS user services. For example, route guidance involving public agencies often requires cross-jurisdictional agreement on traffic diversion from one jurisdiction to another. The concept of user services is central in ITS deployment, so that the ITS implementor is guided by what the users ultimately want, and not by the application of ITS just for the sake of its technology. Another useful concept in ITS implementation is that of market packages. Each market package includes an assembly of equipment on the vehicle or the infrastructure that can be purchased on the market (now or in the future) to deliver a particular user service in part or in full. A list of 56 ITS market packages for the full deployment of the ITS program in the United States is given in Table 2 (2). Note that ITS market packages are technology independent; that is, each market package may consist of
equipment whose capability and cost may change over time as technology advances. For example, network surveillance (the first market package in Table 2) may employ inductive loops, microwave detectors, or closed circuit television, or some combination of them, for the function of traffic surveillance, which in turn supports a number of user services including traffic control. Note that the market packages in Table 2 are bundled under seven application areas as follows:

1. ATMS (advanced traffic management systems): Adaptive traffic signal controls, automatic incident detection, regional traffic control, emission sensing, freeway management, etc.
2. APTS (advanced public transportation systems): Automatic vehicle location, signal preemption, smart cards for fare collection, dynamic ride sharing, etc.
3. ATIS (advanced traveler information systems): Motorist information, dynamic route guidance, pretrip planning, in-vehicle signing, etc.
4. AVSS (advanced vehicle safety systems): Intelligent cruise control, collision warning and avoidance, night vision, platooning, etc.
5. CVO (commercial vehicle operations): Weigh-in-motion, electronic clearance, automatic vehicle classification, fleet management, international border crossing, etc.
6. EM (emergency management): Automatic Mayday signal, coordinated emergency response, signal preemption for emergency vehicles, etc.
7. ITS (planning for ITS): Automatic data collection for ITS planning, etc.

HISTORY

In the United States, research on automatic control of automobiles began in the private sector during the 1950s (3). In the public sector, US government research on electronic route guidance systems (ERGS) in the 1960s (4) has been cited as the first serious attempt to apply information technologies to ground travel, and it inspired similar programs elsewhere in the world, including the Autofahrer Leit- und Informationssystem (ALI) project in Europe (5) and the comprehensive automobile traffic control system (CACS) project in Japan (6). The lack of continuing US Congressional support resulted in only minimal activity in this area in the United States until the late 1980s. However, the activities in Japan and Europe continued with both public and private sector support, perhaps as a result of the more pressing needs for congestion relief there, especially in the urban areas, as well as different government policies (7). The announcements of the billion-dollar DRIVE and PROMETHEUS (8) programs in Europe and the comparable AMTICS and RACS programs in Japan (9) during the mid-1980s, along with extensive prodding by the California Department of Transportation, jolted the United States, leading to a revival of its activities in applying information technology to ground transportation. In 1986, a small group of federal and state transportation officials, academics, and private sector representatives, under the sobriquet of Mobility 2000, began to meet informally to prepare for enactment of the first major
national transportation legislation of the post-Interstate era. As the era of interstate expressway construction drew to a close, study after study showed traffic congestion worsening and traffic safety, environmental, and energy conservation problems increasing as the number of vehicles rose from about 70 million in 1960 to 188 million by 1991. National productivity and international competitiveness also were major concerns, both closely linked to transportation efficiency as manufactured goods frequently were required to move over long distances. With the passage of time, the revived activities in the United States in the late 1980s took on characteristics which differ, both technically and institutionally, from that seen through the 1960s and 1970s. For example, more emphasis is now put on the nearer term use of information systems for traveler advisory functions than on the longer term use of control technologies for automation purposes due to the rapid progress of computer technology. There is also a wider range of organizations working in concert from both the private and public sectors than in the past to link the vehicles and highways through information technology. These considerations led the researchers at the University of Michigan to give a new name to this broad area, intelligent vehicle–highway systems (IVHS), which connotes the integration of vehicles made by the private sector with the highway infrastructure operated by the public sector into a single system. In May 1990, 200 academics, business and industry leaders, federal, state, and local government officials, and transportation association executives met in Orlando, Florida at the IVHS Leadership Conference. They agreed that a formal organization was needed to advocate the use of advanced technologies in surface transportation, to coordinate and accelerate their development, and to serve as a clearinghouse of information from the many different players already active in the field. They also agreed that a totally new entity would be needed. Then, in July 1990, a House Appropriations Committee report called for ‘‘a nationwide public-private coordinating mechanism to guide the complex research and development activities anticipated in the IVHS area.’’ As a result of this mandate, in August 1990 the IVHS America was incorporated in the District of Columbia as a nonprofit educational and scientific organization. After its formation, IVHS America was designated as a utilized Federal Advisory Committee to the US Department of Transportation, which assured that its recommendations would be heard at the highest levels of government. In the fall of 1994, the organization changed its name to the Intelligent Transportation Society of America (ITS AMERICA) to reflect a broader mission, including all parts of public transportation and intermodal connections, than implied by the term vehicle– highway systems. For the 35-year period, 1956–1991, America’s surface transportation policy was dominated by construction of the National System of Interstate and Defense Highways. But with the enactment of the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991, the interstate construction era ended and a new era in surface transportation began. The law gave much more flexibility at the state and local level in deciding how federal highway and transit funds should be used. 
To ensure intermodal management of the federal funds, the US Department of Transportation established an ITS Joint Program Office in 1993 to oversee and coordinate ITS program activities in the Federal Highway Administration, the National Highway Traffic Safety Administration, the Federal Transit Administration, the Federal Railroad Administration, and other relevant agencies (7).
Table 3. Selected ITS Technologies for the Infrastructure Dimension

Information collection: Induction loop detector; Ultrasonic detector; Microwave detector; Infrared detector; Closed-circuit television (CCTV); Helicopter patrol; Car patrol; Vehicle probe
Information storage/processing: Image processing; Traffic parameter calculation; Traffic data fusion; Incident detection; Traffic optimization; Centrally determined route guidance
Information distribution: Wireline communications, including optical fiber; Wireless communications, including spread spectrum; Telephone; Fax; Radio; Television; Teletext; Desk-top computer; Internet; Kiosk; Traffic display board; Changeable message sign (variable message sign)
Information utilization: Adaptive signal control; Ramp meter; Expressway management; Incident management; Road pricing (and congestion pricing); Parking management; Pretrip planning; Electronic toll collection (ETC); Electronic toll and traffic management (ETTM); Coordinated rescue operation
PROGRAMS AROUND THE WORLD

With the enabling legislation of ISTEA and the Congressional appropriation of $200–300 million per year, augmented by comparable expenditure by state/local governments and the private sector, the ITS program in the United States moved rapidly from research to field operational tests (FOTs), and on to deployment, since the early 1990s. The purpose of FOTs is to learn from the experience of limited applications of R&D results in a real traffic environment before making major investments for deployment. With positive results from many FOTs, the US Secretary of Transportation announced the Operation Time Saver (10), declaring the intention to build intelligent transportation infrastructure (ITI) in 75 metropolitan areas within a decade. The first projects are under the name of Model Deployment Initiatives (MDI) for four metropolitan areas and for a number of commercial vehicle operations in eight states. Each of these projects has featured public–private partnerships to provide a wide array of ITS services. Continuous expansion of ITS deployment is sought through an effort to mainstream ITS by pitching ITS projects against more traditional alternatives in the regular transportation planning process. With the expiration of ISTEA, the US Congress is deliberating the National Economic Crossroads Transportation Efficiency Act (NEXTEA), which would include a reauthorization of the federal ITS program under the title of Intelligent Transportation Systems Act of 1997. In parallel with the ITS movement in North America, corresponding European and Japanese programs, using various names, have forged ahead, involving both public and private sectors. Since 1994, annual ITS World Congresses have been held to encourage and facilitate international exchange in the field, initiated by ITS America, its European counterpart ERTICO (European Road Transport Telematics Implementation Co-ordination Organization), and its Japanese counterpart VERTIS (Vehicle, Road & Traffic Intelligence Systems). The ITS World Congress has become a meeting ground for all countries around the world to exchange ideas and compare experiences. It is noteworthy that the countries active in ITS are not limited to industrially mature nations like Australia but also include many developing countries and economies in transition such as China and Brazil. In fact, the 1998 ITS World Congress held in Seoul, Korea, signifies that the driving force behind these programs is the common goal of achieving transportation efficiency, safety, productivity, energy savings, and environmental quality through ITS (11).
SELECTED TECHNOLOGIES

Any system using information and control technologies may be broken down into the sub-functions of information collection, storage/processing, dissemination, and utilization (for decision and control support). In ITS, these sub-functions would be applied to traffic, vehicles, and the people involved. Selected ITS technologies and sub-functions described below
can be grouped as shown in Tables 3–5. Note that some of the technologies may belong to more than one category. Each of the technologies in the three tables will be discussed in the subsequent sections.

TRAFFIC AND ROAD SURVEILLANCE

On the road side, a prerequisite for many ITS services is the collection of timely and accurate information about traffic and road conditions. For many years, traffic surveillance has been achieved by induction loop detectors that can sense the presence of a vehicle as the metallic mass of the vehicle changes the inductance and thus the resonant frequency of the induction loop installed under the pavement. In the simplest application, a single loop buried under the lane pavement can do vehicle counting. However, loop detectors can do a lot more than the pneumatic tubes put across the pavement surface to do vehicle counting. As various vehicles and trailers have different masses and lengths, vehicle classification can often
Table 4. Selected ITS Technologies for the Vehicle Dimension

Information collection: Automatic vehicle location (AVL); Global positioning system (GPS); Differential global positioning system (DGPS); Dead reckoning; Map matching; Signpost; Automatic vehicle classification; Automatic vehicle identification (AVI); Tag (transponder); Active tag; Backscatter tag; Reader (transceiver); Smart card; Weigh-in-motion (WIM); Electronic lock; Vehicle diagnostics; Tire slippage sensing; Radar; Lidar; Ultrasonic obstacle detector; Magnetometer (magnetic nail sensor); Video camera (lane sensor); Vehicle-initiated distress signal
Information storage/processing: Digital map; Compact disk (CD); PCMCIA card (PC card); Heuristic algorithms (heuristics for map matching, heuristics for route guidance); Dijkstra's algorithm; A* algorithm
Information distribution: Car radio (AM and FM); Highway advisory radio (HAR); Automatic highway advisory radio (AHAR); Radio data system (RDS); Traffic message channel (TMC); Radio broadcast data system (RBDS); Subcarriers (FM and AM); Cellular telephone; Cellular digital packet data (CDPD); In-vehicle fax; Pager; Personal digital assistant (PDA); Personal communications system (PCS); Special mobile radio (SMR); Mobile data terminal (MDT); Laptop computer; Palmtop computer; Voice and sound alarm and display; Head-up display (HUD); Reconfigurable dashboard; Arrow display for route guidance; Countdown visual display for route guidance; Tinged side-mirror display; Dedicated short-range communications (DSRC); Vehicle-to-vehicle communications; Satellite communications; Low-earth-orbit (LEO) satellite communications
Information utilization: Navigation; Route guidance (static route guidance, dynamic route guidance); Freight management; In-vehicle signing; In-vehicle traveler information; Electronic toll collection; State/provincial border crossings; International border crossings; En-route trip planning; Anti-theft; Adaptive cruise control (ACC); Intelligent cruise control (ICC); Lane keeping; Collision warning; Collision avoidance; Automatic highway system (AHS); Free agent; Platoon; Truck convoy
be deduced from the pattern of the electrical signals that provide inductive loop signatures of vehicles. Double loops in the same lane separated by a fixed distance can measure vehicle speed. As vehicle speed slows below a threshold, loop detectors can give an indication of traffic congestion. When used in conjunction with computer software, signals collected from multiple loop detectors placed strategically on the highway and transmitted to the traffic center can do incident detection and alert the center operator about the likelihood of an incident occurrence. The advantages of induction loop detectors include their low cost as well as usability for many applications. However, their installation under the pavement disturbs traffic, and they are difficult to maintain, especially under harsh climate conditions. In addition to their limited capability to classify vehicles, their sensitivity is degraded by the steel in reinforced concrete pavement.
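A rough sketch of the dual-loop calculation described above follows: speed is estimated from the actuation times of two loops a known distance apart, an approximate vehicle length is derived from how long one loop stays occupied, and congestion is flagged when speed falls below a threshold. The loop spacing, timestamps, loop length, and threshold are hypothetical illustrations, not parameters of any particular installation.

```python
# Hypothetical dual-loop speed and length estimate; spacing, times,
# and thresholds are illustrative only.

LOOP_SPACING_M = 4.5          # distance between the two loops (assumed)
CONGESTION_SPEED_MS = 8.0     # ~29 km/h; below this we flag congestion

def speed_from_dual_loop(t_upstream, t_downstream):
    """Speed (m/s) from the arrival times (s) at the upstream and downstream loops."""
    dt = t_downstream - t_upstream
    if dt <= 0:
        raise ValueError("downstream actuation must follow upstream actuation")
    return LOOP_SPACING_M / dt

def length_from_occupancy(speed_ms, occupancy_s, loop_length_m=1.8):
    """Crude vehicle-length estimate from how long a single loop stays occupied."""
    return speed_ms * occupancy_s - loop_length_m

if __name__ == "__main__":
    v = speed_from_dual_loop(t_upstream=10.00, t_downstream=10.25)   # 18 m/s
    print("speed (m/s):", v)
    print("approx length (m):", round(length_from_occupancy(v, occupancy_s=0.35), 1))
    print("congested:", v < CONGESTION_SPEED_MS)
```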
Table 5. Selected ITS Technologies for the Driver/Operator Dimension (information functions: collection, storage/processing, distribution, and utilization)

Driver monitor; Driver-initiated distress signal; E911; Immigration information; Driver override; Voice and sound display; Head-up display; Reconfigurable dashboard; Human interfaces; State/provincial border crossings; International border crossings; Emergency services
These problems can be overcome by the more expensive ultrasonic, microwave, or infrared traffic sensors installed on overhead gantries. Such detectors can accurately measure the height and other dimensions of the passing vehicle, thus providing more reliable information for vehicle classification. While all of these traffic detectors can provide traffic parameters that are normally sufficient for traffic management, they provide only the symptoms but not the nature of traffic congestion. Furthermore, these traffic parameters do not provide much information to allow the important and necessary human comprehension and assessment of complicated situations, such as during a traffic accident, when the traffic center operator may need to call upon fire, police, or medical services for coordinated actions. There is nothing better than live video images to help the traffic center operator monitor complicated traffic situations and make appropriate decisions. Visual images from closed circuit television (CCTV) are therefore obtained by the traffic management center to complement the data acquired from traffic detectors. As the cost of CCTV is higher than that of traffic sensors, the video cameras are usually installed at critical junctions and curves on the roadways, and at a proper height to provide a rather broad coverage of the roadway. The camera can be controlled remotely from the traffic management center in several degrees of freedom—pan, tilt, and zoom (PTZ), in addition to focus and iris controls—so that the camera can be manipulated to focus on a particular segment of the roadway. Thus, with appropriate installation, CCTV cameras can be spaced about 2 km apart and still be able to provide full surveillance of the roadway. The video surveillance technology must be versatile enough to provide video images in bright light (day), flat light (overcast/low contrast), adverse weather (rain, snow), and low light (night) conditions. Color images tend to provide maximum resolution for daylight conditions, while black-and-white images provide the best contrast for low light conditions. Thus, a combination of color and black-and-white video cameras may be chosen, color for bright light conditions and black-and-white for low light/night viewing. An automatic camera switch could then be used to monitor ambient light conditions and switch over to the appropriate camera. Image processing through machine vision is one of the latest technologies to be applied to traffic detection. Thus, images acquired by CCTV cameras can be processed to obtain the traffic parameters mentioned previously—vehicle presence, speed, lane occupancy, lane flow rate, etc. Multiple detection zones can be defined within the field of view of the CCTV camera, thus providing multiple lane coverage by a single camera. Multiple cameras can be connected to one processor unit providing wide area coverage, as well as alleviating some of the problems caused by shadows, occlusion, and direct sunlight shining on some of the cameras. Although current machine vision systems require heavier up-front investment, lower cost systems are emerging and have become more cost effective than traditional traffic sensors in a number of situations, especially where multiple zones need to be covered. Even with a combination of traffic detectors and video traffic surveillance, there are additional inputs that are useful for traffic management. First, there is relevant information from the road maintenance authority, the police department, and the weather bureau. Additional information can come from human observers purposely sent out on helicopters or patrol vehicles, or travelers reporting through telephones or mobile
communications (cellular phones and citizen band radios). These inputs are particularly useful for information related to road and weather as well as traffic conditions, and from locations that are not covered by any traffic surveillance devices. Vehicles which are equipped to receive ITS services can also send both traffic and road information automatically through mobile communications to the traffic management center. Such vehicles are known as vehicle probes or floating vehicles. Traffic analysis and simulation have indicated that statistically reliable traffic parameters can be obtained for traffic management purposes if 10% or more of the vehicles in an adequate traffic flow on a road segment can serve as traffic probes. In addition to traffic information, road conditions such as icy pavements can be sensed automatically by temperature and humidity measurements or by vehicle probes through such measurements as tire slippage. Anticipatory information such as road closures and major sports events can be obtained directly from pertinent authorities. With the many ways through which traffic information is obtained simultaneously at the traffic or transportation management center, there is a need to process all the data, verify their accuracy, reconcile conflicting information, and combine them into a consistent set of traffic data before they are distributed or used for traffic control purposes. This process is known as traffic data fusion. Within the traffic management center, traffic information is usually conveyed to the operators on a large display board, supplemented by multiple CCTV monitors that can be switched to any camera in the field. The photograph in Fig. 1 shows the array of multiple CCTV monitors of a traffic management center. Color coding can be used on the traffic display board to indicate the degree of congestion or occurrence of incidents. Traffic parameters obtained from traffic sensors can be transmitted through wireline communications or relatively low bandwidth mobile communication systems (e.g., packet radio). In contrast, video images contain many bytes of information and therefore require a broad bandwidth communication channel (e.g., optical fibers) for real-time (live video) transmission. Thus, distributed data processing is usually applied to convert video images from CCTV cameras to traffic parameters locally before the information is transmitted to the traffic management center for data fusion. Alternatively, the traffic parameters obtained locally may be used directly for local control (e.g., for ramp metering) or local display (e.g., for vehicle speed). However, for the purpose of providing visual images to the traffic center operator, video distribution at operational resolutions and frame rates, even with data compression, still requires relatively wide bandwidths for each video distribution channel. In contrast, data transmission in the opposite direction, from the traffic center to the CCTV cameras for PTZ controls, requires only very low bandwidths and presents no particular communications problems. Operators at the traffic management center also maintain voice communications with patrols and operators in other centers, which is important during emergency situations as timely, accurate, and interactive information acquisition is required for coordinated rescue operations.
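The traffic data fusion step described above can take many forms; one simple possibility, shown in the sketch below, is an inverse-variance weighted average of speed estimates for the same road segment coming from loop detectors, probe vehicles, and video image processing. The source names, values, and variances are hypothetical, and operational centers apply considerably more elaborate consistency checks than this.

```python
# Toy traffic data fusion: combine speed estimates (km/h) for one road
# segment from several sources using inverse-variance weighting.
# All numbers are hypothetical.

estimates = {
    "loop_detectors": (52.0, 4.0),   # (speed estimate, variance)
    "probe_vehicles": (47.0, 9.0),
    "video_imaging":  (50.0, 6.0),
}

def fuse(estimates):
    """Inverse-variance weighted mean of the per-source estimates."""
    weights = {src: 1.0 / var for src, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(weights[src] * val for src, (val, _) in estimates.items()) / total

if __name__ == "__main__":
    print("fused segment speed (km/h):", round(fuse(estimates), 1))
```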
Information picked up by traffic sensors need not be transmitted to the traffic management center to become useful. This is the case with adaptive traffic signal controls, which can create green waves of traffic signals to let a group of
Figure 1. A typical traffic management center. The most visible equipment includes the array of CCTV traffic monitors and the computer consoles for the human operators. (Source: Federal Highway Administration)
vehicles on a major arterial pass through intersections by sensing the presence of vehicles upstream from the intersections (e.g., SCOOT (12)) as well as at the intersections (e.g., SCATS (13)). In general, traffic signals are controlled by the length of their split (relative duration between green and red), cycle (duration between the beginnings of green lights), and offset (duration between the beginning of the green light at one intersection and that at the following intersection). By sensing traffic parameters at both the arterial and the side streets within an area of the street network, the information may be fed to a local computer in the area. The algorithm in the computer, based on a traffic optimization model, is then used to control the split, cycle, and offset of the traffic lights in the area. If all the relevant traffic information is brought to the traffic management center, traffic optimization through a whole region is potentially feasible, even though traffic prediction models and rigorous algorithms for traffic optimization have been a continuing subject of research and development.
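To make the split/cycle/offset terminology concrete, a classic back-of-the-envelope calculation sets the offset of each downstream signal to the travel time of a platoon moving at the desired progression speed, modulo the common cycle. The sketch below does this for a short, hypothetical one-way arterial; the link lengths, progression speed, and cycle time are made up for illustration and do not come from SCOOT, SCATS, or any other deployed system.

```python
# Hypothetical green-wave offset calculation for a one-way arterial.
# Offsets are expressed modulo the common signal cycle.

CYCLE_S = 90.0                            # common cycle time (s), assumed
PROGRESSION_SPEED_MS = 13.9               # ~50 km/h platoon speed, assumed
LINK_LENGTHS_M = [400.0, 350.0, 500.0]    # distances between successive signals

def green_wave_offsets(link_lengths, speed, cycle):
    """Offset (s) of each signal relative to the first, modulo the cycle."""
    offsets, elapsed = [0.0], 0.0
    for length in link_lengths:
        elapsed += length / speed          # travel time to the next signal
        offsets.append(elapsed % cycle)
    return offsets

if __name__ == "__main__":
    offs = green_wave_offsets(LINK_LENGTHS_M, PROGRESSION_SPEED_MS, CYCLE_S)
    for i, off in enumerate(offs):
        print(f"signal {i}: offset {off:.1f} s")
```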
Another example of using traffic information either locally or regionally is the control of ramp meters, which control the rate of vehicles flowing into expressways through varying durations of red lights at the on ramps. The idea behind ramp metering is to control vehicle density on the expressway. Traffic theory, which has been verified empirically, predicts that flow rate (measured in numbers of vehicles per unit time) increases with vehicle density up to a threshold value beyond which the flow rate decreases as vehicles move in a stop-and-go mode, wasting travel time and fuel, and increasing pollution and frustration. Vehicle density measured by loop detectors near the on ramps can be used to control the red duration of the local ramp meters to keep vehicle density on the expressway below the threshold. A more advanced ramp metering system would send upstream vehicle density information to the traffic management center, which can then control the downstream ramp meters in an anticipatory mode on the basis of a computer model. Communications for traffic information and ramp meter control can be accomplished through wirelines or narrow band wireless channels since the required bandwidth is rather limited.
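A minimal version of the local ramp-metering logic just described might simply map the measured mainline density to a red-light duration, lengthening the red phase as density approaches or exceeds the critical value at which flow peaks. The critical density, metering bounds, and gain used below are hypothetical assumptions, not values from any operational controller.

```python
# Toy local ramp-metering rule: lengthen the on-ramp red phase as the
# measured mainline density approaches/exceeds a critical value.
# Critical density, bounds, and gain are illustrative assumptions.

CRITICAL_DENSITY = 45.0    # vehicles per km per lane (assumed)
MIN_RED_S, MAX_RED_S = 2.0, 15.0
GAIN_S_PER_VEH = 0.8       # extra seconds of red per veh/km above critical

def red_duration(measured_density):
    """Red time (s) for the next metering cycle given mainline density."""
    excess = measured_density - CRITICAL_DENSITY
    red = MIN_RED_S + GAIN_S_PER_VEH * max(0.0, excess)
    return min(MAX_RED_S, red)

if __name__ == "__main__":
    for density in (30.0, 45.0, 55.0, 70.0):
        print(f"density {density:4.0f} veh/km/lane -> red {red_duration(density):4.1f} s")
```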
DISTRIBUTION OF TRAFFIC AND RELATED INFORMATION

Traffic and other relevant information (road and weather conditions, parking availability, etc.) may be distributed by public authorities in order to improve transportation efficiency, safety, and environmental quality. Similar information may be distributed by information service providers in the private sector who collect revenues through advertisement or charges to the end user. In such cases, traffic-relevant information distribution services are sometimes bundled with other information services (e.g., paging service) for business reasons.
Public–private partnerships have also been formed in recent years in which the public and private partners share the tasks of traffic information collection and distribution, including the possible bundling with other services. There are two broad categories of technical means for distributing traffic and other relevant information: fixed terminals and mobile terminals. Fixed terminals include regular telephones, radios, television, desktop computers, fax machines, kiosks, and changeable message signs. Mobile terminals include car radios, special mobile radios, cellular telephones, laptop computers, pagers, and hand-held digital devices. The most common way for the general public to receive traffic information is through their televisions at home and their radios, both at home and in the vehicle. In the United States, there are commercial radio and television stations that broadcast traffic information provided by the traffic management center or by information service providers who collect traffic information with their own helicopter and car patrols and collect their revenues through advertisement. Many modern traffic management centers provide special booths for radio and TV station staff, who can look at the same traffic display board as the center operators and make their live broadcasts accordingly (although this function is performed off-site mostly by exchanging data). Traffic information is usually broadcast within certain time slots during the day as such information competes with other programs for air time. One way to alleviate this conflict is to use teletext to transmit brief traffic reports superimposed onto the television signal, utilizing the narrow time slots between the transmission of consecutive TV frames. While broadcast traffic information is free, the traveler cannot get traffic information of specific interest without much delay. On-demand traffic information services have been made available, sometimes at a cost, through interactive telecommunications. For example, telephone call-in may be used with options to specify the location of interest through push buttons or through conversation with a human operator, who can fax a hard copy of the information upon request if the facilities are available. Similarly, one can get more specific traffic information through interactive cable television. Traffic maps of cities around the world (color coded to indicate congestion and incidents) have become available through computer access to the Internet, and the user can focus on a segment of the road network to get detailed information. Kiosks have been installed in public places, such as bus stations, for travelers to get interactive traffic information. The same kiosks are often sources of other transportation information, such as transit fares, routing, and schedules, sometimes with dynamic information about expected departure and arrival times and delays, similar to the display monitors at the airport for air travelers. In the case of the kiosks, yellow page information such as lodging and food could also be available, sometimes with equipment arrangements on or near the kiosk to make potential reservations and payments convenient. Changeable message signs (CMS), also known as variable message signs (VMS), are another means for traffic and road information to be distributed and utilized. Although these signs are fixed on the highway, the messages on them are intended for travelers on the move.
These are road signs with messages that can be changed locally, such as from traffic and
road sensors nearby to warn about hazardous conditions ahead, or from parking garages to show the number of available spaces. However, most often the messages are changed remotely from the traffic management center and their displays are monitored by the center to assure accuracy, which therefore requires two-way narrow-band wireline or wireless communication links. Messages are made up from a mosaic of mechanical plates or electrical illuminators such as light emitting diodes. The latter is more flexible than the former for displaying graphics and color-coded messages. Although any arbitrary message can be composed by the traffic center operator to be shown on the CMS, the common practice is to show only a limited number of messages, normally predetermined for a particular traffic situation. This is to ensure that both appropriate and comprehensible messages are displayed under urgent situations. The most common mobile terminals to receive traffic and other relevant information are those in automobiles. For years motorists have relied on car radios (both AM and FM) to receive traffic-relevant broadcasts. However, the broadcast information covers a large area and often has little relevance to the route taken by the motorist. In order to provide traffic information at the time and location where the motorist needs it, highway advisory radio (HAR) is installed along road segments for wireless transmission limited to the local area (i.e., localcast), as is often done along highways surrounding airports to advise motorists about parking situations. Typically HAR is a low power, under 10-W, standard AM broadcast band transmitter with a planned reception range of 2–3 km. In the case of safety related information, it is important to alert motorists who may not have turned on the radio or may be engaged in entertainment listening. There are prototypes of special radio receivers, known as Automatic HAR (AHAR), that can turn on and tune to the localcast HAR program automatically. In order to broadcast traffic information on a more frequent or continuing basis, FM subcarriers can be used to multiplex relatively low-bit-rate (about 1,000 bits per second) traffic data for text display on car radios, analogous to brief traffic reports via teletext on televisions. In Europe, radio data systems have been developed for such purposes as station identification and program type indication. This is accomplished by providing coded messages on a subcarrier of 57 kHz (which is the third harmonic of the standardized pilot tone for stereo FM broadcasting). The Europeans have agreed to use some of the coded messages for transmitting traffic information. The codes transmitted through this radio data system traffic message channel (RDS/TMC) can be converted into any language understandable to the motorist for display. Similar systems have been developed in Japan and in the United States (under the name of radio broadcast data system, or RBDS) with higher bit rates than the European system, by taking advantage of the wider spacing between the FM stations in those countries and by using more sophisticated modulation techniques (14).
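To make the idea of coded traffic messages concrete, the sketch below converts a toy event code and location code into display text in the motorist's preferred language. The code tables, codes, and message layout are invented purely for illustration; they do not reproduce the actual RDS/TMC coding standard, which defines its own code tables and bit-level format.

```python
# Toy decoder for coded traffic messages in the spirit of RDS/TMC.
# The event/location tables below are invented examples only.

EVENT_TABLE = {
    101: {"en": "stationary traffic", "de": "stehender Verkehr"},
    205: {"en": "roadworks",          "de": "Baustelle"},
}
LOCATION_TABLE = {
    3001: {"en": "Ring Road, exit 12", "de": "Ringstrasse, Ausfahrt 12"},
}

def decode(event_code, location_code, language="en"):
    """Convert numeric codes into a short text message for in-vehicle display."""
    event = EVENT_TABLE.get(event_code, {}).get(language, "unknown event")
    place = LOCATION_TABLE.get(location_code, {}).get(language, "unknown location")
    return f"{event}: {place}"

if __name__ == "__main__":
    print(decode(101, 3001, "en"))
    print(decode(101, 3001, "de"))
```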
With two-way interactive communications, cellular phones and personal communica-
tion systems (PCS) can be used to query traffic situations in specific locations and yellow page information, and can be used to make reservations and payments such as for parking and lodging. These cell-based technologies have their own communication infrastructure that can provide roaming capabilities so that relevant traffic and other types of information can be multicast to specific individuals with special interests no matter where they happen to be within the coverage area. Wireless digital information communications have been used to provide ITS functions. For years, mobile digital terminals (MDT) have been used on police cars, trucks, and other special vehicles for data communications, with the advantages of information precision, data storage for record keeping and asynchronous communications, graphical image transmission capabilities, and convenient interfacing with computers. In-vehicle fax machines have been demonstrated also for similar purposes. Personal communication systems (PCS) mentioned previously are all digital. Data communications via analog cellular systems have been made feasible by cellular digital packet data (CDPD) and a number of proprietary packet-switched wireless data communication techniques. New digital cellular services based on TDMA (time division multiple access), CDMA (code division multiple access), as well as the European GSM (global system for mobile communications) standards have already been introduced to the market. With the ever-expanding development of technologies, widespread wireless data communications can be expected to become more capable and less costly in all urban and suburban areas. The advent of low earth orbit (LEO) satellite communication systems (which require much less power than geostationary communication satellites) will also reduce communications costs in rural areas. Portable digital terminals are becoming widely used for a number of purposes, and can be used by travelers on the ground or in the vehicle. Pagers are becoming more sophisticated so that traffic information can be transmitted in both text and graphical forms on these terminals. A number of hand-held digital devices, sometimes known as personal digital assistants (PDA), have emerged on the market for a variety of functions including traffic information. Pagers and PDAs are particularly useful to pedestrians seeking traffic and related information. Laptop and palmtop computers are being equipped with modems that can be used to access Internet and information service providers for many purposes, including interactive communications related to traffic information and decision support. Another important way to communicate traffic and other relevant information to the motorist is through dedicated short range communication (DSRC), which links road infrastructure to equipped vehicles in its close proximity, as will be discussed in a later section.
VEHICLE LOCATION RELATED FUNCTIONS

Information about vehicle location is important for ITS functions. Two key questions are: ‘‘Where am I?’’ and ‘‘How far am I from other vehicles and obstacles?’’ The answer to the first question is the vehicle’s absolute location which is needed for navigation, fleet management, and determination of what specific information becomes relevant to the motorist. These functions are discussed in this section. The answer to the sec-
ond question is the vehicle’s relative location (with respect to other vehicles, road edges, and obstacles) which is needed for vehicle control and collision avoidance, and which will be discussed in a later section on Vehicle Control Related Functions. One of the most common automatic vehicle location (AVL) systems for determining absolute vehicle location is Global Positioning System (GPS), a system developed and maintained by the US Department of Defense (USDOD). The satellite-based radio navigation system, fully deployed in the early 1990s, consists of a constellation of 24 satellites orbiting 12,600 miles above the earth. The receiver’s three-dimensional coordinates (longitude, latitude, and altitude) can be determined, based on the time of arrival (TOA) principle, when 4 or more satellites are in line of sight from the receiver. The USDOD allows and guarantees the use of Standard Positioning (SPS), which is deliberately degraded from Precise Positioning (PPS) for military use. SPS has an accuracy of 60–100 m for civilian applications, including all modes of transportation around the world. However, by installing a transmitter at a known location on the ground to provide corrections, one can use differential GPS (DGPS) to improve the performance of the degraded GPS and get vehicle location accuracy in the order of 30 m. Since the normal functioning of GPS requires the observation of at least four satellites, vehicle location needs complementary systems that would still work while the vehicle is under a bridge, under dense tree foliage, or in an urban canyon surrounded by tall buildings. One of the commonly used systems for this purpose is dead reckoning, which uses gyroscope or related inertial guidance principles to deduce vehicle location in reference to a known starting point. However, dead reckoning cannot function alone since the cumulative error needs to be corrected from time to time, preferably done automatically. This correction can be done through the radio signal from a beacon at a known location passed by the vehicle as well as by GPS when enough satellites are in sight. Such a location indicating beacon is known as a signpost. Another approach to correct cumulative errors in dead reckoning is map matching, which takes advantage of the fact that vehicle location is usually restricted to the road network except during temporary deviations when the vehicle is in a parking lot or on a ferry. As the name implies, map matching would require the presence of a digital map on the vehicle and the use of heuristic algorithms to deduce where the vehicle should be on the map. There are other methods for determining vehicle location, using angle of arrival (AOA) or time difference of arrival (TDOA) principles. For example, the location of a vehicle with its car phone turned on can be determined on the basis of its direction from two or more cell sites of known locations. Such methods can be important for emergency calls from mobile phones (extended 911 or E911 service in the United States) with automatic indication to the rescue team where the caller or vehicle needing help is located, especially if the vehicle is not equipped with GPS. Note that either the vehicle or the dispatch center for a fleet of vehicles can be the host for vehicle location functions, depending on the specific ITS function to be performed. In either case, the output of the system will need a map to convey the vehicle location(s) to the user. 
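To make the time-of-arrival (TOA) principle concrete, the following minimal Python sketch solves for receiver position and clock bias by iterative least squares. The satellite coordinates, receiver location, and clock bias are synthetic values chosen only for illustration, not real ephemeris data, and the pseudoranges are generated without noise.

    import numpy as np

    # Synthetic satellite positions in an Earth-centered frame (meters) and
    # pseudoranges generated from an assumed true receiver position.
    sats = np.array([
        [15600e3,  7540e3, 20140e3],
        [18760e3,  2750e3, 18610e3],
        [17610e3, 14630e3, 13480e3],
        [19170e3,   610e3, 18390e3],
    ])
    true_pos = np.array([-2430e3, 4700e3, 3540e3])   # hypothetical receiver location
    clock_bias = 85.0e3                              # receiver clock error, as a distance
    pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + clock_bias

    def solve_gps(sats, pseudoranges, iterations=10):
        """Iterative least-squares TOA solution for [x, y, z, clock bias]."""
        est = np.zeros(4)                            # start at Earth's center, zero bias
        for _ in range(iterations):
            ranges = np.linalg.norm(sats - est[:3], axis=1)
            residual = pseudoranges - (ranges + est[3])
            # Jacobian of the pseudorange model: unit vectors from satellite toward
            # the receiver for the position terms, 1 for the clock-bias term.
            J = np.hstack([(est[:3] - sats) / ranges[:, None],
                           np.ones((len(sats), 1))])
            est += np.linalg.lstsq(J, residual, rcond=None)[0]
        return est

    print(solve_gps(sats, pseudoranges))   # should approach true_pos and clock_bias

A handful of iterations is typically enough for this geometry. A real receiver adds weighting for measurement noise and, for DGPS, applies the range corrections broadcast from the reference station before solving.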
Digital maps are a prerequisite for any advanced traveler information system, including vehicle location and naviga-
tion. There are two types of digital maps: raster-encoded maps and vector-encoded maps. The former are basically video images of paper maps and are used mainly for display purposes such as for vehicle tracking in fleet management. Vector-encoded maps require less memory, are intrinsically relational in nature, and are thus easier to manipulate—such as zooming, suppression of details, and expansion of attributes. They are also more expensive to make as the process is labor-intensive. Generally, the making of vector-encoded maps includes three steps. First, the raw data need to be collected from paper maps, aerial photographs, and other information sources. Then, the information needs to be digitized, with the aid of software. Finally, the digital maps need verification and updating from time to time. With advanced storage technology, the digital map showing all the major roads in the United States can be stored in a single compact disk. For the detailed information needed for route guidance, the digital map of a single metropolitan area may be put in a PCMCIA card (also known simply as a PC card). Among the most common ITS functions accomplished with digital maps are navigation and route guidance. For navigation, the vehicle location determined from GPS and other complementary means would be displayed as icons superimposed on the digital map. For route guidance purposes, the digital map would need to include such attributes of road segments as distance, travel time according to speed limits and time of the day, turn restrictions, toll charges, and so forth. Given any origin and destination, software based on dynamic programming principles can be used to compute the optimum route. Various constraints or modified objective functions may be applied to the optimization problem: no expressways on the route, least toll charges, most scenic route for tourists, etc. Dijkstra’s algorithm for shortest path (or cost) computation has been the most common basic algorithm for route guidance. However, in order to save computation time and memory space, the basic algorithm may be modified to include heuristic search strategies (e.g., the A* algorithm). The heuristic approach is particularly important in dynamic route guidance. Unlike static route guidance which is based on historical traffic data, dynamic route guidance would provide timely advice to the motorist that takes into account the real-time traffic situation (congestion, incidents, road closure, etc.) whether the motorist is still at the origin (pretrip planning) or is already en-route (en-route planning). In the latter case, the allowable computation time for the optimum route could be quite limited. Dynamic route guidance takes the current location of the vehicle as the origin and computes the optimum route to any given destination repetitively. The computation could be done on the vehicle in vehicle-based systems, or at the traffic center or at the information service provider in center-based systems. Choice between these options depends on the tradeoff between computation and communication costs, and other considerations such as the need to update digital maps—there are some map changes almost every day in a typical metropolitan area. In either case, travel time experienced by vehicles equipped with route guidance systems would be collected as probe data to complement other sources of link times.
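The following minimal Python sketch shows Dijkstra's algorithm, the basic route-guidance computation named above, on a toy road network whose node names and link travel times are made up for the example; a real system would take its link times from the digital map attributes and probe data just described.

    import heapq

    # Hypothetical road network: node -> {neighbor: link travel time in minutes}.
    road_network = {
        "A": {"B": 4.0, "C": 2.0},
        "B": {"A": 4.0, "D": 5.0},
        "C": {"A": 2.0, "B": 1.0, "D": 8.0},
        "D": {},
    }

    def shortest_route(graph, origin, destination):
        """Dijkstra's algorithm over link travel times."""
        best = {origin: 0.0}
        previous = {}
        queue = [(0.0, origin)]
        while queue:
            cost, node = heapq.heappop(queue)
            if node == destination:
                break
            if cost > best.get(node, float("inf")):
                continue                      # stale queue entry
            for neighbor, link_time in graph[node].items():
                new_cost = cost + link_time
                if new_cost < best.get(neighbor, float("inf")):
                    best[neighbor] = new_cost
                    previous[neighbor] = node
                    heapq.heappush(queue, (new_cost, neighbor))
        # Reconstruct the route by walking back from the destination.
        route, node = [destination], destination
        while node != origin:
            node = previous[node]
            route.append(node)
        return list(reversed(route)), best[destination]

    print(shortest_route(road_network, "C", "D"))   # (['C', 'B', 'D'], 6.0)

Adding an admissible lower bound on the remaining cost (for example, straight-line distance divided by the maximum speed) to the priority key turns this into the A* heuristic search mentioned above, which is what makes repeated en-route recomputation affordable.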
The communications between the center and equipped vehicles can be done through general-purpose wide area wireless systems (e.g., CDPD) or through dedicated short range communications (i.e., beacons). The latter provides communi-
cations only when the vehicle is within the proximity of a beacon and requires investment in dedicated infrastructure that might not be cost-effective for dynamic route guidance unless the DSRC is used also for a number of other ITS services.

LINKAGE BETWEEN VEHICLE AND ROAD THROUGH BEACONS

Most ITS services require the functioning of the vehicle and the road infrastructure as an integrated system. Since beacons are installed at fixed locations on the road infrastructure, they not only provide mobile communications between the vehicle and the road infrastructure, but also provide information about the vehicle location. Thus, the information transmitted by dedicated short range communications (DSRC) through beacons can be location relevant or location selective. The ITS services that are prime users of DSRC are shown in the following list:

• Electronic toll and traffic management (ETTM)
• Commercial vehicle operations (CVO)
• Parking management
• Signal preemption
• In-vehicle signing
• In-vehicle traveler information
• Individual route guidance (in selected systems)
The earliest DSRC investment in the United States has been for electronic toll collection (ETC). Since the beacons for ETC can be installed along the road infrastructure as well as at the toll plazas, the travel time of individual vehicles (or vehicle probes) can be obtained for traffic management purposes as well. Thus the broader term electronic toll and traffic management (ETTM) is used to include both services. An ETC system, shown in Fig. 2, consists of a vehicle with an on-board unit, a two-way microwave link, and roadside (or tollgate) equipment. The in-vehicle equipment is a transponder, which is usually a tag, an integrated circuit card with a card holder, or a combination of the two. It stores the information needed for toll transactions, such as vehicle type, account identification, and balance. The roadside equipment consists of (a) transceiver (transmitter and receiver), also known as reader, the main functions of which are to verify the functionality of the in-vehicle equipment and to conduct the transaction; (b) a lane controller, which monitors activities occurring in a toll lane; and (c) a primary processing computer system, used to access account information and process the transaction requests (15). There are several taxonomies to classify ETC systems. The two basic types of DSRC technologies used in current ETC applications are active tags and backscatter tags. Active tags contain a battery or external power source to power the internal circuits and transmissions. They contain internal electronics capable of communicating with the reader through its own transmitter. Active tags operate over a larger range, generally 20 to 50 m, than backscatter tags. Backscatter tags send information back to the reader by changing or modulating the amount of RF energy reflected back to the reader antenna from a continuous-wave RF signal beamed from a reader. The RF energy is either allowed to continue traveling
past it or is intercepted and ‘‘scattered’’ according to the tag’s antenna pattern back to the reader antenna. This operation of switching the scattering on and off can be done with very little power so that it can be powered with an internally integrated battery or with power derived by rectifying the RF signal intercepted from the transceiver. At present, the active and backscatter tags are not compatible or interoperable. Efforts are being made to come up with a standard that can accommodate both technologies. Meanwhile, super readers (that can read both kinds of tags) and super tags (that are interoperable with both kinds of readers) may be installed during the transition period toward the ultimate standard DSRC system (16).

Figure 2. Schematic diagram of an electronic toll collection (ETC) system. The heart of the technology is dedicated short range communications (DSRC) between the readers on the infrastructure (connected to the antenna in the figure) and the tags on the vehicles (connected to the on-board equipment in the figure). (Original source: Highway Industry Development Organization in Japan. Modified by Post Buckley Schuh & Jernigan, Inc.) [Labeled elements include the toll booth antennas, monitoring camera, transit and vehicle type detectors, guidance signs, entrance and exit, existing and non-stop lanes, and the on-board equipment with IC chip and IC card.]

Another way to classify DSRC tags is according to their levels of technical communications capabilities. Type I tags are read-only tags. When interrogated, these tags transmit their unique identification code to the roadside unit. They are factory programmed with their identification code and other data. For Type I tags, a tag database must be kept on a centralized computer to process and record activity. Type II tags are read–write tags. They have the capability to store and transmit other information. They are commonly called transponders because they can perform two-way communication with the roadside unit. Data are read from, modified, then sent back to the tag. These tags can allow for the creation of decentralized processing systems. Type II+ tags provide feedback to the driver using lights or buzzers to convey information. Type II++ tags use LCDs and buzzers to provide feedback to drivers. Type III tags are also read–write tags, but feature an external interface that is used for transferring information to other on-board devices, such as computers or driver information displays. These tags are particularly useful for fleet management applications, where drivers are required to track and receive large amounts of data from a variety of sources. Type IV tags are also read–write tags with many of the same features as Type III tags, but these tags have an integrated smart card reader, rather than simply an interface to an on-board computer or smart card reader. Smart cards are plastic credit cards that include a small microelectronic circuit that allows memory and simple logic functions to be performed, such as requiring correct personal identification numbers to be entered into the reader for financial transactions. Still another way to classify ETC systems is according to the way toll payment is calculated. In an open system, each time a vehicle passes a toll lane, the roadside unit instructs the transceiver to debit the vehicle’s account. In a closed system, a roadside transceiver reads the memory content of the in-vehicle equipment unit at the point of entry. The computer then verifies the vehicle identification and account number, and whether sufficient funds are available in the account. The transceiver then writes a date, time, location, and lane number stamp in the appropriate fields of the unit’s memory. When the vehicle exits the system, the transceiver reads the on-board unit’s memory and the computer calculates the toll and debits the vehicle’s account. This information is then written back to the unit’s memory. Finally, ETC systems may be classified according to the configuration of the toll collection zone. The single-lane ETC systems operate only if the equipped vehicles are allowed to pass through specific lanes, such as in situations where toll plazas have been installed for manual toll collection previously or for mixed (manual and electronic) toll collection. In such situations, vehicles usually slow down from mainline speeds and barriers may be installed to stop those vehicles without tags or without sufficient funds in their accounts. The multilane ETC systems operate in situations where vehicles may crisscross between lanes while traveling at mainline speeds. The technology and process for accurate toll operation and for catching violators are more complicated in such situations. These situations arise where the roadway was designed originally without toll plazas, such as in the case of electronic
toll collection on former freeways. However, ETC technologies have progressed to the state that reliable operations (accurate toll collection according to vehicle types and catching violators) can now be achieved in multilane configurations with a variety of vehicles (including motorcycles) traveling at mainline speed (over 100 miles per hour). Of course, the complexity of ETC systems may add to their costs. ETC systems are being applied around the world, including many developing countries where toll collection has become a necessity for financing road construction. The number of ETC tags issued worldwide by the end of the 20th century has been estimated to total up to ten million. All major interurban expressways in a number of industrialized countries, including Japan and Southern European countries, have traditionally been toll roads. Some Northern European countries are in the process of converting their free expressways to toll roads. Even in the United States, the first private toll road was constructed after over a century in Southern California as a way to relieve congestion. In this case, the principle of road pricing (also known as congestion pricing) has been applied to vary the toll rate according to the time of day, and ETC has been found as a technical means to facilitate its implementation. From an institutional perspective, it was not surprising that ETC has turned out to be one of the earliest widespread ITS applications since it has helped all the major stakeholders with vested interests. The toll agencies have reduced costs through ETC automation, the drivers of vehicles equipped with ETC save time as they do not need to stop to pay tolls manually, and even the drivers of nonequipped vehicles save time since the queues at the toll plaza for manual toll collection have become shorter without the equipped vehicles. The second most important DSRC application, at least in the United States, is for commercial vehicle operations (CVO), including weigh-in-motion and (interstate and international) border crossing. The objective and major benefits are in time savings by substantially reducing the need for trucks to stop for inspection. It has been estimated that every minute saved by a large commercial truck is worth $1 to the trucking company, as well as reduction of stress and frustration on the part of the driver. A number of systems for weigh-in-motion (WIM) systems are currently obtainable from manufacturers. They are based on stress and strain measurements as a function of the total weight or axle weight of the vehicle—bending plate, electric capacitance variation, piezo-electric load sensors, etc. Those commercial vehicles which do not violate the weight limit are given a signal through DSRC by the inspector to bypass the weigh station. In the United States, every state has its own regulations regarding licensing, fuel tax computation, and safety requirements for commercial vehicles. This situation has caused inspection delays for trucks traveling between states. The purpose of CVO application to this situation is to reduce all state border inspections to only one stop. Once cleared, the commercial vehicle can then travel nonstop from state to state, with passing signals through DSRC at the state border crossings as the data from the single-stop inspection can be sent ahead to all the downstream states on the route traveled by the truck. In this case, the tag (transponder) on the truck needs to carry only the identities of the vehicle, the owner, and the driver.
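The one-stop clearance idea just described can be sketched in a few lines of Python. The data fields, state codes, and weight figures below are assumptions made only for the example; an actual CVO system would use the agreed transponder message sets and agency databases.

    # Illustrative sketch: after a single inspection, the clearance record is
    # forwarded to downstream states so their roadside readers can signal "bypass".
    cleared_records = {}   # (vehicle_id, owner_id, driver_id) -> set of cleared states

    def register_clearance(vehicle_id, owner_id, driver_id, downstream_states):
        """Called by the inspecting state; forwards clearance along the route ahead."""
        cleared_records[(vehicle_id, owner_id, driver_id)] = set(downstream_states)

    def border_reader(vehicle_id, owner_id, driver_id, state,
                      axle_weight_kg, weight_limit_kg):
        """Roadside DSRC check at a weigh station or state border."""
        if axle_weight_kg > weight_limit_kg:
            return "PULL IN"              # weigh-in-motion reading over the limit
        cleared = cleared_records.get((vehicle_id, owner_id, driver_id), set())
        return "BYPASS" if state in cleared else "PULL IN"

    register_clearance("TRUCK-88", "ACME", "D-123", ["OH", "IN", "IL"])
    print(border_reader("TRUCK-88", "ACME", "D-123", "IN", 7800, 8000))   # BYPASS
    print(border_reader("TRUCK-88", "ACME", "D-123", "MO", 7800, 8000))   # PULL IN

The point of the sketch is that the tag itself only needs to carry the three identities; the clearance data travel ahead of the truck over the fixed network.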
Once the DSRC system is in place, many other kinds of information may be transmitted to the truck driver, including freight management information between the driver and the fleet dispatcher. Vehicle diagnostic data (e.g., defective brakes and excessive emissions) can also be downloaded at the maintenance station through the same DSRC tag on the vehicle. In the case of international border crossing, the situation is much more complicated since customs and immigration information also need to be transmitted to the border in order to reduce inspection delays and transmitted back for record keeping. For example, the cargo on a truck may be inspected before it arrives at the international border. Once cleared, an electronic lock is used to seal the door and the truck can be instructed to bypass further custom inspection at the border after a simple check is made to assure that the electric lock has not been opened. Multiple government agencies from two or more countries will need to agree and coordinate with one another, making the institutional aspects more complicated. Other current DSRC applications include parking management and signal preemption. The operational concept of parking management is quite similar to that for a closed ETC system. The parking agency can positively identify the location and entry time of the vehicle, both at the time of the payment transaction and when the vehicle enters the parking area, ensuring that the driver is correctly billed. The system can assign access to vehicles by specific lots and for various time periods. Some systems can electronically report attempts to use an invalid tag to parking managers, giving the location of the attempted entry and the name and card number of the violator. An anti-pass-back feature can require the smart tag to exit before reentering the lot, making it impossible for one user to pass a tag back to another. DSRC technology can be used to allow for signal preemption for transit and emergency vehicles, as well as transit vehicle data transfer (similar to CVO data transfer). In this case, the transit and emergency vehicles use DSRC technology (either at microwave or infrared frequencies) to communicate with traffic signal control systems to request priority signal treatment (usually through extended green times or reduced red times but no sudden change of signal for safety reasons). DSRC applications to other ITS services have been tested and are expected to be widely deployed. These include in-vehicle signing to bring road sign information (speed limit, cross street names, etc.) for continuous display on vehicle dashboards. Such displays can also be in large characters to help those with eyesight impairment. Portable transmitters may be put on school buses to warn drivers around the corner. Location-relevant yellow-page information would help motorists and travelers as well as commercial interests. These applications have already been implemented by some private toll road agencies (e.g., Cofiroute, a private highway concessionaire in France) to deliver value-added services to their customers. Location-relevant route guidance system has also been deployed in certain countries (e.g., in the Japanese VICS system) where the traffic authorities have both the financial capabilities to install the dedicated infrastructure and the desire to maintain control of the route guidance system. 
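As a concrete illustration of the anti-pass-back rule described above, the short Python sketch below tracks which tags are inside the lot and rejects a second entry before an exit. Tag identifiers and the reporting behavior are hypothetical.

    # Anti-pass-back sketch: a tag that has entered must exit before it can enter again.
    inside_lot = set()

    def gate_event(tag_id, direction):
        """Return True if the gate should open, False if the event is rejected."""
        if direction == "enter":
            if tag_id in inside_lot:
                return False          # pass-back attempt: tag is already inside
            inside_lot.add(tag_id)
            return True
        if direction == "exit":
            if tag_id not in inside_lot:
                return False          # exit without a matching entry
            inside_lot.remove(tag_id)
            return True
        raise ValueError(direction)

    print(gate_event("TAG-7", "enter"))   # True
    print(gate_event("TAG-7", "enter"))   # False -> report the attempted reuse

A rejected entry would be logged with the location, time, and account so that parking managers can follow up, as described in the text.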
In other countries, such as the United States, dynamic route guidance usually leverages the wide-area mobile communication systems in which the telecommunications industry has already invested, rather than relying on any new DSRC infrastructure.
VEHICLE CONTROL RELATED FUNCTIONS Solid-state electronics, which has been applied increasingly for vehicle control since the 1960s, has gone through three generations. Vehicle electronics was first used in open-loop control at the component level such as for engine ignition. Then, it was applied in closed-loop control at the subsystem level such as in anti-lock braking system (ABS), in which the vehicle is prevented from skidding through a number of rapid braking pulses applied automatically on the basis of tire slippage sensing. The third generation was at the system integration level such as for control of the entire power train to optimize economy, performance, or emission. While these vehicle control electronics may not be very visible, most drivers today interact with cruise control which has provided comfort and convenience in long-distance travel. It has been suggested that ITS services for vehicle control and safety call for the fourth generation of vehicle electronics which integrates the vehicle and the roadway into a total system, a basic feature of ITS (17). Concerns about vehicle safety have led to vehicle design and new devices that give the driver and the passengers in the vehicle more protection upon impact. These include seat belts, air bags, and crash-proved bumpers. However, these are passive safety approaches that improve safety only after collision. Vehicle control and safety under ITS emphasize active safety approaches that try to avoid collision. The basic aspects in active safety or crash avoidance are longitudinal control and lateral control of vehicles, and their combinations in various circumstances. Statistical data show that most vehicle accidents involve rear-end collision resulting from erroneous longitudinal vehicle control, and most fatalities result from loss of lateral vehicle control. Both longitudinal and lateral controls can be improved within the individual vehicle autonomously, or with the help of communications between vehicles, or communications between the vehicle and the infrastructure. The most common sensors used for longitudinal control are radar and laser devices that can provide measurements of distance from the vehicle in front, gap closing rate between vehicles, and detection of obstacles on the roadway. In general, laser (Lidar) has limitation of range in the order of 50 m and operates only within line of sight. Radar has more difficulty in distinguishing roadway clutter from desired target, and may get confused by radar signals from vehicles in the oncoming traffic. Sonic and ultrasonic sensors are also used, especially for detecting people and objects in the back of the vehicle as it backs up. These devices generally operate at short distance and low vehicle speed, and have problems with beam displacement by cross wind. In spite of various limitations, sufficient technological progress in both hardware and software has been achieved to make adaptive cruise control (ACC), also known as intelligent cruise control (ICC), reliable enough for market introduction. These systems can automatically reduce vehicle speed, which has been set by the driver through cruise control, to keep safe headway from the vehicle in front and to resume the set speed when the headway is sufficiently long. Speed reduction in ACC can be accomplished through automatic closing of the throttle, gear down shifting, or braking. 
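The following minimal Python sketch illustrates the kind of logic just described: hold the driver-set speed when the lead vehicle is far ahead, otherwise regulate the headway and closing rate. The gains, the 2 s time gap, and the acceleration limits are purely illustrative assumptions, not values from any production controller.

    # Simplified adaptive cruise control (ACC) command, in m/s and m/s^2.
    SAFE_TIME_GAP_S = 2.0       # desired headway, seconds
    GAP_GAIN = 0.4              # how aggressively to close a headway error
    MAX_DECEL = 3.0             # beyond this, the driver should be alerted instead

    def acc_command(set_speed, own_speed, lead_distance, lead_speed):
        """Return a commanded acceleration (negative = brake or coast)."""
        desired_gap = SAFE_TIME_GAP_S * own_speed
        if lead_distance > 2 * desired_gap:
            # Lead vehicle far ahead: simple cruise toward the driver-set speed.
            return max(-MAX_DECEL, min(1.5, 0.3 * (set_speed - own_speed)))
        # Follow mode: regulate the gap error and the closing rate.
        gap_error = lead_distance - desired_gap
        closing_rate = own_speed - lead_speed
        accel = GAP_GAIN * gap_error / max(own_speed, 1.0) - 0.5 * closing_rate
        return max(-MAX_DECEL, min(1.5, accel))

    # Example: closing on a slower vehicle at short headway -> firm braking request.
    print(acc_command(set_speed=30.0, own_speed=30.0,
                      lead_distance=40.0, lead_speed=25.0))

In practice the commanded deceleration is realized through throttle closing, downshifting, or braking, as the text notes, and anything beyond the comfortable limit is handed back to the driver as a warning.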
Driver intervention is still needed under abnormal circumstances, such as driving along sharp curves and detection of large animals crossing the roadway. However, the remaining barrier to widespread
application of ACC is no longer the technology but concerns about legal liability in certain market areas. From the safety perspective, ACC assumes that the speed set by the driver is safe when the vehicle in front is sufficiently far ahead. However, this assumption is not necessarily true even if the set speed is within legal limits under normal circumstances. The real safe speed depends on many other factors, including weather and road conditions, traffic around the vehicle, and the load on the vehicle and the condition of the vehicle itself. External conditions communicated to the vehicle (through both wide area and short-range mobile communications), and internal conditions sensed within the vehicle, can be used to provide real-time advice to the driver on what maximum speed is safe to set. In fact, if the set speed is unsafe, the system can provide a series of steps beginning with warning, followed by deceleration as in ACC if necessary, but with the option for driver override. The most basic need for lateral vehicle control is lane keeping, that is, keeping the vehicle in the middle of the lane. Various approaches to lane keeping have been tested and demonstrated. The most common approach to lane sensing and lane keeping is through video image processing of the white lane edge, and the lateral control strategies must take into account the growing uncertainty in lane edge positions as the video camera looks farther ahead on the road. Low-cost off-the-shelf video cameras have been shown to be sufficient for lane sensing purposes. The use of GPS and digital maps for lane keeping has also been tested, taking advantage of the vehicle location accuracy provided by GPS, especially if differential GPS is available. Neither the video nor the GPS approach to lane keeping requires modifications of the existing road infrastructure. Other types of lane keeping approaches rely on new devices put on the road infrastructure. These include the installation of guide wires along the lane pavement carrying electrical signals and the installation of magnetic nails buried under the lane pavement. The latter not only has the advantage of being completely passive but also provides digital preview information of the road geometry through the deliberate polarity arrangement of multiple magnets along the road path. The lateral positions of the magnets and the preview information are picked up by magnetometers under the vehicles for lane-keeping purposes. From the perspective of vehicle safety, much of the benefit from longitudinal and lateral controls can be realized without full automation. In fact, beginning with reliable sensors, the automotive industry has taken an evolutionary approach: warning first, followed by partial automation, and eventually perhaps full automation. Warnings are provided not just when the vehicle is getting too close to another vehicle ahead or when the vehicle begins to veer off the lane, but also when another vehicle is nearby in the neighboring lane or in the blind spot so that the driver would not attempt an unsafe lane change. Partial automation is provided only to assist the driver, who can, in most cases, override the automatic assistance. For example, a small torque may be applied automatically to the steering wheel to keep the vehicle in a particular lane. However, driver override can be achieved by manually applying a larger torque to avoid an obstacle in front.
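The small-assist-torque-with-override idea can be expressed very compactly; the gains and thresholds in this Python sketch are illustrative assumptions only.

    # Lane-keeping assistance: a small corrective torque proportional to lateral
    # offset, suppressed whenever the driver applies a clearly larger torque.
    ASSIST_GAIN_NM_PER_M = 2.0     # corrective torque per metre of lateral offset
    ASSIST_LIMIT_NM = 3.0          # assistance never exceeds this small torque
    OVERRIDE_THRESHOLD_NM = 4.0    # driver torque above this wins outright

    def lane_keeping_torque(lateral_offset_m, driver_torque_nm):
        """Return assist torque (N*m); positive steers back toward the lane centre."""
        if abs(driver_torque_nm) > OVERRIDE_THRESHOLD_NM:
            return 0.0                               # driver override
        assist = -ASSIST_GAIN_NM_PER_M * lateral_offset_m
        return max(-ASSIST_LIMIT_NM, min(ASSIST_LIMIT_NM, assist))

    print(lane_keeping_torque(0.8, driver_torque_nm=0.5))   # -1.6: nudge back to centre
    print(lane_keeping_torque(0.8, driver_torque_nm=6.0))   #  0.0: driver overrides

The design choice, as in the text, is that the assistance torque is deliberately small enough that the driver can always out-steer it by hand.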
Fully automatic longitudinal and lateral vehicle controls will eventually lead to automated highway systems (AHS), which is defined as hands-off and feet-off driving. Even
though the deployment of AHS may be further away, feasibility demonstrations have been held in several countries, including the one in San Diego, California in early August 1997 (18). Among the different concepts demonstrated are the free-agent scenario and the platooning scenario. Free agents are vehicles equipped with sensors and automatic control mechanisms operating autonomously without any assistance from the infrastructure. Free agents can therefore operate in nonautomated traffic on all existing roadways. Platoons are multiple vehicles traveling in a single-file formation with very short headway (in the order of one or two meters). Vehicles in the platoon are guided by magnetic nails in the pavement and communicate with one another. The short headway implies the potential of increasing highway throughput by several times. Since the very short headway is beyond human capability to maintain, full automation is required and human errors are virtually eliminated. Figure 3 shows a smart car fully equipped for most ITS functions, including AHS. An evolutionary approach to AHS emphasizes near-term applications of some of the AHS technologies without full automation. These applications include collision warnings to individual vehicles, hazard warnings passed down a group of vehicles, computer-assisted merging and overtaking among cooperating vehicles, warnings to snowplow operators about lane departure and vehicles buried under snow banks, and truck convoys in which a single driver will operate a train of trucks coupled to each other electronically.

Figure 3. Selected components of a smart car. This figure shows the essential components for vehicle location and vehicle control related functions. (Source: Kan Chen Incorporated and Post Buckley Schuh & Jernigan, Inc.) [Components labeled in the figure include satellite communications and GPS, cell phone, CCD camera, antennae, ETTM (ETC and car probe), navigation and route guidance, in-vehicle signing (e.g., a speed limit display), radio/AHAR/RDS/TMC, vehicle controls, radar, a magnetometer, and magnets in the roadway.]

HUMAN SIDE OF ITS: TRAVELERS AND OPERATORS

Fully automated driving is only a small part and a long-term goal of ITS. Most ITS services center around the driver/traveler and the traffic operator. Human factors and human interfaces with ITS terminals and devices are therefore an important part of ITS technology. Human-factors research in ITS includes all the domains of human physiology (ergonomics), perception, comprehension, and decision making. Some or all of these domains need to be included in the specifications and
designs of ITS human interfaces, depending on the specific applications. In general, human interfaces with ITS terminals for nondrivers are better understood since they are extensions of traditional computer, communications, and consumer electronics technologies—television, home computer, office telephone, and so forth. Traffic management centers enhanced by ITS technologies include many more displays and communication terminals than previous centers without ITS. In case of multiple serious highway accidents, the challenge in providing the most critical (highest priority) information simply, accurately, and in a timely manner for coordinated operator decisions is not unlike that in a nuclear plant accident. Lessons learned from the latter experience and other similar emergency situations can be helpful to ITS-oriented traffic management center design. Human interfaces with ITS terminals for drivers present new and relatively unique challenges as the reception and digestion of new traffic-relevant information from ITS can distract or add substantially to the driving tasks. Although the situation is not unlike that in the cockpit of an aircraft, the human factors challenge in ITS is more difficult partly because a car driver generally has not gone through the rigorous training and selection process of an airplane pilot, and that the air traffic environment is generally more forgiving in that it allows more time for human decision making. On the infrastructure side, location, size, and brightness of changeable message signs (CMS) must be chosen so that the display can attract drivers’ attention easily in busy traffic and cluttered urban environment. The displayed messages need to be composed for brevity and ease for accurate comprehension since drivers cannot be counted on to read more than two short lines at high speed. Icons have been found to be helpful in conveying meanings of signs quickly (analogous to the shape of stop signs) and color codes have been suggested for CMS in multiple languages where bilingual signs are needed. Inside the vehicle, driver’s use of car phones has already caused public concerns, and manufacturers have offered memory dialing and voice dialing options to ease the situation. The challenge in text display of RDS/TMC is similar to that in the display of CMS. Since voice displays are less distracting than visual displays to the driver, voice displays in the driver’s preferred language have been offered for RDS/ TMC. On the other hand, voice displays could be drowned out by traffic noise. Thus, for safety warnings, both voice and visual displays are frequently used. With the space behind the dashboard of most vehicle models becoming extremely scarce, various ITS driver information has to compete for presentation. A re-configurable dashboard display has been offered as an option. While OEMs generally install their navigation system monitor in the dashboard, navigation system displays purchased after the market are usually retrofitted to the dashboard through a goose-neck connection. Route guidance information itself can be displayed either on a digital map or as arrows showing direction to turn or to go straight ahead at the next intersection. In the latter case, the arrows are usually accompanied by voice displays, and a countdown visual display is used to provide comfortable time for the driver to do lane change and other maneuvering to make safe turns at the intersections. 
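The countdown display mentioned above is essentially a distance-over-speed calculation with a margin for the required lane changes. The sketch below is a minimal Python illustration; the lane-change allowance is an assumed design parameter, not a published guideline.

    # Convert distance to the next turn into a time cue for the driver.
    LANE_CHANGE_ALLOWANCE_S = 8.0     # assumed time budget per lane change

    def turn_countdown_seconds(distance_to_turn_m, speed_m_s):
        """Seconds until the turn at the current speed."""
        return distance_to_turn_m / max(speed_m_s, 0.1)

    def should_prompt_lane_change(distance_to_turn_m, speed_m_s, lanes_to_cross):
        """Prompt early enough to cross the required number of lanes comfortably."""
        needed = lanes_to_cross * LANE_CHANGE_ALLOWANCE_S
        return turn_countdown_seconds(distance_to_turn_m, speed_m_s) <= needed

    print(round(turn_countdown_seconds(400, 25), 1))             # 16.0 s to the turn
    print(should_prompt_lane_change(400, 25, lanes_to_cross=2))  # True: start prompting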
Human factor considerations have also led to innovative product designs and technology transfers for locating visual displays. For example, head up displays (HUD) are used to
put images of speedometer information and warning signals on the windshield, just above the hood in front, so that the driver can see them without taking their eyes off the road. This technology was originally developed for fighter pilots using a set of mirrors or a holographic system. Another example is to display a red tinge on the side mirror display when the side-looking radar detects a vehicle in the neighboring lane, in order to warn the driver intending to change lanes. From a safety standpoint, driver monitoring is also desirable. Two general approaches are taken to detect drivers who get tired or drowsy, or are under the influence of alcohol or other controlled substances. One approach is to monitor the driver directly, especially the driver’s eye movement. The other approach is to monitor the driving behavior, such as swerving or drifting of the car. For the sake of privacy, the warning is usually fed back to the driver, although suggestions have been made to provide the same information to other drivers and to regulatory bodies. Another safety-related ITS service is the driver-initiated distress signal. The combination of automatic vehicle location and mobile communications on the vehicle makes it possible for the driver to seek help from public or private agencies when needed (for example, when the driver gets stranded or when a truck is hijacked). The distress signal can also be automatically triggered by an airbag in case the driver becomes disabled or unconscious in an accident, or by a burglar alarm system when the vehicle is entered by an unauthorized party. This kind of service represents one of the earliest ITS markets involving the private sector.
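One simple version of the indirect, behavior-based monitoring approach described above is to flag unusually large variation in the vehicle's lateral position within the lane. The Python sketch below uses a standard-deviation threshold; the threshold and window length are assumed calibration values, and real systems combine several such cues.

    from statistics import pstdev

    SWERVE_THRESHOLD_M = 0.25      # std. deviation of lateral offset, metres (assumed)

    def drowsiness_warning(recent_lateral_offsets):
        """Return True if recent lane-position variation suggests impairment."""
        if len(recent_lateral_offsets) < 30:      # need a reasonable observation window
            return False
        return pstdev(recent_lateral_offsets) > SWERVE_THRESHOLD_M

    steady = [0.05 * ((-1) ** i) for i in range(60)]          # tight around centre
    weaving = [0.6 * ((-1) ** (i // 10)) for i in range(60)]  # slow drifting swerves
    print(drowsiness_warning(steady), drowsiness_warning(weaving))   # False True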
EVOLUTIONARY DEPLOYMENT The plethora of ITS technologies has offered many possible strategies for different communities and countries to get into ITS according to their local priorities and policies. Most public agencies and private companies with ITS deployment experience have chosen evolutionary strategies that would begin with information collection and dissemination on the infrastructure side for traffic management, and that would leverage on the existing and expanding communications infrastructure which serves as common carriers for many information services beyond those for ITS. Thus, initial ITS investments tend to put emphasis on traffic surveillance hardware and software and on cooperative agreements among relevant jurisdictions for information sharing and common standards for data exchange. Beginning with existing computer control of traffic signal and loop detectors at busy intersections, advanced traffic management systems (ATMS) would move to adaptive traffic control, ramp metering at the entrances to expressways, changeable message signs to inform drivers of current traffic situation and advise them to divert from incident sites, and upgrade their traffic surveillance technologies (expanding CCTV coverage or installing video image processors). Eventually they build new traffic management centers to fuse traffic data from multiple sources, coordinate traffic control across multiple jurisdictions on a regional basis, and couple traffic management infrastructure with information superhighways and with information service providers in the private sector. Building on the comprehensive database at traffic management centers, advanced traveler information systems (ATIS) usually would move from informational stage to advisory
stage, and eventually to coordinated stage. Tourist information, real-time traffic information (including multimodal transit and rail information), and static route guidance are the key services provided in the informational stage. Dynamic route guidance and multimodal guidance are the key features in the advisory stage. In the coordinated stage, the operations of ATMS and ATIS would be coordinated so that, for example, dynamic route guidance and adaptive signal controls would be coordinated. ETTM usually begins with conversion of existing manual operation at the toll plaza to automatic ETC. Then, additional readers would be added to collect travel time information for traffic management purposes and new toll roads are built with tolls to be collected only electronically. A common standard for DSRC will then be established for both CVO and ETTM services. The toll agencies using ETC will use their beacons (readers) to provide additional user services through in-vehicle traveler information systems. Eventually a DSRC standard would be established at least within each continent, if not for the entire world, so that multiple ITS services proliferate, using DSRC as a communications medium that complements wide-area mobile communications network services. Advanced vehicle control and safety systems would begin with the existing ABS and cruise controls and expand into adaptive cruise control and collision warnings in both longitudinal and lateral dimensions. Driver assistance and advisory systems (for safe speed, lane change, merging, etc.) would probably come next. Then free-agent AHS applications would allow equipped vehicles (especially special vehicles like buses, trucks, and military vehicles) to drive on any existing roadways with practically complete automation. Eventually other AHS concepts based on cooperative vehicles or supportive roadways will be deployed to maximize throughput, safety, and environmental benefits. Public policies and regulations are also needed to support implementation of ITS technologies and services. As a minimum, multi-agency coordination is required to support regional traffic information collection and utilization, and to support effective and efficient operations to rescue drivers in distress. Such coordination becomes more important, and also more difficult, as multimodal ITS user services are implemented. Enabling legislation is needed to support national program planning and implementation, especially ITS public–private partnerships. Road pricing or congestion pricing as an effective means for congestion relief has often failed due to public objection on the ground of double taxation. Implementation of demand management as an ITS user service must go hand in hand with strong policy support. The advantage of using smart cards for both transportation and nontransportation applications, entailing great convenience to the end users, cannot be realized without policy support from both financial and transportation authorities at the policy level.
SYSTEM ARCHITECTURE

The amalgamation of the many technologies requires that all the current and future ITS technologies work in concert with one another. System architecture is a framework for ensuring the interoperability and synergistic integration among all the ITS functions, regardless of the specific technologies to be deployed. Interoperability may be exemplified by the capability
of a vehicle using a single set of antenna and in-vehicle unit to receive a host of ITS user services (e.g., toll payments, in-car signing, and international border crossings) no matter where the vehicle is operating. As to synergistic integration, there are in general four sources in ITS: (1) mutual reinforcement of ITS technologies, such as those for toll collection and vehicle probes mentioned previously; (2) shared databases, such as the common use of data between traffic and transit management centers; (3) exchange and coordination between organizational units, such as between highway patrol and emergency services; and (4) synergism between transportation and communication infrastructures. Thus, the benefits of integration include cost savings, enhanced capabilities, easier user acceptance, and faster and fuller system completion (19). System architecture also helps local and regional ITS planning by providing a big future picture for all stakeholders to see (20). To be practical, the system architecture must be flexible enough to accommodate a variety of local needs, including some of their existing systems. There are two kinds of mutually consistent architectures: the logical architecture, which describes the information data flow and data processing needed to provide ITS user services; and the physical architecture, which allocates specific functions to physical subsystems, taking into account the institutional responsibilities. Figure 4 portrays the national ITS physical architecture developed for the United States through a three-year consensus-building process among a wide spectrum of stakeholders. Note that the interfaces between subsystems are clearly depicted in Fig. 4. Data flows between subsystems are through four kinds of communications media, the choice of which has taken into account the rapid changes in telecommunications, partly due to deregulation, a general trend in many countries. Although the US national architecture may need adaptation to other countries (e.g., to include ITS services for pedestrians), the architecture appears flexible enough to accommodate many national needs around the world as well as many regional needs within the United States.

Figure 4. National ITS architecture for the United States. All ITS user services (except those related to the pedestrians) are captured by the four sets of subsystems (in the shaded blocks) interconnected through four types of communications (in the sausage-shaped boxes). [The subsystems shown are the traveler subsystems (remote traveler support, personal information access); the center subsystems (traffic management, emergency management, emissions management, toll administration, transit management, commercial vehicle administration, freight and fleet management, information service provider, planning); the vehicle subsystems (vehicle, transit, commercial, emergency); and the roadside subsystems (roadway, toll collection, parking management, commercial vehicle check), interconnected by wide area wireless, wireline, short range wireless, and vehicle-to-vehicle communications.]

STANDARDS

ITS system architecture, such as the one given in Fig. 4, provides an important basis for setting standards, as each interface between subsystems implies the need for standards and protocols to allow smooth information flow among the subsystems. Data dictionaries defined in the architecture also imply standard message sets that must be defined and mutually accepted for the subsystems to exchange meaningful information. ITS standards have been a subject of international discussion and cooperation through such organizations as the International Standards Organization (ISO). For years, international standards setting within Europe has been coordinated by the European Committee for Normalization (CEN) and that for the whole world has been coordinated by ISO. For ITS, CEN has set up the technical committee CEN/TC278 on RTTT (Technical Committee on Road Traffic and Transport Telematics) corresponding to the worldwide committee ISO/TC204 on TICS (Technical Committee on Transport Information and Control Systems). The duplicate effort between CEN and ISO is considered necessary by the Europeans not only for historical reasons but also because, within Europe, compliance with CEN standards is mandatory while compliance with ISO standards is only voluntary. For the sake of inter-
national harmonization for ITS standards, CEN and ISO have agreed to cooperate through a number of working groups (WGs). Standards setting may or may not be critical at the local, national, and international levels, depending on the specific ITS user service. In general, ITS users want standards not only for the sake of interoperability but also for the advantage of being able to acquire components and systems from multiple vendors. ITS suppliers want standards for the sake of market size and economy of scale in production. However, standard setting often runs into difficulties as suppliers with proprietary standards or with established market positions do not wish to change their products. Users already invested in specific systems are reluctant to switch to new standards before they realize a reasonable return on their sunk investment. Premature standards setting may also stifle innovation. Thus, the timing of standard setting is important. Even after standards are established, practical considerations must be given to acceptable migration paths for existing systems to move toward the standards over a reasonable period of time. Considerable energy has been, and will be, focused on developing ITS standards in the US and internationally.

MARKET AND EVALUATION

The worldwide movement in ITS since the mid-1980s has created a new ITS industry. A 1997 study for the US situation has produced the following findings of quantitative benefits and costs on the basis of the national ITS goals of the United States (21):

• ITS infrastructure will generate an overall benefit–cost ratio of 5.7 to 1 for the largest 300 metropolitan areas, with even stronger returns to the top 75 most congested cities (8.8 to 1).
• Present value of ITS benefits should exceed $250 billion over the next two decades.

Comparable results have been reported from Europe, Canada, and Japan. The markets for ITS products and services have grown and matured rapidly over the last five years in the United States. This growth is expected to continue and even accelerate as end-user (consumer) technologies move from early trial applications to adoption. Other relevant conclusions include (22):

• Over the next 20 years, the market for ITS products and services is expected to grow and cumulate to approximately $420 billion for the period.
• Building on public investments in basic ITS systems, the private market is projected to represent a smaller share initially, eventually growing to represent approximately 80 percent of all sales in the market through the year 2015.
• Public infrastructure-driven markets in the US metropolitan areas are projected to exceed $80 billion cumulative over the next twenty years.
• Private markets, including those for consumer- and commercial-driven ITS products and services, are estimated to exceed $340 billion cumulated over the next two decades.
Results from comparable market studies for other parts of the world are not available. However, more focused studies, such as on ETC products and services, have suggested that the global ITS market is at least three times as large as the US market. Another study in the United States has compared the traditional solution of building roads to accommodate transportation demand growth versus the new alternative of investing in ITS infrastructure to help in building the same traffic handling capacity (21). The total cost for road construction to accommodate expected transportation demand growth in the United States for the next decade is about $86 billion for 50 US cities, based on the average cost of $3 million per lane-mile for urban freeways. With full ITS deployment for these cities, only two-thirds of the new roads are needed to provide the same traffic handling capacity. In other words, one-third fewer new roads are needed. Even accounting for the much higher operations and maintenance costs for 24-h operation of ITS, the United States can still save 35% of the total cost to provide enough capacity for the expected growth through an appropriate mix of ITS and new road construction. The same can be said on the vehicle side. That is, with ITS, it is possible to reduce the capital needs for buses and trucks because a smaller number of these vehicles would be needed to carry the same amount of cargo or the same number of passenger trips. Similar results are probably applicable to other countries, assuring the continuation and expansion of the worldwide ITS movement.
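The construction-side arithmetic quoted above can be reproduced directly from the article's figures; the Python sketch below simply restates them and does not attempt to itemize the ITS deployment and operations costs behind the reported 35% net saving.

    # Back-of-envelope version of the road-building comparison, using the article's
    # figures: $3M per lane-mile, $86B of construction for 50 cities, and one-third
    # fewer new roads needed when ITS is fully deployed.
    cost_per_lane_mile = 3e6
    road_only_cost = 86e9
    lane_miles_needed = road_only_cost / cost_per_lane_mile      # new freeway lane-miles
    construction_with_its = road_only_cost * (2 / 3)             # one-third fewer roads
    gross_construction_saving = road_only_cost - construction_with_its

    print(round(lane_miles_needed))                   # ~28667 lane-miles
    print(round(gross_construction_saving / 1e9, 1))  # ~28.7 $B before ITS capital and O&M
    # The article's net figure, after ITS deployment and 24-hour O&M costs, is a
    # 35% saving on the total cost of providing the same capacity.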
BIBLIOGRAPHY

1. K. Chen and J. E. Pedersen, ITS functions and technical concepts, Berlin, Germany: Proc. 4th World Congr. Intelligent Transportation Syst., 1997.
2. U.S. Department of Transportation, The national architecture for ITS, Washington, DC: documents on CD-ROM, 1997.
3. K. Gardels, Automatic car controls for electric highways, Warren, MI: General Motors Research Laboratories, GMR-276, June 1960.
4. B. W. Stephens et al., Third generation destination signing: An electronic route guidance system, Washington, DC: Highway Research Record No. 265, Route Guidance, 1968.
5. P. Braegas, Function, equipment and field testing of a route guidance and information system for drivers (ALI), IEEE Trans. Veh. Technol., VT-29: 216–225, 1980.
6. N. Yumoto et al., Outline of the CACS pilot test systems, Washington, DC: 58th Transportation Res. Board Annu. Meet., 1979.
7. K. Chen and J. Costantino, ITS in the US, J. Walker (ed.), Advances in Mobile Information Systems, Norwood, MA: Artech House, 1998, in press.
8. P. Glathe et al., The Prometheus Programme—objectives, concepts and technology for future road traffic, Turin, Italy: FISITA Proc., 477–484 (Paper 905180), 1990.
9. H. Kawashima, Present status of Japanese research programmes on vehicle information and intelligent vehicle systems, Brussels, Belgium: DRIVE Conf., Commission Eur. Communities DG XIII, 1991.
10. U.S. Department of Transportation, Operation Time Saver, Washington, DC: Remarks prepared for delivery by Secretary Federico Pena at the 75th Annu. Meet. Transportation Res. Board, January 10, 1996.
11. J. Shibata and R. L. French, A comparison of intelligent transportation systems: Progress around the world through 1996, Washington, DC: ITS America, 1997.
12. P. B. Hunt et al., SCOOT—A traffic responsive method of coordinating traffic signals, Crowthorne, UK: Transport and Road Research Laboratory, Laboratory Report LR 1014, 1981.
13. P. R. Lowrie, The Sydney co-ordinated adaptive traffic system: principles, methodology, algorithms, Proc. Int. Conf. Road Traffic Signalling, United Kingdom: IEE Pub. No. 207: 67–70, 1982.
14. D. J. Chadwick et al., Communications concepts to support early implementation of IVHS in North America, IVHS J., 1 (1): 45–62, 1993.
15. L. N. Spasovic et al., Primer on electronic toll collection technologies, Washington, DC: Transportation Research Record No. 1516: 1–10, 1995.
16. Parson Brinckeroff Farradyne Inc., DSRC standards development support documents, Rockville MD: Task Order (9614) Report, September 1997.
17. K. Chen, Driver information systems—a North American perspective, Transportation 17 (3): 251–262, 1990.
18. K. Chen, Survey of AHS stakeholders before and after demo 97, Troy, MI: National Automated Highway Systems Consortium, 1997.
19. K. Bhatt, Metropolitan model deployment initiative evaluation strategy and plan, 7th ITS Amer. Annu. Meet., Washington, DC: 1997.
20. R. McQueen and J. McQueen, A Practical Guide to the Development of Regional ITS Architecture, Norwood, MA: Artech House, 1998, in press.
21. M. F. McGurrin and D. E. Shank, ITS versus new roads: A study of cost-effectiveness, ITS World, 2 (4): 32–36, 1997.
22. Apogee Research, Inc., ITS national investment and market analysis—final report, Washington, DC: ITS America, 1997.
KAN CHEN
Kan Chen Incorporated
Wiley Encyclopedia of Electrical and Electronics Engineering
Interface Design, Standard Article
Kim J. Vicente, University of Toronto, Toronto, Ontario, Canada
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7107
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Designing Smart Interfaces; Advanced Applications; The Future; Acknowledgments.
INTERFACE DESIGN
It is very difficult to conceive of an engineering system that does not require interaction with people somewhere along the way. In some cases, people manually control an engineered system, as is the case of a driver of an automobile. In others, people supervise the operation of an engineered system that is usually being controlled by automation, as is the case of human operators of some nuclear power plants. In still other cases, people are required to maintain or repair an engineered system that normally runs autonomously without human intervention, as is the case of technicians troubleshooting electronic hardware. Finally, in some cases people share responsibility with an engineered system in a shared mode of control, as with pilots interacting with the autopilot of a commercial aircraft. In all of these cases, and many others as well, the interaction between people and technology is an unavoidable fact of life.

As computer technology is introduced into more and more sectors of contemporary society, this interaction between people and technology is mediated by a computer interface. For example, intelligent vehicle highway systems, such as electronic maps, are being introduced into automobiles to help drivers find their way and avoid high-traffic areas. Computer-based displays are also being introduced into advanced control rooms for nuclear power plants to help operators perform their job more effectively and efficiently, thereby replacing analog, hard-wired instrumentation (e.g., analog gauges). Also, increasingly sophisticated computer software and hardware tools have been developed to facilitate the troubleshooting performance of electronics technicians.
Figure 1. One "interface" for playing the two-person game. Because information is presented in the form of abstract symbols, people have to play the game by relying on reasoning.
Finally, increasingly sophisticated computer-based flight management systems are being introduced into the "glass cockpits" of modern commercial aircraft to help pilots during the various phases of flight. As a result, the human–computer interface is becoming a dominant mode of interaction between people and technology.

Engineers are increasingly being required to design a human–computer interface so that people can interact with the engineered system. The importance of this aspect of design cannot be overemphasized. Depending on the type of decisions that are made in creating the human–computer interface, the resulting system can either be successful or completely unusable. This is true even if the technological core of the engineered system remains unaltered. In other words, the adequacy of the human–computer interface can either make or break the system.

This crucial observation can be illustrated by a very simple example (1). Consider the following two-person game. There are nine cardboard pieces available to each player. Each piece has drawn on it one of the integers from 1 to 9. The pieces are face up so that both players can see all of the numbers. The players take turns drawing one piece from the remaining set. The first player to hold three pieces in his hand whose integers sum up to exactly 15 wins the game. If all nine pieces are drawn and neither player has a winning combination, then the game is tied. One interface for playing this game is shown in Fig. 1. This is an obvious way to represent the problem, given the rules just described. The reader is encouraged to envision playing this game in order to get an appreciation for its demands.

An alternative interface for playing the very same game is shown in Fig. 2. There is a 3 × 3 matrix of blank squares. Players alternate marking a square, one player with an X and the other with an O. The first player to get a vertical, horizontal, or diagonal sequence of three symbols (Xs or Os) wins the game.
Figure 2. A second "interface" for playing the same game. Because information is presented in the form of concrete patterns, people can play the game by relying on perceptual skills.
If all of the squares are taken and no player has such a sequence, then the game is tied. Readers will recognize this as the well-known game of tic-tac-toe.

It should be obvious to the reader that playing the game with the first interface is considerably more difficult than playing the game with the second interface. In fact, the difference is so strong that the reader may think that a different game is being played with the second interface. However, an examination of the formal properties of these two games unambiguously reveals that the logics of the two games are actually isomorphic to each other (1). At their core, the two versions of the game are exactly the same. Nevertheless, we correctly perceive that one version is much easier than the other. The reason for this is that the way in which the problem is represented (i.e., the interface) has a large impact on human performance. The interface in Fig. 1 is based on abstract symbols (the integers from 1 to 9), and so it requires people to engage in analytical problem solving (e.g., mental arithmetic) to perform the task. It is well known that problem-solving activities are comparatively slow, effortful, and error-prone (2,3). This accounts for why this version of the game seems so much more difficult to play. In contrast, the interface in Fig. 2 is based on concrete patterns (Xs and Os). Because information is presented in this more concrete form, people are able to rely on their powerful pattern-recognition capabilities that have been honed through experience with everyday tasks. It is well known that pattern-recognition capabilities are comparatively fast, efficient, and less prone to error, given the proper level of experience (3). As a result, this version of the game seems much easier to play.

The point of this simple example can be generalized to the design of human–computer interfaces. The ease and reliability with which people interact with a particular engineered system is strongly affected by the type of interface that is provided. Given the very same technical system, human performance can either be made very difficult and cumbersome if the human–computer interface is designed one way, or it can be fluid and efficient if the interface is designed in a different way. As a result, the interface designer is actually responsible for shaping human behavior. If there is a good fit between human capabilities and limitations and the demands placed by the interface, then the result will be effective and reliable performance. On the other hand, if there is a poor fit between human capabilities and limitations and the demands placed by the interface, then the result will be human error. Therefore, human errors can frequently be traced back to inadequate interface designs (2).

Now that the importance of human–computer interface design has been illustrated, the design remedy seems relatively obvious: design the interface to take advantage of people's skills and the result should be enhanced human performance. Unfortunately, most engineers are not very well prepared to deal with the design challenges imposed by human–computer interaction. Traditionally, engineering education has focused almost exclusively on the technical component of the system, to the detriment of the human, social, and environmental considerations. As a result, it is not surprising to find that there have been, and continue to be, many computer systems that are poorly designed from the perspective of the people who have to use them (4).
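The formal isomorphism between the two games can be checked directly. The short sketch below is illustrative only (the Lo Shu arrangement of the magic square and all variable names are this example's assumptions, not material from the article): it enumerates every three-number combination from 1 to 9 that sums to 15 and confirms that these combinations are exactly the rows, columns, and diagonals of a 3 × 3 magic square, that is, the winning lines of tic-tac-toe.

from itertools import combinations

# Lo Shu magic square: every row, column, and diagonal sums to 15.
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

lines = [set(row) for row in square]                         # rows
lines += [set(col) for col in zip(*square)]                  # columns
lines += [{square[i][i] for i in range(3)},                  # both diagonals
          {square[i][2 - i] for i in range(3)}]

# All three-piece draws from 1..9 that sum to exactly 15 (wins in the number game).
wins_15 = {frozenset(c) for c in combinations(range(1, 10), 3) if sum(c) == 15}

# The win conditions of the two games coincide exactly.
assert wins_15 == {frozenset(line) for line in lines}
print(len(wins_15), "winning combinations in both games")    # prints 8

The assertion passes and the count printed is 8, so the winning triples of the number game map one-to-one onto the winning lines of tic-tac-toe; in this precise sense the two interfaces present the same underlying game.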
DESIGNING SMART INTERFACES Fortunately, a fair amount is known about how to design effective human–computer interfaces that lead to efficient and reliable system performance (3,5,6). Some of this knowledge, which comes from the discipline known as cognitive engineering (7,8), will be described next. Rote and Smart Devices One useful way to classify interfaces is according to the distinction between rote and smart devices first put forward by Runeson (9) in an exceptionally insightful and entertaining paper. Traditionally, human–computer interfaces have been designed to be rote devices, but as we shall see, there are very strong reasons for moving toward a smart approach to interface design (10). Rote Devices. According to Runeson (9), a rote device performs a rather simple task, namely measuring a basic context-free (i.e., primitive) property of the environment. An example is a ruler, which measures a fundamental dimension, length. The advantage of rote devices is that they can be used to derive a variety of different properties. For example, a ruler can be used to measure various lengths, from which one can compute area and volume. The disadvantage of rote devices is this need for computation—that is, to derive more complex properties, the person must know the rules (or algorithm) for combining the elemental measurements. For instance, to derive the area of a triangle with a ruler, the person must know the appropriate formula. In general, the person must also engage in calculations to derive the higher order properties of interest. These calculations, in turn, require some time and effort to carry out. Runeson (9) originally argued that the metaphor of rote devices is not a very appropriate one for human perception. For a goal-directed organism, the most important properties of the environment are likely to be the higher-order functional properties that are relevant to its immediate goals, not the context-free primitive properties measured by rote devices. For example, within the domain of perception, it is usually more important for a person to know whether a certain object affords sitting than to know the exact dimensions of the object. The suggestion is that if human perception were to function like a rote device, it would be very inefficient, if not intractable. Returning to the sitting example, it would be very difficult for observers using primitives based on rote devices (e.g., length) to determine whether a given object was indeed sit-onable or not. In addition to an extensive knowledge of geometry, anthropometry, and biomechanics, a very large number of calculations would also be necessary, requiring substantial effort and time, for what appears to be a very basic task. Smart Devices. The alternative is what Runeson (9) refers to as smart devices. These are specialized on a particular type of task in a particular type of situation. Their disadvantage is that, unlike rote devices, they cannot be used for a large, arbitrary set of purposes. Their great advantage, however, is that they ‘‘capitalize on the peculiarities of the situation and the task’’ (9, p. 174). In other words, smart devices are special-purpose devices that are designed to exploit goal-relevant
constraints pertinent in a given setting. As a result, smart devices can directly register higher-order properties in the environment, rather than having to calculate them from sensed primitive variables using computational rules and effort. The ingenious example that Runeson gives of a smart device is a polar planimeter, a mechanical device consisting of two rigid bars connected by an articulated joint. The end of one bar is used as a fixed anchor, whereas the end of the other bar is used to trace along the perimeter of a flat surface. There is also an analog meter, which displays the value of the measurement made by the polar planimeter. Remarkably, a polar planimeter directly measures the area of any two-dimensional figure, regardless of shape. This is accomplished by anchoring one end of the planimeter on the flat surface that the figure is laying on, and then using the measuring end to trace continuously around the perimeter of the figure, until the measuring end reaches the point along the perimeter at which the measurement was initiated. At this point, the analog meter will indicate the area of the figure. A polar planimeter has several important properties. First, it does not use length to arrive at its measurement of area. In fact, it cannot be used to measure length at all, or any other property for that matter. It is a special-purpose instrument; it can only measure area. Second, there is no meaningful sense in which one can say that the polar planimeter is calculating anything. There are no primitive inputs that need to be integrated in any way, because there are no primitive inputs to begin with. There are no intermediate calculations either, because if one stops the process of measuring the area of a figure prematurely, the readout on the polar planimeter has no meaningful interpretation (e.g., stopping midway around the perimeter does not usually lead to a reading that corresponds to half of the area). Third, no rules or knowledge are represented in the device. The polar planimeter is simply a mechanical measuring instrument, and does not possess any internal representation, or model, which it uses for reasoning. It does not reason; it measures. This set of properties derives from the fact that the polar planimeter is a physical embodiment of the constraints that are relevant to the task at hand. Although it can be described in very analytical and computational terms (i.e., surface integrals), such a description has no causal, explanatory value in describing its operation. Rather, such a rationalized description can only explain the constraints to which the design of the device had to conform to be a reliable measuring instrument. But once these constraints are embedded in the physical device, the analytical account merely represents the design history, not the real-time operation, of the device. In other words, the polar planimeter does not have a symbolic model of the goal-relevant constraints in the environment, although it could be said to be a mechanical adaptation to those constraints, and thus can be described in symbolic terms. Although it may not be apparent from the description so far, the distinction between rote and smart devices has a great deal of relevance to human–computer interface design. Rote and Smart Interfaces Rote Interfaces. Traditionally, human–computer interfaces have been designed as rote devices. The philosophy has been referred to as the single-sensor single-indicator (SSSI) design approach (11). Basically, it consists of displaying all of the
elemental data that are directly available from sensors.

Figure 3. An example of a rote interface. Only raw sensor data are shown, so people have to derive higher-order information.
Anything and everything that can be directly measured has a single display element associated with it. A hypothetical example is shown in Fig. 3, which is actually very similar to some human–computer interfaces currently being used by people to interact with complex engineering systems.

There are many disadvantages to such an approach (5,11), some of which are identical to those associated with rote instruments. The most important drawback is related to controllability. To deal consistently with the entire range of domain demands (particularly fault situations), people need comprehensive information regarding the system state. But with the SSSI approach, only data that are directly obtained from sensors tend to be displayed. Thus, higher-order state information, which cannot be directly measured, but which is nevertheless needed to cope with many fault situations, may not be made available to users. In fault situations, it is generally not possible to derive the higher-order properties from the elemental data, and so people may not have all of the information that is required to consistently control the system under these circumstances.

The disadvantages of the SSSI approach are not limited to fault scenarios. Even under normal operating conditions, SSSI interfaces put an excessive burden on operators. In these situations it is, in principle at least, possible to recover the higher-order, goal-relevant domain properties from the elemental data represented in the interface. The problem, of course, is that considerable effort and knowledge are required of the operator to carry out this derivation. Not only are the higher-order properties not directly displayed, but in addition, the relationships between the various elemental display elements are not usually represented in the interface (see Fig. 3).

Finally, there is also the issue of information pickup. Just because the information is in the interface does not mean that the operator can find it easily (5). In the SSSI approach, the form in which information is usually presented (e.g., similar-looking analog meters or digital numerical displays) is not very compatible with the capabilities of the human perceptual system, thereby hindering the process of information extraction. Each instrument tends to be presented individually, and there is virtually no integration or nesting of display elements. This makes it difficult for people to perceive the state of the system, even if all of the requisite information is in the interface.
In summary, the rote interface approach makes people’s jobs more difficult than they really need to be by requiring them to engage in computations, store information in memory, and then retrieve that information at the right time. All of this takes time and effort. Clearly, an alternative approach is required. Smart Interfaces. The advantages of smart devices suggest that the approach may be a useful one for human–computer interface design. The goal of a smart approach to interface design would be to provide the information needed for controllability in a form that exploits the power of perception. The first step in achieving this goal is to identify all of the information that is relevant to the context of interest to the person who will be using the interface. The second step requires identifying the various relationships between these variables of interest. As we will see, relationships play a critical role in smart interfaces. The third and final step is to embed these variables and relationships into the interface in a form that makes it easy for people to pick up information accurately and efficiently. The advantages of adopting a smart approach to interface design are very similar to those identified earlier for smart devices. Rather than requiring people to remember and retrieve the relationships between variables of interest, these are directly shown in the interface, thereby reducing the demand on memory. Similarly, rather than requiring people to derive higher-order information by engaging in computations, this information is also directly shown in the interface, thereby reducing the demand for analytical problem solving. As a result, human performance should be more accurate and more efficient as well. However, just as with the polar planimeter example, these advantages can only be obtained if the designers perform the required systems analysis up front and build the results of this analysis into the interface. That is, the power of smart interfaces is that they offload much of the analysis to the designer (who has more time and better tools to deal with these issues off-line and a priori), and thus relieve the burden on the person using the interface (who is busy performing many other tasks in real time). The demands imposed by the task at hand cannot be displaced if effective performance is to be achieved. The only choice is who is better capable to deal with those demands, system designers or people operating the interface. These abstract points can be illustrated by the following concrete example of a smart interface. Application Example One activity that people are usually responsible for when interacting with an engineered system is assessing the overall status of the system. Is the system in a normal or abnormal state? For a complex system with many variables, this can be a very challenging task, involving a number of different steps (5). First, the person has to know and remember which variables are the most important ones in determining overall system status. Typically, only a small subset of the hundreds or thousands of available variables will be needed. Second, the person must collect together the status of these relevant variables. This activity requires more knowledge, because the person must know where to look to find the variables of interest. This activity also requires time and effort, because the
variables generally will not be found all in one place. For instance, in a human–computer interface, the person may have to search through a hierarchical menu of windows to find the window that contains one of the relevant variables. This procedure would then have to be repeated for each of the relevant variables. Third, the person may also have to integrate together these variables to obtain the higher order information of interest. This activity may be required because sometimes the variable of interest is not something that can be measured directly by a sensor, but rather something that can only be derived from several of the variables that can be directly sensed. This derivation process requires knowledge of the correct integration formula, and mental effort to compute the derived variable from the lower-level sensed data. Fourth, the person will also have to know and remember the normal range for each of the variables that are relevant to determining overall system status. After all, the value of a particular variable only takes on some meaning when it is compared to its nominal or limit values. All of the activities just listed must be performed if the task of overall system assessment is to be done accurately and reliably. However, as the discussion of rote and smart devices indicates, there are different ways in which these demands can be allocated between the designer and person operating the system. With a rote interface, only raw sensor data are presented. As a result, a great burden is put on the person to perform the variable identification, collection, integration, and normalization activities listed above. Rather than getting some help from the interface, the person must perform these activities unaided, which as already mentioned, requires a fair amount of knowledge, puts a substantial load on memory, and demands a fair amount of time and effort. Given the properties of smart interfaces, it should be possible to do much better by off-loading at least some of this burden to the designer. Rather than forcing the user to deal with all of these demands, it should be possible for the designer to build these constraints into a smart interface. Figure 4 shows an example of a smart interface developed by Coekin (12) that can help people perform the task of assessing the overall status of a complex system. This display consists of eight spokes arranged in a symmetrical fashion. Each spoke displays the status of one of the variables that are relevant to determining the overall status of the system. Note that these individual variables can be either raw sensor values or higher order information that must be computed by integrating together a number of variables that are sensed individually. The current states of individual variables are connected together by a line joining adjacent spokes. Another important feature of this display is that each of the variables displayed has been normalized according to its nominal value. If each variable is at its nominal value, then it will be displayed at a fixed distance from the center of the polygon. If all eight variables are in the normal range, then a symmetrical figure will be obtained. As a result, the task of determining whether the system is in a normal state or not is dead-simple. If the figure is symmetrical and in its normal diameter, as it is in Fig. 4(a), then the system is in a normal state. On the other hand, if this symmetry is broken, as it is in Fig. 4(b), then the system is in an abnormal state. 
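A minimal sketch of the normalization idea behind such a display follows. The eight variable names, their nominal values, and the 10% tolerance test are assumptions made for this example rather than details of Coekin's design; the point is only that each reading is scaled by its own nominal value, so a normal system maps to a regular polygon and any departure from normal shows up as a visible distortion.

# Illustrative only: eight hypothetical variables with assumed nominal values.
nominals = {"flow": 120.0, "pressure": 5.0, "temp": 300.0, "level": 2.5,
            "speed": 1800.0, "voltage": 24.0, "current": 8.0, "vibration": 0.2}

def spoke_lengths(readings, nominals):
    """Scale each reading by its nominal value; a value of 1.0 puts the point
    on the reference octagon, while other values deform the polygon."""
    return {name: readings[name] / nominals[name] for name in nominals}

def is_normal(lengths, tolerance=0.1):
    """The polygon is 'symmetrical' if every spoke is within tolerance of 1.0."""
    return all(abs(length - 1.0) <= tolerance for length in lengths.values())

readings = dict(nominals)          # start at nominal: regular octagon
print(is_normal(spoke_lengths(readings, nominals)))   # True
readings["pressure"] = 6.3         # one variable drifts high
print(is_normal(spoke_lengths(readings, nominals)))   # False: polygon deforms

In an actual display the scaled values would set the spoke lengths directly; the tolerance test here merely stands in for the perceptual judgment of symmetry that the display delegates to the viewer.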
Moreover, the way in which the octagon deforms may give some information about the nature of the abnormality. For example, if the left side of the polygon caves in toward the center, this may signify one type of failure,
Figure 4. An example of a smart interface. Adapted from Coekin (12). (a) Normal state specified by a symmetrical octagon. (b) Abnormal state specified by a deformed, asymmetrical polygon. Concrete patterns are shown, so people can directly perceive higher-order information.
This can be accomplished by grouping variables that are functionally or physically related in proximate spokes.

The demands that this smart interface places on people are trivial compared to the demands imposed by a rote interface. As mentioned earlier, this is because the interface designer has taken on much of the responsibility of dealing with the relevant constraints. The reason why this smart interface requires much less knowledge, memory, effort, and time to use is that the designer has identified the relevant variables, brought them together into one place, performed any necessary integration, and normalized the variables with respect to their nominal values. Because the designer has done this work, much less work is left for the person who is controlling the system in real time.

The smart interface concept illustrated in Fig. 4 has been adopted to design system status displays for a number of different application domains, including aviation (to monitor the status of engineering systems), medicine (to monitor the life signs of a patient), and nuclear power plants (to monitor the state of the plant). It can surely find use in many other situations as well. Nevertheless, it is important to point out that this octagon interface is merely one example of a smart interface. The important point to take away is not so much the details of this particular exemplar, but rather the process by
which it was designed. If designers can identify the goal-relevant constraints and build them into the interface in a form that makes it easy for people to pick up information, then very different interfaces can be developed for other applications but with the same advantages as this smart interface. ADVANCED APPLICATIONS There are a number of promising directions for the advanced application of smart interfaces for complex engineered systems. Two of these are creating smart interfaces from visualizations of engineering models described in textbooks, and the application of smart interface design principles to make automation more visible. Visualization of Engineering Models One powerful and virtually untapped source of ideas for smart interfaces is the set of visualizations developed by engineers over the years to teach basic principles and models in textbooks. A prototypical example is the temperature– entropy (T–s) diagram that has been used in thermodynamics textbooks for years to represent the saturation properties of water. More specifically, the T–s diagram has been used as a frame of reference for representing the various phases of different thermodynamic heat engine cycles (e.g., the Rankine cycle). This is a fortunate choice from the viewpoint of cognitive engineering because different areas of the T–s diagram represent important thermodynamic distinctions in a form that is very easy for people to discriminate (e.g., the area under the saturation curve for water indicates a two-phase state). Similarly, vertical and horizontal lines in this diagram represent meaningful thermodynamic constraints on the various phases of the heat engine cycle in a form that is also easy to perceive (e.g., isentropic and isothermal transformations, respectively). Beltracchi (13) has taken advantage of these properties by creating a smart interface that represents the state of a water-based power plant in the form of a Rankine cycle overlaid on the T–s diagram. This interface brings to life the Rankine cycle representation in the T–s diagram found in textbooks by animating it with live sensor data and by coding information with perceptually salient properties such as color. An initial evaluation of this smart interface suggests that it has important advantages over more traditional, rote interfaces that have been (and continue to be) used in nuclear power plant control rooms (14). This suggests that it may be possible to develop other smart interfaces from the myriad of visualizations that can be found in engineering textbooks. Some obvious examples include: pressure–volume diagrams, phase diagrams, and nomograms. Making Automation More Visible So far, this article has concentrated on techniques for building constraints that govern the behavior of engineered systems into easily perceivable forms in a human–computer interface. However, the very same logic could be applied, not just to make process constraints visible, but also to make automation constraints visible as well. This seems to be a very fertile area of application, because a number of studies have indicated that one of the problems with contemporary automation (e.g., on the flight decks of ‘‘glass cockpits,’’ or in the
control rooms of petrochemical refineries) is that the behavior of the automated systems is not very clearly displayed (15). This creates a number of difficulties for the people who are responsible for monitoring these systems. First, it is difficult for people to monitor the actions of the automation to track how those systems are reconfiguring the process in response to disturbances or changes in demands. If people cannot keep track of the automated systems’ actions, then people’s understanding of the current configuration of the process may not correspond to the actual configuration. Thus, their subsequent actions may not be appropriate, leading to unintended consequences. Second, it is also difficult for people to monitor the state of the process to anticipate problems before they jeopardize system goals (e.g., by propagating to other parts of the process). Research has repeatedly shown that it is essential for people to be able to effectively anticipate the future state of the process if they are to function as effective controllers. If people are operating in a reactive mode, then they will always be one step behind the course of events, and given the lags in complex engineered systems, they will not be able to respond to problems until after they occur. Third, it is also difficult for people to monitor the state of the automation to quickly detect and diagnose any faults in the automation. In highly automated systems, the primary reason why there are people in the system is to supervise the automation and to detect fault situations in which the automation is not working properly so that they can improvise a solution to the problem. If people do not have rich feedback from the human– computer interface that makes it clear whether the automation is working properly or not, then they will not be able to perform this role effectively and reliably. If, on the other hand, designers think of human–computer interfaces for automation from the perspective of smart devices, then many of these problems can potentially be overcome. Just as it is possible to build process constraints into an interface in a form that is easy to perceive, it should also be possible to do the same for automation constraints. In fact, one can envision human–computer interfaces that provide integrated representations of process and automation status. These smart interfaces should allow people to independently track the status and configuration of both the process and the automation, thereby providing them with the feedback that they need to effectively understand the interaction between the controller and the process being controlled.
THE FUTURE Perhaps ironically, as technology evolves in sophistication and availability, there will be an increasing need for effective human–computer interface design. The reason for this is that there will be more and more engineered systems that require an interface with the people who will interact with those systems. The perspective of smart devices described in this article should enable designers to develop human–computer interfaces that provide a good fit between the characteristics of people and the demands imposed by the systems with which they interact. The result should be safer, more productive, and more reliable system performance. Only by designing for people will these goals be achieved. Or in other words, if technology does not work for people, then it does not work (16).
ACKNOWLEDGMENTS
The writing of this article was made possible by research and equipment grants from the Natural Sciences and Engineering Research Council of Canada.
BIBLIOGRAPHY

1. A. Newell and H. A. Simon, Human Problem Solving, Englewood Cliffs, NJ: Prentice-Hall, 1972.
2. D. A. Norman, The Psychology of Everyday Things, New York: Basic Books, 1988.
3. K. J. Vicente and J. Rasmussen, Ecological interface design: theoretical foundations, IEEE Trans. Syst. Man Cybern., SMC-22: 589–606, 1992.
4. N. G. Leveson, Safeware: System Safety and Computers, Reading, MA: Addison-Wesley, 1995.
5. D. D. Woods, The cognitive engineering of problem representations, in J. Alty and G. Weir (eds.), Human-Computer Interaction in Complex Systems, London: Academic Press, 1991, pp. 169–188.
6. K. B. Bennett and J. M. Flach, Graphical displays: implications for divided attention, focused attention, and problem solving, Human Factors, 34: 513–533, 1992.
7. D. D. Woods and E. M. Roth, Cognitive engineering: Human problem solving with tools, Human Factors, 30: 415–430, 1988.
8. J. Rasmussen, A. M. Pejtersen, and L. P. Goodstein, Cognitive Systems Engineering, New York: Wiley, 1994.
9. S. Runeson, On the possibility of "smart" perceptual mechanisms, Scand. J. Psychol., 18: 172–179, 1977.
10. K. J. Vicente and J. Rasmussen, The ecology of human-machine systems II: mediating "direct perception" in complex work domains, Ecol. Psychol., 2: 207–250, 1990.
11. L. P. Goodstein, Discriminative display support for process operators, in J. Rasmussen and W. B. Rouse (eds.), Human Detection and Diagnosis of System Failures, New York: Plenum, 1981, pp. 433–449.
12. J. A. Coekin, A versatile presentation of parameters for rapid recognition of total state, in Proceedings of the IEE International Symposium on Man-Machine Systems, Cambridge: IEE, 1969.
13. L. Beltracchi, A direct manipulation interface for water-based Rankine cycle heat engines, IEEE Trans. Syst. Man Cybern., SMC-17: 478–487, 1987.
14. K. J. Vicente, N. Moray, J. D. Lee, J. Rasmussen, B. G. Jones, R. Brock, and T. Djemil, Evaluation of a Rankine cycle display for nuclear power plant monitoring and diagnosis, Human Factors, 38: 506–521, 1996.
15. D. A. Norman, The "problem" of automation: inappropriate feedback and interaction, not "over-automation," Phil. Trans. Roy. Soc. Lond., B 327: 585–593, 1990.
16. A. M. Lund, Advertising human factors, Ergonomics Design, 4 (4): 5–11, 1996.
KIM J. VICENTE
University of Toronto
Wiley Encyclopedia of Electrical and Electronics Engineering
Modeling and Simulation, Standard Article
K. Preston White, Jr., Professor of Systems and Information Engineering, University of Virginia, Charlottesville, VA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7108.pub2
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Models and Model Theory; Mathematical Systems Models; Classification of System Models; Some Common Model Forms; Simulation; Continuous System Simulation; Discrete-Event Simulation; Modeling and Simulation Issues; Model Validation.
MODELING AND SIMULATION
MODELS AND MODEL THEORY

A model is an entity that is used to represent some other entity for some well-defined purpose. In this most general sense, examples of models include:
• Mental models, such as the internalized conception of a person's relationships with the environment, used to guide everyday behavior
• Iconic models, such as (a) a circuit diagram used to represent the functional interconnection of electronic components or (b) a map used to record geographical, geological, or meteorological data
• Linguistic models, such as (a) a verbal protocol for a biological experiment, or (b) a written specification defining the purpose, requirements, and operation of a software program
• Physical models, such as (a) a scale mock-up of an airfoil used in wind-tunnel testing for a new aircraft design or (b) an analog circuit developed to replicate the neural activity of the brain
• Mathematical and computational models, such as (a) the set of mass- and energy-balance equations that predict the end products of a chemical reaction or (b) the mathematical and logical relations embodied in a computer program that simulates the behavior of an electromechanical device
Models are a mainstay of every scientific and engineering discipline. Social and management scientists also make extensive use of models. The specific models adopted in different disciplines differ in subject, form, and intended use, and every discipline tends to develop its own approach and techniques for studying models. However, basic concepts such as model description, simplification, solution, and validation are universal across disciplines. Model theory (1) seeks a logical and axiomatic understanding of these common underlying concepts, independently of their particular expression and any modeling endeavor. At its core, the theory of models and modeling cannot be divorced from broader philosophical issues that concern the origins, nature, methods, and limits of human knowledge (epistemology) and the means of rational inquiry (logic and the scientific method). Philosophical notions of correlation and causality are central to model theory. For example, a department store may have data that show that more umbrellas are sold on days when it rains and fewer umbrellas are sold on days when the sun shines. A positive correlation between the average number of umbrellas sold each month and the average amount of rain that falls in that month seems entirely plausible. A model which captures this relationship might be used to predict umbrella sales from a record of the amount of precipitation received. Moreover, since it is easy to imagine that increased precipitation causes customers to buy umbrellas,
this model also might be used to show how many more umbrellas the store might sell if somehow customers could be induced to believe that it would rain. The correlation also holds in the opposite direction—we might well use the inverse model to predict the amount of precipitation received based on a record of umbrella sales. However, it would be difficult to support the idea that umbrella sales cause it to rain. A model that shows how much more it would rain if we could somehow increase umbrella sales clearly would not be credible. Models are pervasive in fields of inquiry simply because a good model is more accessible to study than the actual entity the model represents. Models typically are less costly and less time-consuming to build and analyze. Variants in the parameters and structure of a model are easier to implement, and the resulting changes in model behavior are easier to isolate, understand, and communicate to others. A model can be used to achieve insight when direct experimentation with the actual system is too demanding, disruptive, or dangerous to undertake. Indeed, a model can be developed to answer questions about a system that has not yet been observed or constructed, or even one that cannot be measured with current instrumentation or realized with current technologies.
MATHEMATICAL SYSTEMS MODELS Mathematical and computational models are particularly useful because of the rich body of theory and wide range of quantitative and computational techniques that exist for the study of logical expressions and the solution of equations. The availability and power of digital computers have increased the use and importance of mathematical models and computer simulation in all modern disciplines. A great variety of programming languages and applications software is now available for modeling, data reduction and model calibration, computational analysis, system simulation, and model visualization. Icon-based, drag-and-drop programming environments have virtually eliminated the need for writing code and have automated many of the other routine tasks formerly associated with developing and analyzing models and simulations. Given these powerful tools, many applications that formerly relied on other types of models now also use mathematical models and computer implementations extensively. Much of the current thinking about models and modeling is intimately tied to mathematical system theory. Broadly, a system is a collection of elements that interact to achieve some purpose. The boundary of a system separates the elements internal to the system from the environment external to the system. The key attributes of a system are represented by a set of parameters, with essentially fixed values, and three sets of system variables, which may assume different values at different times and/or at different locations in space. The action of the environment on the system is defined by a set of input or predictor variables. Inputs typically are represented by the input vector u ∈ U, where the set U of all values that can be assumed by the input is called the input space. The internal condition of the system is defined
by a set of state variables represented by the state vector x ∈ X, where X is the state space. The action of the system on the environment is defined by a set of output or predicted variables represented by the output vector y ∈ Y, where Y is the output space. Variations in the system variables over time and/or space are ordered by a set of index variables represented by the vector k ∈ K, defined on the index set K.

The relationship between system inputs and outputs is determined by the parameters and structure of the system, as mediated by the independent variables and system state. In other words, the input alters the internal state of the system and the altered state is observed in turn through changes in the output for various values of the independent variables. These relationships are shown in Fig. 1. The state equation f : (x, u, k) → x describes how the new state is determined from the inputs and current state. The output equation g : (x, u, k) → y describes how the current output is determined from the current input and state.

Figure 1. Mathematical systems model showing state and output equations and the relationships among input, state, and output variables. Arrows indicate direction of causal relationships.

The pattern of changes exhibited by the system output in response to any particular series of inputs is called an output trajectory, (ui, yi). The collection of all possible trajectories {(ui, yi), ∀i} represents the behavior of the system. Clearly, this behavior is generated by the functions f and g and is the fundamental interest in model-based studies. In this formalism, a system is completely determined by the algebraic structure {U, Y, X, f, g, K}. If a system So has the structure {Uo, Yo, Xo, fo, go, Ko}, then a model of So is just some other system Sm with structure {Um, Ym, Xm, fm, gm, Km}. The model system Sm is used as a proxy for object system So for some well-defined purpose. Clearly, there can be many alternative models Sm(i) for So, i = 1, 2, . . . Model theory deals in a fundamental way with the nature and adequacy of the correspondence of these alternative systems, depending on the intended purpose of the modeling exercise.

CLASSIFICATION OF SYSTEM MODELS

Mathematical models are distinguished by criteria that describe the fundamental properties of model variables and equations. These criteria in turn prescribe the appropriate theory and mathematical techniques that can be used to study and/or solve alternative models. These criteria and the classification of models based on these criteria are given in the following subsections.

Number of System Variables
Models that include a single input, state, and output are called scalar models, because the system-dependent variables assume scalar values. Models that include multiple system variables are called multivariate models or MIMO (multiple-input, multiple-output) models. Continuity of the System Variables Continuous-state models are those in which the system variables are continuously variable over a (finite or infinite) range of permissible values; that is, U, Y, and X are continuous vector spaces for real-valued vectors. Continuous-state models are typical of physical systems at macroscopic resolution, where key dependent variables might include flows (currents, forces, fluid flows, and heat fluxes) and potentials (voltages, velocities, pressures, and temperatures). Discrete-state models are those in which the system variables assume only a countable number of different values; that is, U, Y, and X are countably finite or countably infinite vector spaces. Discrete-state models are typical of sample-data systems, including computer control and communication systems, as well as systems in which variables are measured naturally in terms of item counts (e.g., jobs, packets, parts, or people). Number of Index Variables A fundamental distinction can be made between mathematical models based on the number of index variables incorporated within the model. Static models relate the values of the input to the corresponding values of the output, without explicit reference to the index set K. In many cases, static models are used to organize and summarize experimental data, derived from empirical tests or generated by computer simulations. In contrast to static models, dynamic models seek to explain the behavior of a system in response to changes over time and/or space. Lumped-parameter dynamic models involve a single index variable, most often time. Lumped-parameter models are common in circuit design and control engineering. Distributed-parameter dynamic models involve multiple index variables, most often time and one or more spatial coordinates. Distributed-parameter models are common in the study of structures and of mass and heat transport. Continuity of the Index Variable In continuous-time dynamic models, the system variables are defined over a continuous range of the index variables, even though the dependence is not necessarily described by a mathematically continuous function. Continuous-state, continuous-time models (sometimes called analog models) are the staple of classical engineering science. In discretetime dynamic models, the system variables are defined only at distinct instants of time. Discrete-time models are typical of digital and computer-based systems. In discrete-event
dynamic models, the index variable is a discrete index which references a set of events. The events are instantaneous (they take no time to occur), strictly sequential (no two events occur at exactly the same instant of time), and asynchronous (the time between events can vary). The occurrence of events typically is determined by the internal logic of the system and the system state changes only in response to these occurrences. Discrete-event models are typical of a wide range of man-made systems, where queuing for shared resources is a key determinant in the behavior and performance of the system. These include models typically developed for the operation of manufacturing, distribution, transportation, computer and communications networks, and service systems.
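The event-driven logic described above can be illustrated with a very small next-event sketch for a single shared resource. The arrival times, service times, and first-come-first-served discipline below are assumptions invented for this example; real discrete-event studies would also collect statistics such as waiting times and queue lengths.

import heapq

# Assumed arrival and service times for one shared resource (illustrative only).
arrivals = [0.0, 1.2, 1.5, 4.2]
service = [1.0, 0.8, 2.0, 0.5]

events = [(t, "arrive", i) for i, t in enumerate(arrivals)]   # future-event list
heapq.heapify(events)
server_free_at = 0.0

while events:
    time, kind, i = heapq.heappop(events)        # advance the clock to the next event
    if kind == "arrive":
        start = max(time, server_free_at)        # queue if the resource is busy
        server_free_at = start + service[i]
        heapq.heappush(events, (server_free_at, "depart", i))
    else:
        print(f"job {i} departs at t = {time:.1f}")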
Linearity and Superposition

Consider a system with arbitrary trajectories (ui, yi) and (uj, yj). If the output to a linear combination of the inputs is simply a linear combination of the independent outputs [i.e., if for some constant vectors ai, aj, bi, and bj the pair (ai ui + aj uj, bi yi + bj yj) also is a valid trajectory], then the principle of superposition applies for these inputs. Systems for which superposition applies over the range of inputs of interest are represented by linear models over this range. Nonlinear models, for which superposition does not apply, typically are significantly more difficult or even impossible to solve analytically. For this reason, nonlinear models often are approximated by linear models for analytical ease.
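The superposition test itself is easy to carry out numerically. The two static systems below, one linear and one not, are assumptions chosen purely for illustration; the check compares the response to a combined input with the same combination of the individual responses.

import numpy as np

# Two assumed static systems, one linear and one not (illustrative only).
linear = lambda u: 3.0 * u
nonlinear = lambda u: u ** 2

u_i, u_j = np.array([1.0, 2.0]), np.array([0.5, -1.0])
a_i, a_j = 2.0, -3.0

for name, g in [("linear", linear), ("nonlinear", nonlinear)]:
    combined_response = g(a_i * u_i + a_j * u_j)         # response to combined input
    response_combination = a_i * g(u_i) + a_j * g(u_j)   # combination of responses
    print(name, "superposition holds:", bool(np.allclose(combined_response, response_combination)))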
Treatment of Uncertainty

Models in which the parameters and variables can be known with a high degree of certainty are called deterministic models. While the values of system attributes are never known with infinite precision, deterministic models are a common and useful approximation when actual uncertainties are small over the range of values of interest for the purpose of the modeling exercise. Probabilistic models are used when significant uncertainty exists in the values of some parameters and/or variables. Model parameters and variables are expressed as random numbers or processes and are characterized by the parameters of probability distributions. Stochastic models are those which are both probabilistic and dynamic.
Mixed Classifications

Models that include combinations of continuous-time, discrete-time, and/or discrete-event subsystems are called hybrid models. Hybrid models are most common in computer control and communication systems, where physical devices are represented by continuous-time, continuous-state models and computer control devices are represented by discrete-time models. Continuous variables are sampled and quantized using A/D (analog-to-digital) conversion, control action is determined digitally, and control signals are reconstructed using D/A (digital-to-analog) conversion. Hybrid models sometimes are approximated as entirely continuous or entirely discrete.
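A minimal sketch of the A/D and D/A steps described above follows. The sine-wave signal, the 0.1 s sampling period, the eight quantization levels, and the zero-order-hold reconstruction are illustrative assumptions, not values or choices taken from the article.

import numpy as np

T = 0.1                                      # assumed sampling period (s)
t_k = np.arange(0.0, 1.0, T)                 # sampling instants
samples = np.sin(2 * np.pi * t_k)            # A/D: sample the continuous signal

# Quantize each sample to the nearest of eight levels on [-1, 1].
levels = np.linspace(-1.0, 1.0, 8)
quantized = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

# D/A: a zero-order hold reconstructs a piecewise-constant continuous-time signal.
t = np.arange(0.0, 1.0, 0.001)
reconstructed = quantized[np.minimum((t / T).astype(int), len(quantized) - 1)]
print(quantized)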
SOME COMMON MODEL FORMS

Response Surfaces

Static models do not make explicit reference to index variables. If explicit reference to the state variables also is suppressed, then the resulting static model reduces to a set of coupled input–output equations of the form

y = f(u)

Experimenters frequently employ models of this form to organize field or simulation data, to explore the corresponding correlation between system inputs and outputs, to hypothesize causal relationships for the underlying system, or to summarize data for efficient storage, search, or optimization. When used for this purpose, models of this form are called response surfaces (2). A response surface can be determined by regression over a sample of n observations of input–output pairs (uj, yj), j = 1, . . . , n. The input and observed output values can be scaled using alternative transformations α(uj) and β(yj), and alternative functional forms f for the surface β(yj) = f(α(uj)) can be tested. The objective is to find a surface that provides a good fit between the data and the input–output values predicted by the model. Response surface methodology (RSM) has been developed in recent years to build empirical models and to apply these models in model-based optimization studies.

For example, Fig. 2 shows the discrete-time trajectories of umbrella sales and precipitation each month for one calendar year. The scatter plot in Fig. 3 illustrates the correlation between sales and precipitation. The straight line in Fig. 3 is the response surface y = 70.1u, where monthly precipitation (in inches) is the input u and umbrella sales (in units) is the output y.

Analog State-Variable Models

Continuous-state, continuous-time, lumped-parameter models are sometimes called analog models. Analog models are natural representations for a great many physical systems, including systems with electrical, mechanical, fluid, and/or thermal subsystems. As such, analog models are a staple of classical engineering science. The solution of an analog model relates the current value of the system output (at some arbitrary time t) to the current value of the system state through the output equation

y(t) = g(x(t))

The current value of the state is determined from the known initial value of the state (at some earlier time t0) based on the history of inputs and states over the interval from t0 to t:
x(t) = x(t_0) + \int_{t_0}^{t} f(x(\tau), u(\tau), \tau) \, d\tau
Figure 2. Discrete-time trajectories of umbrella sales and precipitation each month for one calendar year.
Figure 3. Scatter plot illustrating the correlation between monthly sales and precipitation. The straight line is the response surface y = 70.1u, where monthly precipitation (in inches) is the input u and umbrella sales (in units) is the output y.
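A response surface of the kind shown in Fig. 3 can be fitted with a short least-squares computation. In the sketch below, the twelve monthly (precipitation, sales) pairs are invented placeholders rather than the data behind Figs. 2 and 3; only the form of the fit, a straight line through the origin, matches the example in the text.

import numpy as np

# Hypothetical monthly data (precipitation in inches, umbrellas sold); the
# twelve values are invented placeholders, not the data behind Fig. 3.
u = np.array([1.1, 0.8, 2.3, 3.0, 2.7, 1.5, 0.4, 0.6, 1.9, 2.8, 3.4, 2.0])
y = np.array([80.0, 60.0, 160.0, 215.0, 190.0, 100.0, 30.0, 45.0, 130.0, 200.0, 235.0, 140.0])

# Least-squares slope for a response surface through the origin, y = b * u.
b = float(u @ y) / float(u @ u)
print(f"fitted response surface: y = {b:.1f} u")

# Use the fitted surface to predict sales for a given month's precipitation.
print("predicted sales for 2.5 in of rain:", round(b * 2.5))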
This integral equation is the solution to the state-variable model, represented by the vector differential equation

dx/dt = f(x(t), u(t), t)     (1)

For linear, time-invariant systems, the state-variable model has the form

dx/dt = Ax(t) + Bu(t)

which is readily solved by standard techniques. State-variable methods of this form are the basis for modern control theory.
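A forward-Euler loop is the simplest way to see a linear state-variable model in motion on a computer. The matrices A and B, the unit-step input, the step size, and the time horizon below are assumptions chosen for illustration, and forward Euler is used only for brevity; practical continuous-system simulators rely on more refined integration schemes.

import numpy as np

# Assumed system matrices and input (illustrative): a damped second-order system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
u = lambda t: np.array([[1.0]])     # unit-step input

dt, T = 0.01, 5.0                   # forward-Euler step size and horizon
x = np.array([[0.0], [0.0]])        # initial state x(t0)

t = 0.0
while t < T:
    x = x + dt * (A @ x + B @ u(t))   # x(t + dt) is approximately x(t) + dt * dx/dt
    t += dt

print("state near t = 5 s:", x.ravel())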
Finite-Element Models

Continuous-time, continuous-state, distributed-parameter models commonly arise in the study of electrical transmission, mass and heat transport, and the mechanics of complex structures and structural components. These models are described by partial differential equations, containing partial derivatives with respect to each of the index variables. These equations typically are so complex that direct solutions are difficult or impossible to obtain. To circumvent this difficulty, distributed-parameter models can be approximated by a finite number of spatial "lumps," each characterized by some average value of the state within the lump. By eliminating the independent spatial coordinates,
the result is an analog model of the form previously described. If a sufficiently fine-grained representation of the lumped microstructure can be achieved, an analog model can be derived that will approximate the distributed model to any desired degree of accuracy. Increasing the granularity of the approximation requires increasing the dimension of the analog model, however, with a resulting compromise between model accuracy and precision and the difficulty of solving higher-order differential equations. Figure 4 illustrates the approximation of a distributed-parameter model for temperature in a one-dimensional wall, where θ(x, t) is the temperature at depth x and time t, with a lumped-parameter model for three state variables, where θi(t) is the temperature at depth xi, i = 1, 2, 3, also at time t.

Figure 4. Heat transfer through a wall: (a) distributed-parameter model and (b) approximating lumped-parameter model for three state variables.

Discrete-Time State-Variable Models

Discrete-time models are natural representations for computer-based systems, in which operations are synchronized by an internal digital clock, as well as for many social, economic, and management systems, in which data are sampled and recorded at intervals according to a survey schedule. The solution of a discrete-time model relates the current value of the system output (at some arbitrary instant tk) to the current value of the system state

y(k) = g(x(k))

The current value of the state is determined from the known initial value of the state (at some earlier instant t0) based on the history of inputs and states over the k steps from t0 to tk

x(k) = f^(k)(x(0), u(0), t0)

The k-step transition function f^(k) is the solution to the state-variable model represented by the first-order vector difference equation

x(k + 1) = f(x(k), u(k), k)

For linear time-invariant systems, the discrete-time state-variable model has the form

x(k + 1) = Ax(k) + Bu(k)

which is readily solved by standard techniques, very similar to those for differential equations. Perhaps more importantly, nonlinear discrete-time state-variable models can be solved iteratively. Beginning with the known initial state, successive values of the state can be computed from the preceding (computed) value of the state and the corresponding, known value of the input as

x(1) = f(x(0), u(0), 0)
x(2) = f(x(1), u(1), 1)
. . .
x(k) = f(x(k − 1), u(k − 1), k − 1)
x(k + 1) = f(x(k), u(k), k)
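As a concrete illustration, the short sketch below carries out this iteration for a hypothetical scalar difference equation; any state-transition function f could be substituted.

def f(x, u, k):
    # Hypothetical state-transition function: a first-order
    # discrete-time system x(k+1) = 0.5*x(k) + u(k).
    return 0.5 * x + u

def u(k):
    # Hypothetical input sequence: a unit step.
    return 1.0

x = 2.0               # known initial state x(0)
for k in range(5):    # successive values x(1), x(2), ...
    x = f(x, u(k), k)
    print(f"x({k + 1}) = {x:.4f}")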
Implementation of this iterative or numerical solution strategy on a digital computer is straightforward, allowing extraordinarily complicated and otherwise difficult difference equations to be solved quickly and easily for specific inputs, initial conditions, and parameter values. Moreover, differential equations that are difficult or impossible to solve analytically can be approximated by difference equations that, in turn, can be solved numerically. This computational approach to solving state-variable models is the basis for continuous system simulation, described in the following.

SIMULATION

Simulation is a model-based approach to the design, analysis, and control of systems which is fundamentally experimental. In principle, computer simulation is much like running laboratory or field tests, except that the physical system is replaced by a computational model. Broadly speaking, simulation involves creating a model which imitates the behavior of interest; running the model to generate observations of this behavior; analyzing the observations to understand and summarize this behavior; testing and comparing alternative designs and controls to improve system performance; and validating, explaining, and supporting the simulation outcomes and recommendations. A simulation run or replication is a controlled experiment in which a specific realization of the model is
manipulated in order to determine the response associated with that realization. A simulation study always comprises multiple runs. For deterministic models, one run must be completed for each different combination of model parameters and/or initial conditions under consideration. The generalized solution of the model then must be inferred from a finite number of such runs. For stochastic models, in which inputs and outputs are realizations of random variables, the situation is even more complicated. Multiple runs, each using different input random number streams, must be completed for each combination of model parameters and/or initial conditions. The response for this combination must be inferred statistically from the set of runs or sample paths. The generalized solution in turn must be inferred, again statistically, from multiple sets of multiple runs. Simulation stands in contrast to analytical approaches to the solution of models. In an analytical approach, the model is expressed as a set of equations that describe how the state changes over time. We solve these equations using standard mathematical methods—algebra, calculus, or numerical analysis—to determine the distribution of the state at any particular time. The result is a general, closed-form solution, which gives the state at any time as a function of the initial state, the input, and the model parameters. Because of the generality of closed-form solutions, when models readily can be solved analytically, this is always the preferred approach. Simulation is used widely instead of analytical approaches because closed-form solutions for nonlinear, time-varying, and discrete-event systems are rarely available. In addition, while explicit closed-form solutions for time-invariant linear systems can always be found, this is sometimes impractical if the systems are very large. Moreover, simulation models can incorporate necessary procedural information that describes the process through which the state changes over time—information that often cannot be expressed in terms of equations alone. While simulation suffers all of the disadvantages of experimentalism, it is highly versatile and frequently the only practical means of analyzing complex models.

CONTINUOUS SYSTEM SIMULATION

Digital continuous-system simulation involves the approximate solution of an analog state-variable model over successive time steps. Consider the general state-variable equation [Eq. (1)] to be simulated over the time interval t0 ≤ t ≤ tK. The solution to this problem is based on the repeated solution of the single-variable, single-step subproblem. The subproblem may be stated formally as follows:

Given:

1. Δt(k) = tk − tk−1, the length of the kth time step.
2. dxi/dt = fi[x(t), u(t), t] for t0 ≤ t ≤ tk, the ith equation of state defined for the state variable xi(t) over the kth time step.
3. u(t) for t0 ≤ t ≤ tk, the input vector defined for the kth time step.
4. x̃(k − 1) ≅ x(tk−1), an initial approximation for the state vector at the beginning of the time step.

Find:

5. x̃i(k) ≅ xi(tk), a final approximation for the state variable xi(t) at the end of the kth time step.

Solving this single-variable, single-step subproblem for each of the state variables xi(tk), i = 1, 2, . . . , n, yields a final approximation for the state vector x̃(k) ≅ x(tk) at the end of the kth time step. Solving the complete single-step problem K times over K time steps, beginning with the initial condition x̃(0) ≅ x(t0) and using the final value of x̃(k) from the kth time step as the initial value of the state for the (k + 1)st time step, yields a discrete succession of approximations x̃(1) ≅ x(t1), x̃(2) ≅ x(t2), . . . , x̃(K) ≅ x(tK) spanning the solution time interval. The basic procedure for completing the single-variable, single-step problem is the same regardless of the particular integration method chosen. The procedure consists of two parts:

1. Calculation of the average value of the ith derivative over the time step as

dxi/dt ≅ Δxi(k)/Δt(k) = fi[x(t*), u(t*), t*] ≡ f̃i(k)

2. Calculation of the final value of the simulated variable at the end of the time step as

x̃i(k) = x̃i(k − 1) + Δxi(k) ≅ x̃i(k − 1) + f̃i(k) Δt(k)
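The structure of this repeated single-step solution is easy to sketch in code. In the fragment below, step stands in for whichever integration rule supplies the derivative approximation f̃(k); an Euler-type rule and a hypothetical scalar model dx/dt = −x + u(t) are used as placeholders.

import math

def f(x, u, t):
    # Hypothetical scalar equation of state: dx/dt = -x + u(t).
    return -x + u

def u(t):
    # Hypothetical input: unit step.
    return 1.0

def step(x_prev, t_prev, dt):
    # One single-variable, single-step solution: approximate the average
    # derivative over the step, then advance the state (Euler-type rule).
    f_tilde = f(x_prev, u(t_prev), t_prev)
    return x_prev + f_tilde * dt

# Repeat the single-step solution K times over t0 <= t <= tK.
t0, tK, K = 0.0, 5.0, 500
dt = (tK - t0) / K
x, t = 0.0, t0                      # initial condition x(t0) = 0
for k in range(1, K + 1):
    x = step(x, t, dt)
    t = t0 + k * dt

print(f"approximate x(tK) = {x:.4f}")
print(f"exact       x(tK) = {1.0 - math.exp(-tK):.4f}")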
If the function fi is continuous, then t* is guaranteed to be on the time step; that is, t0 ≤ t* ≤ tk. Since the value of t* is otherwise unknown, however, the value of the derivative can only be approximated as f̃i(k). Different numerical integration methods are distinguished by the means used to calculate the approximation f̃i(k). A wide variety of such methods is available for digital simulation of dynamic systems. The choice of a particular method depends on the nature of the model being simulated, the accuracy required in the simulated data, and the computing effort available for the simulation study. Several popular classes of integration methods are outlined in the following subsections.

Euler Method

The simplest procedure for numerical integration is the Euler or rectangular method. As illustrated in Fig. 5, the standard Euler method approximates the average value of the ith derivative over the kth time step using the derivative evaluated at the beginning of the time step; that is,

f̃i(k) = fi[x̃(k − 1), ũ(k − 1), k − 1]

A modification of this method uses the newly calculated state variables in the derivative calculation as these new
values become available. Assuming that the state variables are computed in numerical order according to the subscripts, this implies

f̃i(k) = fi[{x̃1(k), . . . , x̃i−1(k), x̃i(k − 1), . . . , x̃n(k − 1)}T, ũ(k − 1), k − 1]

The modified Euler method is modestly more efficient than the standard procedure and, frequently, is more accurate. In addition, since the input vector u(t) is usually known for the entire time step, using an average value of the input such as

ū(k) = [1/Δt(k)] ∫_{tk−1}^{tk} u(τ) dτ

frequently leads to a superior approximation of f̃i(k). The Euler method requires the least amount of computational effort per time step of any numerical integration scheme. Local truncation error is proportional to Δt², however, which means that the error within each time step is highly sensitive to step size. Because the accuracy of the method demands very small time steps, the number of time steps required to implement the method successfully can be large relative to other methods. This can imply a large computational overhead and can lead to inaccuracies through the accumulation of roundoff error at each step.

Figure 5. Geometric interpretation of the Euler method for numerical integration.

Runge–Kutta Methods

Runge–Kutta methods precompute two or more values of the derivative in the time step t0 ≤ t ≤ tk and use some weighted average of these values to calculate f̃i(k). The order of a Runge–Kutta method refers to the number of derivative terms (or derivative calls) used in the scalar single-step calculation. A Runge–Kutta routine of order N therefore uses the approximation

f̃i(k) = Σ_{j=1}^{N} wj fij(k)

where the N approximations to the derivative are

fi1(k) = fi[x̃(k − 1), ũ(k − 1), k − 1]

(the Euler approximation) and

fij = fi[x̃(k − 1) + Δt Σ_{l=1}^{j−1} I bjl fil, ũ(tk−1 + Δt Σ_{l=1}^{j−1} bjl)]

where I is the identity matrix. The weighting coefficients wj and bjl are not unique, but are selected such that the error in the approximation is zero when xi(t) is some specified Nth-degree polynomial in t. Because Runge–Kutta formulas are designed to be exact for a polynomial of order N, local truncation error is of the order Δt^(N+1). This considerable improvement over the Euler method means that comparable accuracy can be achieved for larger step sizes. The penalty is that N derivative calls are required for each scalar evaluation within each time step.

Euler and Runge–Kutta methods are examples of single-step methods for numerical integration, so-called because the approximate state x̃(k) is calculated from knowledge of the approximate state x̃(k − 1) without requiring knowledge of the state at any time prior to the beginning of the current time step. These methods are also referred to as self-starting methods, since calculations may proceed from any known state.

Multistep Methods

Multistep methods differ from the single-step methods in that multistep methods use the stored values of two or more previously computed states and/or derivatives in order to compute the derivative approximation f̃i(k) for the current time step. The advantage of multistep methods over Runge–Kutta methods is that these require only one derivative call for each state variable at each time step for comparable accuracy. The disadvantage is that multistep methods are not self-starting, since calculations cannot proceed from the initial state alone. Multistep methods must be started, or restarted in the case of discontinuous derivatives, using a single-step method to calculate the first several steps. The most popular of the multistep methods are the Adams–Bashforth predictor methods and the Adams–Moulton corrector methods. These methods use the derivative approximation

f̃i(k) = Σ_{j=0}^{N} bj fi[x̃(k − j), u(k − j), k − j]

where the bj are weighting coefficients. These coefficients are selected such that the error in the approximation is zero when xi(t) is a specified polynomial. Note that the predictor methods employ an open or explicit rule, since for these methods b0 = 0 and a prior estimate of x̃i(k) is not required. The corrector methods use a closed or implicit rule, since for these methods b0 ≠ 0 and a prior estimate of x̃i(k) is required.
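To make the single-step rules concrete, the sketch below implements a standard Euler step and the classical fourth-order Runge–Kutta step for a hypothetical scalar test problem, comparing their accuracy at the same step size; the classical RK4 weights shown are one common choice of the coefficients discussed above.

import math

def f(x, t):
    # Hypothetical scalar test problem dx/dt = -2x, exact solution x0*exp(-2t).
    return -2.0 * x

def euler_step(x, t, dt):
    # Derivative evaluated once, at the beginning of the step.
    return x + dt * f(x, t)

def rk4_step(x, t, dt):
    # Four derivative calls per step, combined with the classical weights.
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, steps, x0 = 0.1, 20, 1.0
xe = xr = x0
for k in range(steps):
    t = k * dt
    xe = euler_step(xe, t, dt)
    xr = rk4_step(xr, t, dt)

exact = x0 * math.exp(-2.0 * dt * steps)
print(f"exact x(2) = {exact:.6f}")
print(f"Euler x(2) = {xe:.6f}  (error {abs(xe - exact):.2e})")
print(f"RK4   x(2) = {xr:.6f}  (error {abs(xr - exact):.2e})")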
Predictor–Corrector Methods

Predictor–corrector methods use one of the multistep predictor equations to provide an initial estimate (or "prediction") of xi(t). This initial estimate is then used with one of the multistep corrector equations to provide a second and improved (or "corrected") estimate of xi(t) before proceeding to the next step. A popular choice is the four-point Adams–Bashforth predictor together with the four-point Adams–Moulton corrector, resulting in a prediction of

x̃i(k) = x̃i(k − 1) + (Δt/24)[55f̃i(k − 1) − 59f̃i(k − 2) + 37f̃i(k − 3) − 9f̃i(k − 4)]

(for i = 1, 2, . . . , n) and a correction of

x̃i(k) = x̃i(k − 1) + (Δt/24)[9fi[x̃(k), ũ(k), k] + 19f̃i(k − 1) − 5f̃i(k − 2) + f̃i(k − 3)]

Predictor–corrector methods generally incorporate a strategy for increasing or decreasing the size of the time step depending on the difference between the predicted and corrected x̃(k) values. Such variable time-step methods are particularly useful if the simulated system possesses local time constants that differ by several orders of magnitude, or if there is little a priori knowledge about the system response.

Numerical Integration Errors

An inherent characteristic of digital simulation is that the discrete data points generated by the simulation x̃i(k) are only approximations to the exact solution xi(t) at the corresponding point in time. This results from two types of errors that are unavoidable in the numerical solutions. Round-off errors occur because numbers stored in a digital computer have finite word length (i.e., a finite number of bits per word) and therefore limited precision. Because the results of calculations cannot be stored exactly, round-off error tends to increase with the number of calculations performed. Therefore, for a given total solution interval, round-off error tends to increase (i) with increasing integration-rule order (since more calculations must be performed at each time step) and (ii) with decreasing step size Δt (since more time steps are required). Truncation errors or numerical approximation errors occur because of the inherent limitations in the numerical integration methods themselves. Such errors would arise even if the digital computer had infinite precision. Local or per-step truncation error is defined as

e(k) = x̃(k) − x(tk)

given that x̃(k − 1) = x(tk−1) and that the calculation at the kth time step is infinitely precise. For many integration methods, local truncation errors can be approximated at each step. Global or total truncation error is defined as

e(K) = x̃(K) − x(tK)

given that x̃(0) = x(t0) and the calculations for all K time steps are infinitely precise. Global truncation error usually cannot be estimated, nor can efforts to reduce local truncation errors be guaranteed to yield acceptable global errors. In general, however, truncation errors can be decreased by
using more sophisticated integration methods and by decreasing the step size Δt.

Time Constants and Time Steps

As a general rule, the step size Δt for simulation must be less than the smallest local time constant of the model simulated. This can be illustrated by considering the simple first-order system

dx/dt = λx(t)

and the difference equation defining the corresponding Euler integration:

x(k + 1) = x(k) + Δt λx(k) = (1 + λΔt)x(k)
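A quick numerical experiment makes the role of the step size visible. The sketch below iterates this difference equation for several step sizes with the hypothetical choice λ = −1; the stability condition derived next explains the behavior.

lam = -1.0                 # hypothetical first-order system dx/dt = lam * x

def euler_trajectory(dt, steps=20, x0=1.0):
    # Iterate x(k+1) = (1 + dt*lam) * x(k).
    x = x0
    traj = [x]
    for _ in range(steps):
        x = (1.0 + dt * lam) * x
        traj.append(x)
    return traj

for dt in (0.5, 1.9, 2.1):  # below, near, and above the critical step size 2/|lam|
    x_end = euler_trajectory(dt)[-1]
    print(f"dt = {dt:>4}: x after 20 steps = {x_end: .3e}")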
The continuous system is stable for λ < 0, while the discrete approximation is stable for |1 + λΔt| < 1. Therefore, if the original system is stable, the simulated response will be stable only for Δt ≤ 2|λ−1|, where the equality defines the critical step size. For larger step sizes, the simulation will exhibit numerical instability. In general, while higher-order integration methods will provide greater per-step accuracy, the critical step size itself will not be greatly reduced. A major problem arises when the simulated model has one or more time constants |λ−1| that are small when compared to the total solution time interval t0 ≤ t ≤ tK. Numerical stability will then require very small Δt, even though the transient response associated with these higher-frequency subsystems may contribute little to the particular solution. Such problems can be addressed either by neglecting the higher-frequency components where appropriate or by adopting special numerical integration methods for stiff systems.

Selecting an Integration Method

The best numerical integration method for a specific simulation is the method that yields an acceptable global approximation error with the minimum amount of round-off error and computing effort. No single method is best for all applications. The selection of an integration method depends on the model simulated, the purpose of the simulation study, and the availability of computing hardware and software. In general, for well-behaved problems with continuous derivatives and no stiffness, a lower-order Adams predictor is often a good choice. Multistep methods also facilitate estimating local truncation error. Multistep methods should be avoided for systems with discontinuities, however, because of the need for frequent restarts. Runge–Kutta methods have the advantage that these are self-starting and provide fair stability. For stiff systems where high-frequency modes have little influence on the global response, special stiff-system methods enable the use of economically large step sizes. Variable-step rules are useful when little is known a priori about solutions.
Variable-step rules often make a good choice as general-purpose integration methods. Round-off error usually is not a major concern in the selection of an integration method, since the goal of minimizing computing effort typically obviates attendant problems. Double-precision simulation can be used where round-off is a potential concern. An upper bound on step size often exists because of discontinuities in derivative functions, or because of the need for response output at closely spaced time intervals. Digital simulation can be implemented for a specific model in any high-level language such as FORTRAN or C. In addition, many special-purpose continuous system simulation languages are widely available across a wide range of platforms. Such languages greatly simplify programming tasks and typically provide friendly user interfaces and good graphical output.

DISCRETE-EVENT SIMULATION

In discrete-event dynamic models, the independent variable is a discrete index which references a set of events. The events are instantaneous, strictly sequential, and asynchronous, as previously defined. The occurrence of events is determined by inputs and by the internal logic of the system. The system state changes in response to the occurrence of events. For example, consider a simple queuing system comprising a server (called a permanent entity) and a set of jobs (called temporary entities) to be processed by the server. If the server is idle when a new job arrives for processing, then the job typically begins processing immediately. However, if the server is busy when a new job arrives, then the arriving job typically must wait for some or all of the prior jobs to complete processing before the new job can access the server and begin processing. The state of the queuing system is the total number of jobs in the system x(t) at any time t, including any job in process and all jobs waiting. Outputs are performance measures derived by knowing the trajectory of the state over time, including such measures as average cycle time for jobs, the average queue length and waiting time, and the average throughput for the system. Clearly, the state of the system changes in response to two types of events. An arrival event increases the number in system by one job. A departure event (or service completion event) decreases the number in system by one job.
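To make this concrete, the sketch below simulates such a single-server queue with exponentially distributed interarrival and service times (the rates are hypothetical). It uses an event calendar and the next-event time advance described in the subsections that follow.

import heapq
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE = 1.0, 1.25     # hypothetical rates (jobs per unit time)
T_END = 1000.0

# Event list (calendar): (event time, event type) kept in a min-heap.
calendar = [(random.expovariate(ARRIVAL_RATE), "arrival")]
clock, in_system = 0.0, 0                  # simulation clock and state x(t)
area = 0.0                                 # integral of x(t) dt, for the time average
departures = 0

while calendar:
    t, kind = heapq.heappop(calendar)      # most imminent future event
    if t > T_END:
        break
    area += in_system * (t - clock)        # state was constant since the last event
    clock = t
    if kind == "arrival":
        in_system += 1
        if in_system == 1:                 # server was idle: start service at once
            heapq.heappush(calendar, (clock + random.expovariate(SERVICE_RATE), "departure"))
        heapq.heappush(calendar, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
    else:                                  # departure (service completion)
        in_system -= 1
        departures += 1
        if in_system > 0:                  # start service on the next waiting job
            heapq.heappush(calendar, (clock + random.expovariate(SERVICE_RATE), "departure"))

print(f"average number in system: {area / clock:.2f}")
print(f"throughput: {departures / clock:.2f} jobs per unit time")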
Random Number Generation The discrete-event systems of greatest interest are stochastic, with events occurring randomly over time as constrained by the process logic. Inputs to a simulation are random numbers, and the theory of discrete-event simulation is intimately bound to the rich and evolving theory of generating pseudorandom numbers and variates on a computer. (Pseudorandom numbers are real values generated deterministically by a computer algorithm. While completely deterministic, these numbers satisfy standard tests for statistical independence and randomness). In the
queuing example, arrival events typically are determined by generating random variates that define the time between the arrivals of successive jobs. Departure events are determined in part by generating random numbers which define the time required to complete each job, once the job begins processing. Time Advance Mechanism In contrast to continuous systems simulations, which use some form of fixed-increment time advance to determine the next value of the independent variable after each iteration, almost all discrete-event simulations employ a next-event time-advance approach. With the next-event approach, at each time step the state of the system is updated to account for the fact that an event has occurred. Then the times of occurrence of future events are determined. The value of simulated time is then advanced to the time of occurrence of the first or most imminent of these future events. This process advances the simulation time from one event time to the next and is continued until some prescribed stopping event occurs. The next-event approach clearly is more efficient than the fixed increment approach, since computation time is not wasted during periods of inactivity between events when by definition the state cannot change. Logical Components The next-event time-advance approach relies on searching and manipulating data structures that are lists or chains of current and future events. At each time step, the current event list or calendar is scanned to determine the next event to be processed. The processing of an event typically involves linking and unlinking existing and/or newly created events to the lists. The logical components shared by most discrete-event simulations using the next-event timeadvance approach implement these list processing, event processing, accounting, and reporting requirements. These components include the following (3): System Image or State. The collection of state variables necessary to describe the system at a particular time. Simulation Clock. A variable giving the current value of simulated time. Event List or Calendar. A list containing the next time when each type of event will occur. Statistical Counters. Variables used for storing statistical information about system performance. Initialization Routine. A subprogram to initialize the simulation model at time zero. Timing Routine. A subprogram that determines the next event from the event list and then advances the simulation clock to the time when that event is to occur. Event Routines. Subprograms that update the system state when a particular type of event occurs (there is one event routine for each event type). Library Routines. A set of subprograms used to generate random observations from probability distribu-
tions that were determined as part of the simulation model. Report Generator. A subprogram that computes estimates (from the statistical counters) of the desired measures of performance and produces a report when the simulation ends. Main Program. A subprogram that invokes the timing routine to determine the next event and then transfers control to the corresponding event routine to update the system state appropriately. The main program may also check for termination and invoke the report generator when the simulation is over. MODELING AND SIMULATION ISSUES Ziegler (4) proposed a formal theory of modeling and simulation that builds on the ideas of mathematical systems models. Ziegler’s theory encompasses five basic elements: 1. Real Model. The system modeled. It is simply a source of observable data, in the form of input–output pairs (uj , yj ). Typically, there are no other clues available to determine its structure. 2. Base Model. The investigator’s image or mental model of the real system. It is a system that is capable (at least hypothetically) of accounting for the complete behavior of the real system. If the real system is highly complex, the base model also is highly complex. The cost, time, and difficulty of realizing the base model explicitly most often are prohibitive, unwarranted, or impossible. Therefore, as a practical matter, the structure of the base model is, at best, partially known to the investigator. 3. Experiment Frame. The set of limited circumstances under which the real system is to be observed and understood for the purpose of the modeling exercise. It is a restricted subset of the observed output behaviors. 4. Lumped Model. The concept most often associated with the term model. It is a system which is capable of accounting for the output behavior of the real system, under the experiment frame of interest. It is an explicit simplification and partial realization of the base model, with its structure completely known to the investigator. 5. Computer. The means by which the behavior of the lumped model is generated. The computer, in the sense intended by Ziegler, is not necessarily a digital computer. For simple models, it may represent the explicit analytical solution to the model equations, worked out by hand. For more complex models, however, the computer may need to generate individual trajectories step by step, based on instructions provided by the model. This step-by-step process is what is most often associated with the concept of simulation and is usually conducted by using a digital computer. Model theory illuminates the fundamental relationships among these modeling elements. Three of the most important modeling relationships are briefly described below. Validation concerns the relationship between models and the real system. The objective of validation is to en-
sure that a model matches the system modeled, so that the conclusions drawn about the model are reasonable conclusions about the real system as well. A base model is valid to the extent that it faithfully reproduces the behavior of the real system in all experiment frames. On the other hand, a lumped model is valid to the extent that it faithfully matches the real system under the experimental frame for which it is defined. There can be many different lumped models that are valid, and a lumped model can be valid in one experiment frame and not another. Model validation is a deep and difficult issue, and there are many different levels and interpretations of validity. Simplification concerns the relationships between a base model and its associated lumped models. The objective of simplification is to achieve the most efficient and effective lumped model that is valid within the experiment frame for which it is defined. Simplification can be achieved in many ways. These include dropping relatively insignificant system variables and associated structures, replacing deterministic variables and structures with random variables and associated generating functions, coarsening the range set of system variables, and aggregating system variables and structures into larger blocks. Many of the formal ideas associated with simplification, such as isomorphism and homomorphism, concern the preservation of structural similarities between mathematical systems. Simulation, in the sense intended by Ziegler, concerns the relationship between models and the computer. The objective of simulation is to ensure that the computer faithfully reproduces the behavior implied by the model. The behavior of a lumped model must be distinguished from the correctness of its computer implementations or solutions, in the same way that the behavior of a real system must be distinguished from the validity of its models. While a valid model may have been developed, it is also necessary to have a correct simulation. Otherwise, the model solution cannot be used to draw conclusions about the real system. Formal ideas associated with simulation include the completeness, consistency, and ambiguity of the computer implementation. The process of matching a simulation to its lumped model sometimes is known as verification. Many of the same techniques used to validate models are also used to verify simulations.
MODEL VALIDATION One of the most important and most difficult issues in modeling is that of establishing the level of credibility that should be given to a model and, as a consequence, the level of confidence a decision maker should have in conclusions derived from model results. As introduced above, validation is the process of determining whether or not a model is adequate for the specific tasks to which it will be applied. Validation tests the agreement between the behavior of the model and the behavior of the real-world system which is being modeled (5). Validation is the process of bringing to an acceptable level the user’s confidence that any inference about the real-world system derived from the model is correct (6). Validation is not a general seal of approval, but is instead an indication of a level of confidence in the
model’s behavior under limited conditions and for specific purposes—that is, a check on its operational agreement with the real-world system (7). Verification, Validation, Problem Analysis, and Model Assessment A number of concepts have evolved in the literature on model validation which provide useful distinctions between the various related activities involved in evaluating a model for use in support of decision-making. Fishman and Kiviat (5) introduced the now standard division of evaluation activities into three categories: verification, the process of ensuring that a model behaves as the modeler intends it should behave; validation, the process of demonstrating agreement between the behavior of a model and the behavior of the system the model is intended to represent; and problem analysis, the process of interpreting the data and results generated by application of a model. More recently, a number of investigators (7–10) have extended the domain of evaluation activities to include the more general and dynamic notion of model assessment. Model assessment includes not only the activities of verification, validation, and problem analysis, but also model maintenance and quality control, to ensure the continued usability of the model and its readiness for use, and model understanding, to determine the assumptions and limitations of the model, the appropriate and inappropriate uses of the model, and the reasons a model generates the results that it does. Verification is concerned with the internal consistency of a model and the degree to which the model embodies the intent of the modeler. A model that is fully verified is not necessarily an accurate representation of the system modeled, but is instead a precise interpretation of the modeler’s description of the system as he or she intends to represent it. Verification involves tests to ensure that model equations and logic are accurately stated, that the logic and order of model computations are accurately carried out, and that the data and input to the model are correctly interpreted and applied during computations. For computer-based models, verification is closely associated with computer programming and with the techniques of software engineering used to develop readable, reliable computer programs. The techniques of structured programming, in general, are invaluable in the verification of large and complex models. In addition, Law (3) describes several techniques for verification that are perhaps unique to computer simulation modeling. Validation is concerned with the accuracy of a model as a representation of the system modeled. How to measure the validity of a model is problematic, however, and certainly depends upon the intended use of the model. Indeed, no model can ever be entirely valid in the sense of being supported by objective truth, since models are, by nature and design, simplifications of the systems they are intended to represent. As Greenberger et al. (7) suggest, “useful,” “illuminating,” “convincing,” and “inspiring confidence” are more reasonable descriptors of models than is “valid.” Validation issues, tests, and philosophy are the central concerns of this article and are considered in greater detail in the sections which follow.
Problem analysis, or output analysis, is concerned with determining the true parameters of a model and with correctly interpreting data generated by solving the model. As with model verification, problem analysis is largely a model-based activity that says nothing directly about the true parameters or characteristic behavior of the system modeled. Inferences about the behavior of the real system apply only to the extent that the model is valid for the system under study and for the particular analysis applied. Output analysis is a specialized technical subject. Assessment is concerned with determining whether or not a model can be used with confidence for a particular decision problem within a particular decision-making environment. The basic idea behind model assessment, as distinct from the more limited idea of model validation per se, is the accumulation of evidence by independent and dispassionate investigators regarding the credibility and applicability of a model. Model assessment serves many purposes: education, model, development and improvement, theoretical analysis, understanding the model and the process being studied, obtaining insights to aid decision-making, ensuring the reproducibility of results, improving model documentation, making the model more accessible, and determining the utility of the model for a particular decisionmaking situation. Model assessment is typically discussed in the context of models developed (and intended to be institutionalized) for policy analysis, but the evaluation procedure applies equally well to models which are to be used regularly within any decision-making environment. Assessment is important because the decision maker typically has had little or no involvement in the modeling process and therefore requires an independent basis for deciding when to accept and when to reject model results. This basis is difficult to develop without independent evaluation of the impact on model structure and behavior of the assumptions of the model, the availability of data used to calibrate the model, and the other elements of the process implicit in model development. Direct and Indirect Approaches to Model Validation Validation represents a collection of activities aimed at deducing just how well a model captures those behavioral characteristics of the system modeled which are essential for understanding a given decision-making situation. If a good deal is known about the past behavior of the system modeled, or if the system modeled is accessible to study or at least some limited experimentation, then comparison of the behavior of the model with the behavior of the modeled system provides a direct means for model validation. A model is said to be replicatively valid if the data generated by the model correlate (within tolerances established by the investigator) with data collected from the real system, where the data from the real system have been collected prior to development and calibration of the model. A model is said to be predictively valid if data generated by the model correlate with data collected from the real system, where the data from the real system have been collected after the model has been developed and run. Of the two validity tests, predictive validity is the stronger. Replicative validity is typically an objective of the modeling exercise, rather than a test of model validity per se
since every modeler almost certainly endeavors to modify and refine a model until its conformity with known behavioral data is achieved. A model is said to be structurally valid if it not only replicates the data generated by the real system, but does so for the same reasons and according to the same causal mechanisms as the real system. A model which can be shown to be both predictively and structurally valid obviously is highly desirable. In a great many situations, direct approaches to model validation are not available. These are common in futureoriented decision situations in volatile environments, where the potential risks and rewards are greatest. In these situations the real-world system modeled is not well understood, or does not yet actually exist, or is otherwise inaccessible to study or experimentation. The past behavior of the real-world system is either unknown, or provides a poor guide to the likely future behavior of that system. The future behavior of the real-world system cannot be known a priori with certainty. In these cases, the modeler or decision maker must rely upon indirect tests of a model’s credibility. Face validity is the primary objective of the first phase of model development. A model with face validity is one which appears to be reasonable (or does not appear to be unreasonable) to people who are knowledgeable about the system under study. In general, even the most complex models are constructed from simpler primitives. Complexity results from the large number of hypotheses used in constructing the model and from the myriad interactions that can occur between individual model components, rather than from any inherent complexity in the hypotheses as such. Face validity ensures that the individual relationships and hypotheses built into a model are consistent with what is understood or assumed to be true about the real-world system by experts. Face validity also ensures that any relationship that can be rejected based upon prior knowledge and experience will not be incorporated within the model. Face validity is achieved by using all existing information about the system modeled during model development and initial testing. Information sources include observation of the system elements or subsystems, existing theory, conversations with experts, general knowledge, and even the intuition of the modeler. Early involvement of the ultimate user or decision maker in the initial formulation of a model, if possible, along with continued close involvement of the user as the model develops, tends to promote “ownership” of the model on the part of the user. This is the most highly desired form of face validity, since decision makers are far more likely to accept as valid and to use models which they intimately understand and which they have actively helped to develop. Sensitivity analysis, or variable-parameter validity, is a set of quantitative procedures for testing the validity or credibility of assumptions that were made during the initial stages of model development and that have survived the test of face validity. Sensitivity analysis seeks to assess the amount of change in the model state, output, or other critical variables that result from changes in selected model parameters or inputs. One use of sensitivity analysis is to test the sense and magnitude of the impact of one variable upon another, in order to ensure that the changes
induced are intuitive (or, if counterintuitive, to determine the reasons for such changes from the underlying causal structure of the model). A second use of sensitivity analysis is to isolate pairs of inputs and outputs where small changes in the input result in large changes in the corresponding output. Since the model is particularly sensitive to the relationships that couple these pairs, sensitivity analysis can be used in this way to determine those assumptions and hypotheses upon which the model most critically depends. Relationships to which the model is highly sensitive can be singled out for further critical evaluation. In a similar fashion, sensitivity analysis can be used to determine relationships to which the model is insensitive. This introduces the possibility of simplifying the model by reducing the level of detail with which insensitive subsystems are represented. Sensitivity analysis is one way to compensate for uncertainty in a model, but it typically is a difficult, technically demanding, and potentially expensive and time-consuming activity. Monte Carlo techniques can be used to explore formally the distribution of model outcomes resulting from the distributions of uncertainty in model parameters and inputs. Other advanced techniques of statistical sensitivity analysis, such as response surface methods, can also be used to great advantage, when time and budget permit and when the importance of the decision consequences demands. A Framework for Validation Activities Schellenberger (11) developed a general validation framework which is particularly useful in organizing related validation tasks and objectives. This three-part framework includes the ideas of technical validity, operational validity, and dynamic validity. Technical validity concerns the assumptions and data used to formulate a model and is the aggregate of model validity, data validity, logical/mathematical validity, and predictive validity. Model validity conforms to the standard definition of validity and seeks to determine the degree to which the model is an accurate representation of the system modeled. The activities of model validation include identifying and criticizing all the assumptions of a model (stated and implied), such as content assumptions concerning the scope and definition of variables, causal assumptions concerning the nature and extent of cross impacts among variables, and mathematical assumptions concerning the exact form and continuity of model relationships. Data validity addresses the adequacy of raw and processed data. The activities of data validation include determining the accuracy, impartiality, and representativeness of raw data, as well as the effects and potential biases introduced in reducing raw data to the structured form actually used in the model. Logical/mathematical validity conforms to the standard definition of model verification and seeks to determine errors in translating the model into an accurate computer code. The activities of logical/mathematical validation include determining whether computations are accurate and precise, whether the flows of data and intermediate calculations are correct, and whether all of the necessary variables and relationships have been
included within the computer program. Predictive validity addresses the correlation between behavioral data generated by the model and the corresponding data from the system modeled. The activities of predictive validation include statistical tests, analyses of time series data, and even intuitive comparisons of behavioral trends. Operational validity assesses the meaning and importance of technical errors in a model and focuses on differences between the model and the system modeled as determined through technical validation. The fundamental question here is one of degree. Are the differences between the model and the real world sufficient to draw into question the results of the model? Are the insights gained from the model sufficient to overcome concerns about inaccuracies in the specific numerical data generated by the model? Is the model sufficiently robust to yield consistent conclusions for reasonable ranges of parameter variations? Clearly, sensitivity analysis is one means of exploring these questions and, for this reason, is a key element in determining operational validity. Also included under the heading of operational validity activities is the notion of implementation validity. The idea here is to determine the extent to which recommended actions, derived from a model-based study, will have the intended effect, when implemented in the real-world system. Dynamic validity concerns maintaining a model over time in order to extend the usefulness of the model for decision-making. Dynamic validation requires both updating and review. Updating refers to the process through which the need for incremental changes in the model database or model structure is identified and through which the necessary changes are implemented. On a broader scale, review refers to the process through which the success or failure of the model is regularly gauged and through which major changes in, or revalidation of, the model is triggered. Philosophical Perspectives The subject of model validation cannot be divorced from the broader philosophical issue of how, in general, we may know the truth. Epistemology is the branch of philosophy which is concerned with the origins, nature, methods, and limits of knowledge; it is also concerned with what we can know, how we can know it, and how much faith we can have in the validity of our knowledge. One epistemological theory (and most likely the predominant theory among scientists, engineers, and those others who build and use mathematical models) is that all human knowledge that is strictly of a rational nature is fundamentally, inescapably model-based. Our knowledge of any aspect of the real world in this view constitutes an internalized “mental model” of that world, namely, Ziegler’s base model. Because our organized knowledge of the real world is internalized in mental models, such knowledge must be based upon perceptual information passed through our senses. We develop and refine our mental models according to primary sensory information (data), secondary information given us by others (data, theory, and opinion), and reason and logic (a form of verification for mental models that
itself is based upon internalized models). We know that our senses are selective and can be deceived, and therefore we can never have guarantees that our mental models are based upon information that is either complete or entirely correct. We know that information from secondary sources is not immutable, and therefore we cannot rely upon authority for absolute substantiation of our mental models. We know that any logical construct must begin with some fundamental predicate assumption that cannot be tested, and therefore an appeal to pure reason is insufficient to validate our mental models. In summary, it is unlikely that we can know anything with absolute certainty. All knowledge is to a greater or lesser degree personal and subjective, and it is tentative and subject to future revision or rejection (12). This is not to say that necessarily there are no truths to be known, but simply to recognize that our means of knowing these (if these exist) are inherently imperfect. It is within this context that the issue of model validation must be viewed. We can and should test and refine models in an effort to develop a high degree of confidence in their usefulness, but validation in an absolute sense is most likely a quest which belies the fundamental limits of human understanding. Validation in an absolute sense is neither possible nor necessary. Validation Checklist By far the most important test for the validity of a model rests with the question, “Does it make sense?” If the results of a modeling exercise defy the intuition of those most intimately familiar with the real-world system, then it is unlikely that any amount of explanation will ever persuade. By the same token, if the results are reasonable—if these conform to prior experiences and, best of all, offer insight to match our intuitions—then the issue of validation will undoubtedly be resolved favorably. In summary, validation is a continuous process by which confidence and credibility are developed for a model. Shannon (6) leaves us with the following checklist of actions that will ensure that the greatest possible validity is achieved:
Use common sense and logic.
Take maximum advantage of the knowledge and insight of those most familiar with the system under study.
Test empirically all of the assumptions of the model, whenever possible, using the appropriate statistical techniques.
Pay close attention to details, and check and recheck each step of the model-building process.
Use test data and all available means during debugging to ensure that the model behaves as intended.
Compare the input–output transformation of the model with that of the real system, whenever possible, using the appropriate statistical tests (a minimal sketch of such a comparison follows this list).
Run field tests and conduct peripheral research where possible.
Undertake sensitivity analyses of model inputs and parameters.
Check carefully the predictions of the model and the actual results achieved in the real-world system.
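The input–output comparison suggested in the checklist can be as simple as a two-sample test on replicated observations from the model and from the real system. The sketch below computes Welch's t statistic with plain NumPy; the cycle-time data are hypothetical.

import numpy as np

# Hypothetical replicated observations: average cycle time (minutes)
# from the real system and from independent simulation runs.
system_obs = np.array([12.1, 11.8, 12.6, 12.3, 11.9, 12.4, 12.0, 12.2])
model_obs  = np.array([12.5, 12.2, 12.9, 12.4, 12.6, 12.1, 12.8, 12.3])

def welch_t(a, b):
    # Welch's two-sample t statistic and its degrees of freedom.
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (ma - mb) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(system_obs, model_obs)
print(f"difference in means = {system_obs.mean() - model_obs.mean():+.2f} minutes")
print(f"Welch t = {t:.2f} with {df:.1f} degrees of freedom")
# A |t| much larger than about 2 would suggest a real discrepancy between
# the model's output and the system's output at this level of replication.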
BIBLIOGRAPHY

1. K. P. White, Jr., Model theory, in Encyclopedia of Science and Technology, New York: McGraw-Hill, 2002.
2. G. E. P. Box and N. R. Draper, Response Surfaces, Mixtures, and Ridge Analyses, 2nd ed., New York: Wiley, 2007.
3. A. M. Law, Simulation Modeling and Analysis, 4th ed., New York: McGraw-Hill, 2007.
4. B. P. Ziegler, Theory of Modelling and Simulation, New York: Wiley, 1976.
5. G. S. Fishman and P. J. Kiviat, The statistics of discrete-event simulation, Simulation, 10: 185–195, 1968.
6. R. E. Shannon, Systems Simulation: The Art and Science, Englewood Cliffs, NJ: Prentice-Hall, 1975.
7. M. Greenberger, M. A. Crenson, and B. L. Crissey, Models in the Policy Process: Public Decision Making in the Computer Era, New York: Russell Sage Foundation, 1976.
8. S. I. Gass, Evaluation of complex models, Comput. Oper. Res., 4: 27–35, 1977.
9. S. I. Gass, Decision-aiding models: Validation, assessment, and related issues for policy analysis, Oper. Res., 31: 603–631, 1983.
10. US Government Accounting Office, Guidelines for Model Evaluation, Report No. PAD-79-17, Washington, DC: US Government Accounting Office, 1979.
11. R. E. Schellenberger, Criteria for assessing model validity for managerial purposes, Decis. Sci., 5: 644–653, 1974.
12. M. Polanyi, Personal Knowledge: Toward a Post-Critical Philosophy, New York: Harper and Row, 1958.
Reading List

J. Banks, B. L. Nelson, J. S. Carson, and D. M. Nicol, Discrete-Event System Simulation, Englewood Cliffs, NJ: Prentice Hall, 2004.
J. Banks and R. R. Gibson, Selecting simulation software, IIE Solutions, 29 (5): 30–32, 1997.
B. S. Bennett, Simulation Fundamentals, Upper Saddle River, NJ: Prentice-Hall, 1996.
G. Gordon, System Simulation, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1978.
C. Harrell and K. Tumay, Simulation Made Easy: A Manager's Guide, Norcross, GA: IIE Press, 1995.
W. D. Kelton, R. P. Sadowski, and D. T. Sturrock, Simulation with Arena, 4th ed., New York: McGraw-Hill, 2007.
N. A. Kheir (ed.), Systems Modeling and Computer Simulation, 2nd ed., New York: Marcel Dekker, 1996.
D. G. Luenberger, Introduction to Dynamic Systems: Theory, Models, and Applications, New York: Wiley, 1978.
R. H. Myers and D. C. Montgomery, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 2nd ed., New York: Wiley, 2002.
A. P. Sage and C. C. White, Optimum Systems Control, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1977.
T. J. Schriber and D. T. Brunner, Inside simulation software: How it works and why it matters, Proc. 2006 Winter Simulation Conf., 2006.
W. Thissen, Investigations into the World 3 model: Lessons for understanding complicated models, IEEE Trans. Syst., Man Cybern., 8: 183–193, 1978.
K. P. White, Jr., Modeling and simulation, in M. Kutz (ed.), Handbook of Mechanical Engineering, 3rd ed., New York: Wiley, 2005.
K. PRESTON WHITE Jr. Professor of Systems and Information Engineering, University of Virginia, Charlottesville, VA
Wiley Encyclopedia of Electrical and Electronics Engineering

Petri Nets and Their Applications
MengChu Zhou, New Jersey Institute of Technology, Newark, NJ
Venkatesh Kurapati, American International Group, Inc., New York, NY
Huanxin Henry Xiong, Lucent Technologies, Inc., Eatontown, NJ
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7111
Article Online Posting Date: December 27, 1999
The sections in this article are: Formal Definition; Properties of Petri Nets and Their Implications; Types of Petri Nets; Modeling, Scheduling, and Control; Multidisciplinary Engineering Applications.
PETRI NETS AND THEIR APPLICATIONS Petri nets (PN) were named after Carl A. Petri, who created a netlike mathematical tool for the study of communication with automata in 1962. Their further development stemmed from the need to specify process synchronization, asynchronous events, concurrent operations, and conflicts or resource sharing for a variety of industrial automated systems at the discrete-event level (1–3). Computer scientists performed most of the early studies. Starting in the late 1970s, researchers with engineering backgrounds, particularly in manufacturing automation, investigated Petri nets’ possible usage in human-made systems. These systems have become so complicated that the continuous/discrete-time systems theory has become insufficient to handle them. In any physical net, we can find two basic elements: nodes and links. Both nodes and links play their own roles. For example, forces could be transferred from one end to another through nodes and links; and different nodes/links may bear different forces. A PN divides nodes into two kinds: places and transitions. The places are used to represent the condition and/or status of a component in a system and are pictured by circles. The transitions represent the events and/or operations and are pictured by empty rectangles or solid bars. Two common events are “start” and “end.” Instead of bidirectional links in some physical nets, a PN utilizes directed arcs to connect from places (called input places with respect to a transition) to transitions or from transitions to places (called output places). In other words, the information transfer from a place to a transition or from a transition to a place is one way. Two-way transfer between a place and transition is achieved by designing an arc from a transition to a place and another arc from the transition back to the place. The places, transitions, and directed arcs make a PN a directed graph, called a Petri net structure. It is used to model a system’s structure. A system state is defined by the location of “state markers” in the places of a PN. These state markers are called “tokens” for short. The dynamics are introduced by allowing a place to hold either none or a positive number of tokens pictured by small solid dots. These tokens could represent the number of resources or indicate whether a condition is true or not in a place. When all the input places hold enough tokens, an event embedded in a transition can happen, called transition firing. This firing changes the token distribution in the places, signifying the change of system states. The introduction of tokens and their flow regulated through transitions allow one to visualize the material, control, and/or information flow clearly. Furthermore, one can perform a formal check of the properties related to the underlying system’s behavior (e.g., precedence relations among events, concurrent operations, appropriate synchronization, freedom from deadlock, repetitive activities, and mutual exclusion of shared resources). PNs belong to state-transition models. The simplest state-transition model is a state machine. Its graphical representation is a state diagram. In a state diagram, a node pictured as a circle represents a state that characterizes
the conditions of all the components in a system at a time instant. An arc represents an event that changes one state to another. Note that in a PN, a transition represents an event and an arc represents information, control, or material flow. For example, in a robotic assembly system, the initial state is that a robot is ready to pick up a component and a component is ready for pick-up. Then the event "start a pick-up operation" brings the system into the state "the robot holds a component" or "the robot fails to pick it up." At the new state, the next event can occur.

State-machine models are suitable when a system has few active components, such as a robot or machine, or can be described with only a few states. A single robot may need two states, "idle" and "busy," and a dual-robot system requires four states. However, a 20-robot system requires over a million states. Clearly, the number of states grows exponentially with the system size. It is difficult to represent systems with an infinite number of states. In a state diagram, the states, events, precedence relations, and conflicting situations are explicitly represented, but synchronization concepts, concurrent operations, and mutually exclusive relations are not. A formalism that can overcome some of the limitations of state-machine modeling is desired to handle complex systems. It should use local states rather than global states, thereby avoiding the state-enumeration problem in the modeling/design stage. It can explicitly represent conditions and events together, precedence relations, conflicting situations, synchronization concepts, concurrent and repetitive operations, and mutually exclusive relations. PNs are such a formalism.

FORMAL DEFINITION

A marked Petri net (PN) Z = (P, T, I, O, m) is a five-tuple, where

1. P = {p1, p2, . . . , pn}, n > 0, is a finite set of places pictured by circles;
2. T = {t1, t2, . . . , ts}, s > 0, is a finite set of transitions pictured by bars, with P ∪ T ≠ ∅ and P ∩ T = ∅;
3. I: P × T → N is an input function that defines the set of directed arcs from P to T, where N = {0, 1, 2, . . . };
4. O: P × T → N is an output function that defines the set of directed arcs from T to P;
5. m: P → N is a marking whose ith component represents the number of tokens in the ith place. An initial marking is denoted by m0. Tokens are pictured by dots.

The four-tuple (P, T, I, O) is called a PN structure, which defines a directed graph structure. The introduction of tokens into places and their flow through transitions enable one to describe and study the discrete-event dynamic behavior of the PN, and thereby the modeled system. I and O can be represented by two n × s nonnegative integer matrices, and the incidence matrix is defined as C = O − I. A PN can alternatively be defined as (P, T, F, W, m), where F is a subset of (P × T) ∪ (T × P) representing the set of all arcs, and W: F → N defines the multiplicity of the arcs.
Figure 1. A Petri net example modeling a robot that picks up a component and places it.
A marked PN shown in Fig. 1 and its formal description are given as follows: P = {p1, p2, p3}; T = {t1, t2}; I(p1, t1) = I(p2, t1) = I(p3, t2) = 1 and I(p, t) = 0 otherwise; O(p3, t1) = O(p2, t2) = 1 and O(p, t) = 0 otherwise; and m0 = (1 1 0)τ.

Input and output functions can be represented as matrices; that is, with rows corresponding to p1–p3 and columns to t1–t2,

I = [1 0; 1 0; 0 1],  O = [0 0; 0 1; 1 0].

The incidence matrix is

C = O − I = [−1 0; −1 1; 1 −1].
The execution rules of a PN include enabling and firing rules:

1. A transition t ∈ T is enabled if and only if m(p) ≥ I(p, t), ∀ p ∈ P.
2. Enabled in a marking m, t fires and results in a new marking m′ following the rule m′(p) = m(p) − I(p, t) + O(p, t), ∀ p ∈ P.
Marking m′ is said to be (immediately) reachable from m. The enabling rule states that if all the input places of transition t have enough tokens, then t is enabled. This means that if the conditions associated with the occurrence of an event are all satisfied, then the event can occur. The firing rule says that when an enabled transition t fires, its firing removes I(p, t) tokens from each place p and then deposits O(p, t) tokens into each place p. In Fig. 2(a), transition t1 is enabled since m(p1) = 1 = I(p1, t1) and m(p2) = 1 = I(p2, t1); t2 is not, since m(p3) = 0 < 1 = I(p3, t2). Firing t1 removes one token from p1 and p2, respectively, and deposits one token into t1's only output place p3. The result is shown in Fig. 2(b). The new marking is m′ = (0 0 1)τ. At m′, t2 is enabled since m′(p3) = 1 = I(p3, t2). Firing t2 results in m″ = (0 1 0)τ. At this marking no transition is enabled. Note that t1 is not, since m″(p1) = 0 < 1 = I(p1, t1) although m″(p2) = 1 = I(p2, t1).
Figure 2. Evolution of markings of a PN: (a) m = (1 1 0)τ, (b) m′ = (0 0 1)τ, and (c) m″ = (0 1 0)τ. This illustrates the state change of the modeled system.
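The enabling and firing rules can be stated compactly in code. The following Python sketch (the class and method names are illustrative, not part of this article) encodes the net of Fig. 1 through its I and O matrices and reproduces the marking evolution of Fig. 2.

```python
# Minimal sketch of the enabling and firing rules for the net of Figs. 1 and 2.
# The class name and method names are illustrative, not taken from the article.

class PetriNet:
    def __init__(self, I, O, m0):
        self.I = I          # input function, n x s matrix (places x transitions)
        self.O = O          # output function, n x s matrix
        self.m = list(m0)   # current marking, one token count per place

    def enabled(self, t):
        # t is enabled iff every place holds at least I(p, t) tokens
        return all(self.m[p] >= self.I[p][t] for p in range(len(self.m)))

    def fire(self, t):
        # firing removes I(p, t) tokens from each place p and deposits O(p, t)
        if not self.enabled(t):
            raise ValueError(f"transition t{t + 1} is not enabled")
        self.m = [self.m[p] - self.I[p][t] + self.O[p][t] for p in range(len(self.m))]
        return self.m

# Net of Fig. 1: rows are p1-p3, columns are t1-t2, initial marking (1 1 0)
I = [[1, 0], [1, 0], [0, 1]]
O = [[0, 0], [0, 1], [1, 0]]
net = PetriNet(I, O, m0=[1, 1, 0])

print(net.fire(0))                      # firing t1 gives [0, 0, 1], as in Fig. 2(b)
print(net.fire(1))                      # firing t2 gives [0, 1, 0]
print(net.enabled(0), net.enabled(1))   # False False: the net is now deadlocked
```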
PROPERTIES OF PETRI NETS AND THEIR IMPLICATIONS PNs as a mathematical tool have a number of properties. These properties, when interpreted in the context of the modeled manufacturing system, allow one to identify the presence or absence of the functional properties of the system. Two types of properties can be distinguished: behavioral and structural. The behavioral properties are those that depend on the initial state or marking of a PN. The structural properties, on the other hand, do not depend on the initial marking of a PN. They depend on the Petri net topology or structure.
Reachability

Given a PN Z = (P, T, I, O, m0), marking m is reachable from marking m0 if there exists a sequence of transition firings that transforms m0 to m. Marking m′ is said to be immediately reachable from m if firing an enabled transition in m leads to m′. R is used to represent the set of all reachable markings. Reachability checks whether the system can reach a specific state or exhibit particular functional behavior.
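Because reachability is defined by repeated application of the firing rule, the set R can be enumerated by a breadth-first search over markings whenever R is finite (the reachability-tree idea discussed below). The following Python sketch is one illustrative way to do this; the function name is an assumption, and the example reuses the matrices of the Fig. 1 net.

```python
# Sketch of reachability analysis by breadth-first enumeration of markings.
# Practical only when the reachability set R is finite; names are illustrative.

from collections import deque

def reachable_markings(I, O, m0):
    n, s = len(I), len(I[0])

    def enabled(m, t):
        return all(m[p] >= I[p][t] for p in range(n))

    def fire(m, t):
        return tuple(m[p] - I[p][t] + O[p][t] for p in range(n))

    seen = {tuple(m0)}
    queue = deque([tuple(m0)])
    dead = []                      # markings at which no transition is enabled
    while queue:
        m = queue.popleft()
        successors = [fire(m, t) for t in range(s) if enabled(m, t)]
        if not successors:
            dead.append(m)
        for m_next in successors:
            if m_next not in seen:
                seen.add(m_next)
                queue.append(m_next)
    return seen, dead

# Net of Fig. 1 again: R = {(1,1,0), (0,0,1), (0,1,0)}, with dead marking (0,1,0)
I = [[1, 0], [1, 0], [0, 1]]
O = [[0, 0], [0, 1], [1, 0]]
R, dead = reachable_markings(I, O, (1, 1, 0))
print(sorted(R))   # all reachable markings
print(dead)        # [(0, 1, 0)] -- the deadlock discussed in the text
```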
Boundedness and Safeness

Given a PN Z and its reachability set R, a place p ∈ P is B-bounded if m(p) ≤ B, ∀ m ∈ R, where B is a positive integer. Z is B-bounded if each place in P is B-bounded. Safeness is 1-boundedness. Z is structurally bounded if Z is B-bounded for some B, for any finite initial marking m0. Places are frequently used to represent storage areas for parts, tools, pallets, and automated guided vehicles in manufacturing systems. Boundedness is used to identify the absence of overflows in the modeled system. When a place models an operation, its safeness guarantees that the controller does not attempt to initiate an operation that is already in progress. The concept of boundedness is often interpreted as stability of a discrete manufacturing system, particularly when it is modeled as a queuing system.

Liveness

A transition t is live if at any marking m ∈ R there is a sequence of transitions whose firing reaches a marking that enables t. A PN is live if every transition in it is live. A transition t is dead if there is m ∈ R such that there is no sequence of transition firings to enable t starting from m. A PN contains a deadlock if there is m ∈ R at which no transition is enabled. Such a marking is called a dead marking. Deadlock situations are a result of inappropriate resource allocation policies or exhaustive use of some or all resources. For example, a deadlock may occur when a system is jammed or when two or more processes are in a circular chain, each of which waits for resources held by the process next in the chain. Liveness of a PN means that for any marking m reachable from the initial marking m0, it is ultimately possible to fire any transition in the net by progressing through some firing sequence. Therefore, if a PN is live, there is no deadlock. For a discussion of other properties, refer to Refs. 2–4.

The net shown in Fig. 1 is safe since each place holds at most one token. It is not live since the net contains a deadlock; that is, at marking (0 1 0)τ, no transition is enabled. Note that this deadlock results from the exhaustion of the tokens in place p1. Only markings (0 0 1)τ and (0 1 0)τ are reachable from the initial marking (1 1 0)τ. Others are not. Starting from the initial system condition or state, it is desired to enumerate all the possible states the system can reach, as well as their relationships. The resulting representation is called a reachability tree or graph, and the resulting method is called the reachability analysis method. All the behavioral properties can be discovered if the number of states is finite.

TYPES OF PETRI NETS

Marked Graphs

A marked graph is a PN such that each place has exactly one input and one output arc. It is also called an event graph and is used to model decision-free concurrent and repetitive systems exhibiting no choice. These systems include robotic manufacturing cells, transportation systems, job shops, and machine centers where the sequences of jobs
or car movements are fixed.

Extended PNs

To increase the modeling power of PNs, they can be extended by including inhibitor arcs to test whether a place has no token. An inhibitor arc connects an input place to a transition and is pictorially represented by an arc terminated with a small circle. In the presence of an inhibitor arc, a transition is enabled if m(p) ≥ I(p, t) for each input place connected to t by a normal directed arc and no tokens are present on any input place connected to t by an inhibitor arc. The transition firing rules are the same for normally connected places. The firing, however, does not change the marking in the inhibitor-arc-connected places. Another way to enhance the PN's modeling power is to assign priority over the transitions. The resulting nets are called extended PNs. They greatly facilitate the PN modeling of complex systems. They can model whatever systems a Turing machine can.

Timed PNs

A deterministic timed Petri net (DTPN) is defined as a marked graph in which either zero or positive time delays are associated with places, transitions, and/or arcs. The cycle time of a strongly connected deterministic timed PN is determined as follows:

π = max {Di/Ni}

where the maximum is taken over all loops i, Di is the total time delay of loop i, and Ni is the token count of loop i. The ratio Di/Ni is called the cycle time of loop i. A DTPN can be used to analyze a system's cycle time and determine the bottleneck machine of a concurrent and repetitive system such as a robotic cell or a job shop.

Stochastic PNs

Associating random time delays with exponential distributions yields stochastic PNs (SPNs). SPN models that also allow immediate transitions (i.e., with zero time delay) are called generalized SPNs (GSPNs) (5). Both models may include extensions, such as priority transitions and inhibitor arcs, and can be converted into their equivalent Markov processes for analysis. They can be used to model and evaluate flexible manufacturing systems, random polling systems, concurrent programs, and concurrent computer architectures. Their use avoids the initial enumeration of all states, which is needed if Markov processes are used at the beginning; the latter is impossible for a large system.

High-Level PNs

Allowing different tokens, enabling rules, and executing rules in a PN leads to colored PNs. They can offer a compact representation of a system with many similar subsystems. Allowing predicates in transitions leads to predicate-transition nets. Embedding attributes, procedures, or objects into tokens, places, and/or transitions in a PN leads to object-oriented PNs. All these high-level nets find their applications in the design and development of information systems
and complicated software systems. Other special nets include state-machine PNs, free-choice PNs, asymmetric-choice PNs, production-process nets and augmented marked graphs, (dis)assembly PNs, augmented timed PNs, and real-time PNs (6, 8).

MODELING, SCHEDULING, AND CONTROL

Modeling is a fundamental step for all the applications of PNs. We illustrate a general modeling method through a production system. The system consists of two machines, M1 and M2, and one robot R. It processes two types of jobs, J1 and J2. Both have to go through M1 and M2 sequentially but require different processing times. J1 also needs Robot for its part holding. There is one buffer slot assigned to each of J1 and J2 between their two processes. The number following the resource requirement in Table 1 is the processing time.

Figure 3. A Petri net model of a production system: two jobs are to be processed by M1 and M2. Job 1 needs Robot for its loading and unloading.
First, we identify the operations/status as follows: J1 : M1 processing J1 , J1 ’s part in its buffer, and M2 processing J1 ; and
J2 : M1 processing J2 , J2 ’s part in its buffer, and M2 processing J2 . The resources include M1 , M2 , R, J1 , and J2 ’s raw parts, buffer slots, and final products. Next, we identify the relationships among the preceding operations/status. Each job’s routing is fixed, and its operations form a precedence relation. Two jobs do not need to follow each other, however. Third, we design and label the places that represent operations and resources (i.e., p1 –p15 , as shown in Fig. 3). Each operation and resource corresponds to a unique place. We arrange the operations in a series for each job since they form a precedence relationship. We designate a transition that starts at Jk and one that ends at Jk , k = 1 and 2. We insert transitions between two operation places if they have a precedence relationship. Thus we have t1 –t8 in Fig. 3. For each transition, we draw an input arc to it from a place if enabling it requires the resource(s) or the completion of the operation(s) represented in the place; we draw an output arc from it to a place if firing it releases resource(s) or signals the initiation of the operation in the place. We take t1 as an example. We link p1 , p11 , and p13 to it since availability of J1 ’s raw part, M1 , and Robot is required to start the operation in p3 . We link it to p3 since its firing leads to the operation in p3 . Finally, we determine the initial number of tokens over all places according to the system’s initial state. If initially either of J1 and J2 has only one raw part, then the initial marking is the one shown in Fig. 3. We associate time delays with places. Hence, p3 , p4 , p7 , and p8 are associated
with 1, 4, 4, and 1 time unit, respectively, and others with zero. Once we have its PN model, we can perform analysis, scheduling, and control of the system. The purpose of analysis is to check the properties discussed previously. Scheduling aims to derive a schedule that optimizes a certain performance index. Control deals with the coordination and execution of part flow and processing. The controller keeps track of system states, such as the location of all parts and the status of each resource. Based on the current state and the production plan, the controller supervises all the individual system components. Sensors and actuators have to be connected to the controller, which is often implemented as a computer or a microprocessor chip.

Consider that both jobs 1 and 2 are in the shop and ready for processing at time 0, with each job having a lot size of 1. We seek a production schedule that minimizes the time required to complete both jobs. The initial marking is (1 1 0 0 0 0 0 0 0 0 1 1 1 1 1)T, and the final one is (0 0 0 0 0 0 0 0 1 1 1 1 1 1 1)T. Both transition firing sequences t1 t3 t2 t5 t4 t7 t6 t8 and t2 t4 t1 t6 t3 t8 t5 t7 give a path from the initial marking to the final one. The production activities and markings corresponding to the first sequence are shown in Table 2. The Gantt charts in Fig. 4 show the two schedules, with the makespan of the first being 6 and that of the second being 9. Oi,j,k represents the jth operation of the ith job being performed at the kth machine. Clearly, the first sequence should be selected as our schedule.

Figure 4. Schedules represented by transition firing sequences (a) t1 t3 t2 t5 t4 t7 t6 t8 and (b) t2 t4 t1 t6 t3 t8 t5 t7. The numbers in parentheses are time units.

In automated manufacturing, a deadlock situation may occur due to inappropriate allocation of resources and control, in which any part flow is inhibited. For example, suppose that the lot size of Job 1 in the preceding system is 2. Then the firing of transitions t1 t3 t1 leads the system from the initial state (2 1 0 0 0 0 0 0 0 0 1 1 1 1 1)T to the deadlock (0 1 1 0 1 0 0 0 0 0 0 1 0 0 1)T, at which any further part flow is inhibited. Algorithms (8) are available to derive optimal or near-optimal deadlock-free schedules based on a system's PN model. PN applications to complex manufacturing systems are discussed in (2–9).

MULTIDISCIPLINARY ENGINEERING APPLICATIONS
Typically, discrete event systems have characteristics that exhibit synchronization, concurrency, resource sharing/conflicts, time dependency, and repetition. Since such systems are omnipresent in the real world, PNs are modified and extended in several ways and applied as a diversified modeling technique crossing several important disciplines of engineering, such as electrical and computer engineering, manufacturing engineering, industrial engineering, software engineering, biomedical engineering, and systems engineering. Electrical and Computer Engineering: PNs are applied for modeling and analyzing communication protocols, validating microprocessors and hardware, and performing process control. Computer-aided software tools are developed using PNs as a formal specification technique for the specification and analysis of computer communication protocols. These tools are used for interactive simulation and exhaustive reachability analysis to determine the liveness (absence of deadlocks) of PN models of communication protocols. By studying the transition sequences in the PN model, events that lead to the undesired behavior of a protocol can be traced. By associating time delays with certain transitions in the PN model, the time required to perform certain operations of a communication protocol can be modeled. Then the performance issues of a protocol, such as total time taken by the protocol to do a job, buffer holding times, and buffer requirements in the communication subsystem, can be investigated. Generalized stochastic PNs are applied for performance evaluation of interprocessor interrupt mechanisms in a shared but multi-microprocessor system (5). Performance issues, such as each processor’s interrupt request origination rate and capacity of message box versus the mean overhead time per interrupt request of each source pro-
cessor, can be analyzed. PN models are also used to design distributed communication structures in interprocess communications to recognize and avoid deadlocks. Programmable logic controllers (PLC) are used for the sequential control or process control of manufacturing systems, chemical processing systems, and power plants. Traditionally the PLC programs are developed by ladder logic diagrams. However, to overcome their limitations in dealing with complex systems, PNs have recently been used to develop PLC programs; thus understandable and maintainable control systems can be developed for large automation projects (6, 7). Manufacturing and Industrial Engineering PNs serve as a graphical modeling tool for specification of the operations in a computer integrated manufacturing system (CIMS). When developed, they are used for analysis, design, scheduling, and simulation of CIMS to study such performance criteria as production rate, resource utilization, work-in-process inventory, number of resources needed, and cost of production. Alternatives can be investigated by changing the parameters, such as operational policies, operation times, number of resources, and mean time between failures of a resource. PNs are also used for supervisory control, sequence control, sequence control, fault detection, fault recovery, and monitoring of CIMS. Some studies integrate such techniques as expert systems and neural networks with PNs for controlling and monitoring of CIMS. PNs are used for rapid prototyping of control software using different programming languages (10). They are also integrated with object-oriented methodologies for the development of control software (6). Software Engineering PNs have been used for addressing several issues related to database systems, operating systems, distributed software systems, programming languages, etc. Concurrency control is one of the problems when distributed database systems are implemented. This problem involves modeling synchronization and concurrency among database transactions and how these transactions access data in a database. PNs are successfully used to model concurrency control of distributed databases. Concurrency control algorithms, such as centralized locking algorithm and distributed locking algorithm, are studied for their performance via their PN models. One of the main tasks in developing complex software systems is to design the system to be fault tolerant such that it can detect and eliminate the faults that may arise due to erroneous data, undetected hardware failures, and design flaws in hardware/software. Predicate/transition nets are used as a formal specification tool to describe and model complex software systems. These resulting PN models are then used systematically to integrate fault tolerance properties in the design of these software systems. PNs are combined with concurrent computer programming languages such as Ada and Flat Concurrent Prolog to study certain aspects of computer programs that have concurrency. The complete PN model of a computer program clearly models all the parallel tasks
and their sequence dependence in the program. It is then used to detect deadlocks and hence debug the program.

BIBLIOGRAPHY

1. R. David and H. Alla, Petri Nets and Grafcet, Englewood Cliffs, NJ: Prentice Hall, 1992.
2. A. A. Desrochers and R. Y. Al-Jaar, Applications of Petri Nets in Manufacturing Systems, Piscataway, NJ: IEEE Press, 1995.
3. M. C. Zhou and F. DiCesare, Petri Net Synthesis for Discrete Event Control of Manufacturing Systems, Boston: Kluwer Academic, 1993.
4. T. Murata, Petri nets: Properties, analysis and applications, Proc. IEEE, 77: 541–580, 1989.
5. M. Ajmone Marsan et al., Modeling with Generalized Stochastic Petri Nets, Chichester, England: Wiley, 1995.
6. M. C. Zhou and K. Venkatesh, Modeling, Simulation and Control of Flexible Manufacturing Systems: A Petri Net Approach, River Edge, NJ: World Scientific, 1998.
7. M. C. Zhou (ed.), Petri Nets in Flexible and Agile Automation, Boston: Kluwer Academic, 1995.
8. J.-M. Proth and X. Xie, Petri Nets: A Tool for Design and Management of Manufacturing Systems, New York: Wiley, 1996.
9. M. C. Zhou and M. P. Fanti (eds.), Deadlock Resolution in Computer-Integrated Systems, New York: Marcel Dekker, 2005.
10. B. Hruz and M. C. Zhou, Modeling and Control of Discrete Event Dynamic Systems, London, UK: Springer, 2007.
MENGCHU ZHOU VENKATESH KURAPATI HUANXIN HENRY XIONG New Jersey Institute of Technology, Newark, NJ American International Group, Inc., New York, NY Lucent Technologies, Inc., 10 Industrial Way East, Eatontown, NJ
Wiley Encyclopedia of Electrical and Electronics Engineering Quality Control Standard Article Fatemeh Mariam Zahedi1 1University of Wisconsin—Milwaukee, Milwaukee, WI Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7117 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (172K)
Abstract The sections in this article are What Is TQM? Quality Principles Quality Awards Quality Standards Components of TQM Quality Information Systems Data and Information Quality Quality Tools Quality Metrics Analysis of Quality Metrics
QUALITY CONTROL

What Is TQM?

Total quality management (TQM) is a collection of principles, concepts, tools, and processes, all designed to promote quality within an organization and its functions, as well as in interactions with its customers, suppliers, and the environment. The word total emphasizes the all-encompassing nature of TQM (all processes, functions, and people). The word management adds organizational and behavioral components to the technical scope of quality, emphasizing its business focus.

There is no universal agreement on the definition of quality. However, a number of leaders in the quality movement have provided definitions that have a common focus on the customer's needs, requirements, and expectations. Feigenbaum (1) defines quality as "[t]he total composite of product and service characteristics of marketing, engineering, manufacturing, and maintenance through which the product and service in use will meet the expectations of the customer." Other definitions of quality include "conformance to requirements" (2) and "fitness for use" (3).

The quality movement has its origin in the industrial revolution, which replaced craftsmen's pride in the quality of their work with industrial workers focused on one aspect of production and removed from the quality of the final product. The first systematic attempt at increasing quality was scientific management theory, pioneered by Frederick Winslow Taylor (4). In this theory, production was considered a closed system, which was to be improved by scientific and technical methods. The human behavior movement started with a series of studies (mostly performed at Western Electric's Hawthorne plant), which showed that the supervisors' special attention to workers increased their productivity (5). This was called the Hawthorne effect and introduced a behavioral approach to management.

The attention to the quality of industrial production coincided with the development of modern statistical theory, pioneered by Sir Ronald A. Fisher, a British statistician working at the turn of the century. In the early 1920s, Walter A. Shewhart, a physicist at AT&T Bell Laboratories, used Fisher's work to develop control charts for controlling quality. As statistics grew in theory and found more applications in business and economics, the use of statistics in management decisions, including quality control, took a firm hold. The development and application of mathematical models (such as linear programming and optimization) for logistic decisions in World War II increased the use of quantitative methods in management decision processes.

It was in this historical context that W. Edwards Deming, a physicist and later a statistician, began his advocacy for the use of statistics in the service of industry and humanity (6). As a graduate student, Deming worked at the Hawthorne plant (10 years earlier than the Hawthorne experiments) and saw the impact of the industrial division of labor (7). In 1938, he invited Shewhart to deliver a lecture series on the use of statistical methods for quality control, and worked with Shewhart in aiding the war effort (8). After the war, Deming and later Juran contributed extensively to the revival of Japan's economy by advising Japanese industry on the total quality approach, which began the global quality movement. In 1956, Feigenbaum (another TQM guru) introduced the concept of the cost of quality, which rejects the idea that higher quality means a higher cost. He advocated the development of measures and the collection of
data for the cost of quality. In 1962, Kaoru Ishikawa, a professor at Tokyo University, started quality circles in Japan, which consisted of small, voluntary teams of workers who developed and monitored quality-control activities in their units, as a part of a company-wide quality program. Quality circles were the forerunners of TQM’s quality team concept.
Quality Principles Quality gurus, Deming, Juran, Crosby, Feigenbaum, and Taguchi, each advocated a set of principles in promoting quality. Deming emphasized Seven Deadly Diseases in the U.S. quality crisis as (1) lack of constancy of purpose, (2) emphasis on short-term profits, (3) evaluation of people by rating and annual review, thus destroying teamwork and creating rivalry and fear, (4) management mobility, leading to inadequate understanding of how the organization works and lack of incentive for long-range planning, (5) managing organizations by visible numbers only, (6) excessive employee health-care costs, and (7) excessive warranty costs, encouraged by lawyers. To counter these problems, Deming proposed a 14-point solution. (1) Create constancy of purpose; (2) adopt quality and customer orientation philosophy; (3) cease dependence on inspection to achieve quality; (4) end the practice of rewarding business on the basis of price tag; (5) improve constantly and forever; (6) institute training; (7) institute leadership; (8) drive out fear; (9) break down barriers between departments; (10) eliminate slogans, exhortations, and arbitrary numerical goals; (11) eliminate numerical quotas; (12) remove barriers that rob employees of their pride of workmanship; (13) institute a vigorous program of education and self-improvement; and (14) take action to accomplish the transformation. Juran advocated managing quality in three parts (9): planning, control, and improvement. He recommended seven breakthrough sequences: (1) breakthrough attitude, (2) identify the few vital projects, (3) organize for breakthrough in knowledge, (4) conduct analysis, (5) determine how to overcome resistance to change, (6) institute the change, and (7) institute controls. Crosby has a 14-point set of principles (10). Feigenbaum was first to coin the term total quality control and identified 10 benchmarks for controlling quality (1,11). Taguchi emphasized quality, robustness, and minimum variation in product design (12).
Quality Awards After the Japanese success with TQM, various agencies have taken initiatives to encourage and promote quality. Among them are the Deming Prize, the Malcolm Baldrige Award, and the European Quality Award. The Deming Prize is a Japanese award, instituted in 1951 by the Japanese Union of Scientists and Engineers (JUSE) for companies dedicated to quality. This prize has an extensive application process. The Malcolm Baldrige Award was created in the United States in 1987, named after a Reagan Administration’s Commerce Secretary. The purpose of this annual award is to raise US industry’s awareness of the significance of quality in the global market. The European Quality Award was created in 1988 by 14 European companies, which formed the European Foundation for Quality Management. This award is the European equivalent of the Malcolm Baldrige Award in the United States.
Quality Standards

Quality standards were developed before the popularity of TQM. The US Air Force in World War II recognized the need for quality standards and initiated quality-assurance programs that led to the military quality-assurance
standard MIL-Q-9858, and later revised as MIL-Q-9858A in 1963. This standard was adopted by NATO in 1968 (AQAP-1). Its European version (DEF STAN 05-21), created in 1970, was the basis of the Britain’s standards (BS 5750). Europe’s standard-setting body (European Committee for Standardization) commissioned a private organization [International Standards Organization (ISO)] to develop quality-assurance standards. ISO used the previously developed US and NATO standards to develop the ISO 9000 series (13). ISO 9000 consists of a series of quality-assurance guidelines for companies engaged in different phases and types of production of goods and services. ISO 9000 is a general guideline. ISO 9001 provides guidelines for companies engaged in design, development, production, installation, and servicing functions. ISO 9002 has guidelines for companies engaged in production and installation functions only. ISO 9003 is developed for companies engaged in final inspection and testing functions. ISO 9004 describes the elements of a quality-management system. ISO 9000-2 is a guideline for selected service industries, and ISO9000-3 provides guidelines for applying ISO 9001 to software companies (14). Various countries have adopted ISO 9000 standards and applied their own codes. An example of the British adoption of ISO 9000 is BS 5760; in the United States it is called ANSI/ASQC Q90; European Community knows it as EN 29000, Australia as AS 3900, and Japan as JIS Z 9900. Brazil, Denmark, France, Germany, Portugal, and Spain also have their own code names for ISO 9000 quality-management standards. In 1993, ISO established a committee to develop the ISO 14000 series for environmental management systems. In 1994, the US auto manufacturers (Chrysler, Ford, and General Motors) combined their company-wide quality management standards under the Quality Systems Requirements QS 9000 standard, which is based on the ISO 9000 series, especially on ISO 9001 guidelines. The difference between ISO 9000 and the US quality-management guidelines (such as Q90 or QS 9000) is that the ISO 9000 series is used for the certification process, which is required for dealing with European Common Market countries, while the US guidelines are voluntary standards of quality assurance. The ISO 9000 certificates are used as the mark of a company’s commitment to quality. However, there are concerns that the certification process does not deal with process inefficiencies and customer-related issues.
Components of TQM There is no single or uniform approach to implementing TQM in organizations. Quality pioneers have their own principles and points of emphasis. However, one can categorize and glean common themes from the vast array of available principles and recommendations. The components of TQM approach include: leadership, organizational structure, principles, methodologies, and metrics. Awards and standards discussed above form another component of TQM. These areas are discussed both in a general context and in the context of their application to quality information systems. Leadership. A culture of quality requires the creation of a shared vision to promote the constancy of purpose and the uniformity of direction in the organization’s activities. A vision is defined as the desired or ideal state of the unit. The leadership of the organization develops the vision through a participatory process, which includes the organization’s stakeholders, consisting of employees, stockholders, managers and, in some cases, even the representatives of internal and external customers. The vision of an information systems unit is derived from the organization’s vision and applies specifically to information technology. An example of IT vision is: “an integrated and reliable information technology that meets its customers’ requirements for timely, accurate, and on-demand information” (13). There are a number of related concepts that make the vision tangible and operational. These are mission, values and principles, goals, strategies, plans, and tactics. While vision describes what the desired state is, a mission describes how the organization should move toward its vision. Values and principles are the core beliefs under which the organization operates. Goals are defined in terms of measures (often numerical) that
Fig. 1. Traditional versus TQM structure.
lead the organization toward its vision. Strategies, plans, and tactics constitute road maps of actions leading to the vision. TQM requires strong leadership, which starts with developing a clear vision for the organization that is shared by its members. It has now evolved into the leadership system, which is defined as creating clear values that (1) respect stakeholder capabilities and improve performance, (2) build loyalty and teamwork for the pursuit of the shared vision, (3) support initiatives and avoids long chains of command, and (4) include mechanisms for the leader’s self-examination, receipt of feedback, and improvement (15). Organizational Structure. TQM requires a particular organizational structure. The hierarchy of the organization is more flat and upside-down, in that customers and workers who come in contact with customers are on the top of the organizational pyramid (Fig. 1). The managerial approach is coaching and supporting employees. Workers are empowered to make decisions. Cross-functional teams are the hallmark of quality organizations. The organizational culture is one of openness and communication. The performance of workers and managers is evaluated by their internal and external customers, and this evaluation forms the basis of their reward systems. In this scheme, workers are the managers’ internal customers. Managers are rewarded on the basis of their ability to provide support for workers to achieve their best. Problems in such organizations are considered opportunities for continuous improvement. A worker believed to be responsible for the problem should be involved in finding ways to ensure that the process is improved and that the problem will not occur again. Continuous Improvement. Continuous improvement has its origin in the Japanese concept of kaizen, which means “ongoing improvement involving everyone—top management, managers, and workers” (16). In the United States, the idea of the continuous improvement cycle was developed by Shewhart and popularized by Deming. The cycle has four components: plan, do, check, and act (Fig. 2). This cycle is continuously implemented in order to improve quality in the processes and outcomes. Applied to information systems, the concept of continuous improvement provides a third option to maintenance and innovation by creating new systems. Figure 3 shows the process of change in information systems. Benchmarking. Benchmarking is based on the concept of finding the best industry practices and applying them in the continuous quest for improvement. Benchmarking has a long history in China and Japan. The Japanese word dantosu (the best of the best) embodies the idea of benchmarking. In the United States, Juran in his book Management Breakthrough posed the question: “What is it that organizations do that gets results much better than ours?” The first known comprehensive benchmarking in the United States was carried out by Xerox in 1979 (17) and later by Motorola in early 1980s. Benchmarking is a part of the continuous-improvement process. It requires: (1) self-evaluation, (2) identification of weak spots, (3) definition of metrics, and (4) identification of processes, policies, and structures
Fig. 2. Shewhart–Deming cycle.
Fig. 3. Information systems changes.
of interest. These steps provide focus and purpose for benchmarking activities. Benchmarking requires the identification of benchmarking partner(s), planning, information collection, analysis, goal setting, and implementation. These steps are discussed in detail in (13). To select the appropriate benchmarking partner, one needs to select the appropriate type of benchmarking: internal, competitive, functional, or generic. In locating the benchmarking partner, one can use a number of resources such as: internal experts, internal functional areas, professional associations, journals, external experts, and consultants. There are a number of issues that should be addressed: obtaining the benchmarking partner’s consent, forming benchmarking teams, selecting methods of benchmarking (such observations, interviews, questionnaires, site visits, telephone surveys, and mail surveys), analyzing the results, communicating the recommendations, and implementing them. In successful cases of benchmarking, companies could continue the benchmarking partnership or make benchmarking an integral part of the continuous-improvement process. Benchmarking is particularly valuable for information systems. The speed of change in the technology and the relative paucity of information regarding the innovative use of information technology make benchmarking a requirement for creating quality information systems. Since business-process reengineering often requires the integration of information systems in the new process, benchmarking the use of information technology in business processes is extremely useful.
Reengineering. Continuous improvement emphasizes relatively small changes, while business-process reengineering focuses on sweeping changes at the corporation or functional level. Hammer and Champy (18) defined reengineering as “[t]he fundamental rethinking and radical design of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed.” Based on their observations of reengineering efforts in major corporations, they have found that reengineering has the following characteristics—focused on process, sweeping and extensive, rule-breaking, and reliant on the innovative use of information systems. The common features in business-process reengineering are: combining jobs, empowering employees as decision-makers, creating multitrack processes, placing the work in a natural and commonsense way, reducing costly controls and checks, reducing the number of externalcontact points, creating one contact point through a case manager, and using a combination of centralized and decentralized structures. Quality Information Systems Information technology is one of the most important tools an organization can use to facilitate continuous improvement, business-process reengineering, creativity, and innovation within an organization. One can use information technology in quality-management functions, such as making quality guidelines and processes accessible to all workers through company intranets, automated collection and analysis of quality-management data (in order to identify improvement opportunities in the Shewhart–Deming cycle), or computerized suggestion boxes for collecting and analyzing innovative ideas from workers. While information technology can contribute substantially to the organization’s TQM undertakings, using the principles of TQM to develop and operate information systems (IS) can substantially improve the organization’s IS. Systems built based on TQM are called quality information systems (QIS) (13). These systems have a customer focus, and the development process is based on the early determination of systems customers (such as internal and external, developers and users, direct users and indirect users, and people impacted by the system). The principles of zero defects and designing quality into the system are incorporated in these systems. The management of QIS is based on collecting quality and reliability data on the system and using them to set up early-warning signals (19). The operation of QIS includes recovery plans (rather than fire-fighting) and is based on continuous improvement cycles.
Data and Information Quality One of the important aspects of using information technology in TQM, as well as creating and operating QIS, is the quality of data and information used in these systems. Strong, Lee, and Wang (20) report that 50% to 80% of criminal records in the United States contain poor-quality data; and that low data quality has a social and economic cost in the billions of dollars. Redman (21) discusses far-reaching impacts of low data quality on operational, tactical, and strategic decisions within an organization. One approach for improving data quality is to view the creation, storage, and use of data as a manufacturing process (22). The application of the TQM principles to data and information manufacturing process is called total data quality management (TDQM). Wang (22) defines the TDQM’s continuous-improvement cycle as: define, measure, analyze, and improve, similar to the Shewhart-Deming cycle. Orr (23) suggests a systems-theory approach to data quality, in which customers and users of data are a significant part of the continuous-improvement feedback loop. In this approach, data quality is defined on the basis of the use of data, and is improved by users’ feedback. Although TDQM is relatively new, a number of researchers are working on developing theories and methodologies for dealing with data and information quality within the context of TDQM.
QUALITY CONTROL
7
Quality Tools In the early 1900s, quality efforts were concentrated mostly on controlling the end-products and, subsequently, on inspecting the quality of input materials. These early quality-control efforts relied heavily on statistical sampling techniques and on control charts for controlling the quality of inputs, machine operations, and outputs. With the TQM movement, quality concerns moved upstream from the end-product to design and requirements analysis. The philosophy of control management shifted from discovering errors after they are made and “fire-fighting,” to preventing errors from happening. Concepts of designing quality into the product, such as Taguchi methods and zero-defect concepts, became popular. Taguchi developed methods for designing quality into the system or product with the goal of minimizing any variation from the design targets (24). This design philosophy makes it possible to have the goal of zero defects in the final output. To this end, quality tools also evolved to include methods for collecting, organizing, and analyzing ideas, incorporating customer preferences into the design of products and systems, identifying root-causes of problems, and designing for zero defects. With the increased popularity of TQM and business-process reengineering, and their applications to various areas, the list of available tools is expanding (25). Here some of the original TQM tools are briefly reviewed, and quality information systems are used to provide a context for the description of these tools. Group Idea Generation Tools. The employee-empowerment and team approach in TQM requires tools and methods for generating ideas and consensus in groups. Early TQM tools for this purpose included questioning, brainstorming, mental mapping, and affinity diagram. Questioning stimulates group members to identify the nature of the outcome of the group’s activity and its internal and external customers. Examples of questions for developing a vision statement are: What is the most desirable outcome of our work? What is the value of the outcome for us and for others? Who are our external customers? Who are our internal customers? Brainstorming was developed by Osborne in the 1930s (26) and is used extensively by TQM teams to generate ideas. The philosophy in brainstorming is to generate a large set of ideas by a number of individuals in a short period of time (27). It has four phases: (1) idea generation, (2) categorization of generated ideas, (3) discussion, and (4) selection. The selection phase involves rating the ideas based on a number of criteria (determined by the team). Brainstorming requires a facilitator who conducts the meeting, has no stake in the outcome, and does not take part in the process. The meetings should be facilitated on the basis of principles that encourage participation and cross-fertilization of ideas (13). Mental mapping is a method of aiding the creative process and unblocking mental barriers. Its purpose is to use unorganized and sometimes seemingly illogical associations in order to tap mental creativity. Mental mapping starts with a core idea. One group member draws branches from the core, and other group members join in to generate a mental picture of all issues related to the core questions. 
Figure 4 shows an example of a partial mental map for the core question of "What is our information-systems vision?" The affinity diagram is another tool for collecting information (such as ideas, issues, proposals, and concepts) and organizing it in a creative fashion. The affinity diagram is based on the KJ Method, developed in the 1960s by Jiro Kawakita, a Japanese anthropologist. This tool is helpful when the problem is complex and has no apparent logical structure. The method has the following steps: (1) assemble the team, (2) pose the central question, (3) generate ideas, (4) group ideas, and (5) assign headers. Ideas are generated in complete silence on small cards. In grouping ideas, members group the cards based on the similarity of ideas, with no discussion. The grouping continues until nobody wants to move any card. The headers are assigned either from a card within a stack of grouped cards or by the suggestions of the members. The headers may then be put into a hierarchical structure.
Fig. 4. An example of a mental map.
Fig. 5. An example of a Pareto chart.
Hoshin Planning. Organization-wide policies, goals, objectives, strategies, and tactics collectively form a master plan for the organization. Hoshin kanri, or policy deployment, is the choice of focus areas for implementation. The identification and selection of hoshins is a team effort and an integral part of the planning process. The hoshins guide the choices in the continuous-improvement process and focus on the root causes of problems.

Tools for Problem Identification. A number of tools are used in problem identification for the continuous-improvement process, including the Pareto chart, the cause-and-effect diagram, and tree and hierarchy diagrams. A Pareto chart is a simple tool for identifying the value (normally in the form of relative frequency) of the item that has the highest contribution to the problem under study. It is a bar chart of relative frequencies, sorted in descending order of frequency values. The cumulative frequency values are also shown on the chart. Using a Pareto chart requires that one has already determined the metric for identifying the problem and the sources of problems. For example, in Fig. 5, sources of errors in information systems and their frequencies are identified. The bar with the highest frequency should be the focus of the continuous-improvement process. The cause-and-effect diagram, also known as the fishbone diagram, was first developed by Ishikawa and is used for determining the root causes of problems and their constraints. The diagram starts with a straight
Fig. 6. An example of a cause-and-effect diagram.
Fig. 7. Hierarchy and tree.
line, at the end of which is the problem, such as Errors in data in Fig. 6. The team members add sources of problems, such as people, equipment, software, procedures, and more specific details are added to the diagram. The tree and hierarchy diagrams are tools for organizing a sequence of ideas, tasks, or attributes, starting from the most general level, and gradually breaking down into more detail. They are tools for organizing complex structures into manageable and comprehensible forms. A tree diagram starts at the left-hand side of the page with the most abstract or general concept and moves to the right. A hierarchy for a similar structure starts from the top and moves down the page. Hierarchy could be more general by allowing a lower-level concept to have more than one parent in the upper level (and hence become a network), as shown in Fig. 7. Process Analysis Tools. There are a growing number of tools for process analysis, due to the popularity of business-process reengineering (25). However, two older tools have remained popular—data flow diagrams and flowcharts. A data flow diagram consists of: (1) data flows, (2) external and internal entities, (3) data storage, (4) processes or actions, and (5) labeled, directed arrows. Each component has its own symbol and is used to show the flow of data in and out of processes, data storage, and internal and external entities. Processes in a data flow diagram are called “actions” to avoid confusion with the business processes that are more general. A flowchart is used to show the details of an action or computation. It has symbols for: (1) action, (2) comparison or decision, (3) labeled, directed arrows, (4) start and end, and (5) connectors.
Fig. 8. House of quality.
Data flow diagrams and flowcharts differ in a number of ways: (1) A data flow diagram focuses on the flow of data and does not imply a time sequence, in that many flows may take place simultaneously or at different times. A flowchart documents a sequence of actions and implies a time sequence. (2) The data flow diagram does not show repetitions, conditions, and choices (such as if, while, and for structures), whereas a flowchart documents these structures. (3) The data flow diagram shows the sources and destinations of data, whereas the flowchart does not document the external and internal sources of data. (4) The data flow diagram shows the overall picture of a process with its organizational and environmental links, whereas the flowchart provides a detailed documentation for a given action or computation (13). Requirements Analysis and Design Tools. Trees and hierarchies, discussed above, are used also in requirements analysis and design in TQM. However, the major tool for connecting the requirements of customers to the design of products, systems, or services is quality function deployment (QFD). QFD consists of a series of interrelated houses of quality. The first house of quality captures customers’ requirements and connects them to the functional or technical requirements of the product or system. The subsequent houses of quality break down the requirements and translate them into general and then specific design components. The first house of quality is used for planning, and has the components shown in Fig. 8. The left side of the house contains the customers’ requirements and their relative rating of each requirement, determined by customers. This part of the house determines what is to be created. It may be structured in a tree or hierarchical form. The flat top part of the house contains the system requirement. (If used for product development, then “system” should be replaced by “product” in this discussion.) The top determines how the system should be created. The roof of the house shows the degree of interdependencies among various parts of the system. The main part of the house documents the extent of the relationship among customers’ requirements and systems requirements. The right side of the house compares the performance of major competitors in each category of customers’ requirements and rates them in each category on a scale (c). The lower part of the house shows the relative rating of the systems requirements, determined by multiplying the relationship values (in the main part of the house: A) by the customers’-requirements rating (on the left part of the house:
QUALITY CONTROL
11
c), and summed (s = c′A), where c′ is the transpose of c. The outcome of the first house of quality is a rated list of system requirements and their degree of interdependency. The next house of quality has on its left side the system requirements and their relative ratings (which are the outcomes of the first house of quality) and on its top the general design components. The outcome of the second house of quality is the list of general design components with their relative importance ratings. This becomes the input (left side) of the third house of quality, in which the detailed design components and their relative ratings are determined. This process continues until the required specificity for design or even implementation is reached. Using QFD connects the design of the system's components and their relative importance in the project with the customers' requirements. To create the hierarchy of the customers' requirements and to have customers assign relative weights to their requirements, one can use the analytic hierarchy process (AHP) approach as described in (13).

Other Quality Tools. Other quality-analysis tools include the interrelationship digraph, prioritization matrix, matrix diagram, process-decision program charts, and activity-network diagrams. A brief review of these tools can be found in Zahedi (13), and details are discussed in Brassard (28, 29). The origin of these quality tools is the work by a committee of Japan's Society for QC Technique Development, published in 1979 as Seven Quality Tools for Managers and Staff (30). Brassard (28, 29) brought these tools to the United States. Affinity diagrams, Pareto charts, cause-and-effect diagrams, tree and hierarchy diagrams, and data flow diagrams are also part of the seven quality tools.
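The rating computation of the first house of quality described above, s = c′A, is simply a vector-matrix product. The short Python sketch below illustrates it with invented numbers: three customer requirements rated by customers (c) and four system requirements related to them through A, using the commonly seen 9/3/1/0 relationship-strength scale as an assumed convention.

```python
# Sketch of the house-of-quality rating computation s = c'A described above.
# The ratings and the relationship matrix are invented purely for illustration.

customer_ratings = [5, 3, 1]           # c: importance of each customer requirement

# A: relationship strength between customer requirements (rows) and
# system requirements (columns), here on a 9/3/1/0 scale
A = [
    [9, 3, 0, 1],
    [3, 9, 1, 0],
    [0, 1, 9, 3],
]

# s = c'A: weighted importance of each system requirement
s = [sum(c * A[i][j] for i, c in enumerate(customer_ratings))
     for j in range(len(A[0]))]

print(s)   # [54, 43, 12, 8] -- ranks the system requirements for the next house
```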
Quality Metrics

In order to evaluate quality objectively, one needs to define metrics that formalize and quantify quality. The handbook on metrics published by the US Air Force (31) defines a metric as a combination of measures designed for depicting an attribute of a system or entity. The handbook lists the characteristics of a good quality metric: (1) meaningful to customers, (2) containing organizational goals, (3) simple, understandable, logical, and repeatable, (4) unambiguously defined, (5) capable of showing trends, (6) economical in data collection, (7) driving appropriate action, and (8) timely. Metrics could be categorized in two dimensions: organization-customers versus processes-results, thus generating four types of metrics: (1) organization-process focus, (2) customer-process focus, (3) organization-result focus, and (4) customer-result focus. Zahedi (13,19) discusses in detail design, reliability, implementation, and operations metrics for information systems, as well as team-management quality metrics.
Analysis of Quality Metrics

Statistical quality control (SQC) and control charts are the main methods for analyzing quality metrics. In SQC, the observed values of quality metrics, like most business data, have random elements. Therefore, the deviation of quality metrics from their target values could have two sources: (1) random or chance elements that have the same probability distribution for all observed values of the quality metric (common cause), and (2) a shift in performance, which causes the metric to deviate from its target value (special cause). Statistical analysis offers the capability to distinguish between the two causes and to identify the early signals for a change of conditions. Two types of statistics are commonly computed for summarizing data: (1) measures of central tendency (mean, median, or mode) and (2) measures of dispersion or variation (standard deviation, range, or mean-absolute deviation). For a given sample, the common test in SQC is to test the hypothesis that the metric's mean (or any other statistic) has deviated from its desired level. Control charts graphically show the SQC test. Control charts could be categorized into Shewhart type and non-Shewhart type. A Shewhart-type control chart has a central line that indicates the target value of the
metric, with two warning lines (one upper and one lower, with the central line in the middle), and two lines farther out (one upper and one lower) showing control limits. There are a number of Shewhart-type control charts, such as the x chart, c chart, np chart, p chart, u chart, x-bar chart, m chart, r chart, s chart, and more. Non-Shewhart charts require more complex methods of establishing control limits. Examples of this type are MOSUM (moving sum of sample statistics), EWMA (exponentially weighted moving average), CUSUM (cumulative sum), and modified Shewhart charts [see, e.g., Zahedi (13) for a brief description of each]. The purpose of control charts is to discover variations caused by special circumstances and to take appropriate actions. The data on quality metrics should be collected regularly, and the sample statistics (usually the mean) plotted on the control charts. Any regular pattern or movement outside the warning lines and control limits is a signal for activating the Shewhart–Deming cycle of continuous-improvement actions.
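As a rough illustration of how such chart lines might be computed, the sketch below builds x-bar chart limits from subgroup data and flags out-of-control samples. The simulated data, the use of 2-sigma warning and 3-sigma control limits, and the simple sigma estimate are assumptions for illustration; production charts normally use tabulated factors such as A2.

```python
import numpy as np

def xbar_chart(subgroups, target=None):
    """Compute center line, warning (2-sigma) and control (3-sigma) limits
    for subgroup means, and flag points outside the control limits."""
    means = subgroups.mean(axis=1)                # one mean per sample subgroup
    n = subgroups.shape[1]
    center = target if target is not None else means.mean()
    # Rough estimate of the standard deviation of the subgroup mean.
    sigma = subgroups.std(axis=1, ddof=1).mean() / np.sqrt(n)
    limits = {
        "center": center,
        "warning": (center - 2 * sigma, center + 2 * sigma),
        "control": (center - 3 * sigma, center + 3 * sigma),
    }
    out_of_control = np.where((means < limits["control"][0]) |
                              (means > limits["control"][1]))[0]
    return limits, out_of_control

# Illustrative data: 20 subgroups of 5 measurements of a quality metric.
rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=0.5, size=(20, 5))
limits, flagged = xbar_chart(data)
print(limits, flagged)
```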
BIBLIOGRAPHY

1. A. V. Feigenbaum Total Quality Control, 3rd ed. New York: McGraw-Hill, 1991.
2. P. B. Crosby Quality Without Tears: The Art of Hassle-Free Management, New York: McGraw-Hill, 1984.
3. J. M. Juran Juran on Quality by Design, New York: Free Press, 1992.
4. F. W. Taylor Scientific Management, New York: Harper and Brothers, 1947 (originally published in 1911).
5. F. J. Roethlisberger W. J. Dickson Management and the Worker, Cambridge, MA: Harvard University Press, 1939.
6. W. E. Deming Out of the Crisis, Cambridge, MA: Massachusetts Institute of Technology, 1986.
7. W. E. Deming The New Economics: For Industry, Government, Education, 2nd ed. Cambridge, MA: Massachusetts Institute of Technology Publication, 1994.
8. J. Woodall D. K. Rebuck F. Voehl Total Quality in Information Systems and Technology, Delray Beach, FL: St. Lucie Press, 1997.
9. J. M. Juran Juran on Quality by Design, New York: Free Press, 1992.
10. J. Byrne High priests and hucksters, Bus. Week, October 25, 1991, pp. 52–57.
11. A. V. Feigenbaum Total quality control, Harv. Bus. Rev., 34 (6): 93–101, 1956.
12. G. Taguchi Taguchi Methods, Tokyo, Japan: Amer. Supplier Inst., 1989.
13. F. Zahedi Quality Information Systems, Danvers, MA: Boyd & Fraser, 1995.
14. C. H. Schmauch ISO 9000 for Software Developers, Milwaukee, WI: ASQC Quality Press, 1994.
15. Malcolm Baldrige National Quality Award, 1998 Criteria for Performance Excellence, Milwaukee, WI: Amer. Soc. for Quality, 1998.
16. M. Imai Kaizen: The Key to Japan's Competitive Success, New York: McGraw-Hill, 1986.
17. R. C. Camp Benchmarking: The Search for Industry Best Practices That Leads to Superior Performance, Milwaukee, WI: Quality Press, 1989.
18. M. Hammer J. Champy Reengineering the Corporation, New York: Harper Business, 1993.
19. F. Zahedi Reliability metric for information systems based on customer requirements, Int. J. Qual. Reliabil., 14: 791–813, 1997.
20. D. M. Strong Y. W. Lee R. Y. Wang Data quality in context, Commun. ACM, 40 (5): 103–110, 1997.
21. T. G. Redman The impact of poor data quality on the typical enterprise, Commun. ACM, 41 (2): 79–82, 1998.
22. R. Y. Wang A product perspective on total data quality management, Commun. ACM, 41 (2): 58–65, 1998.
23. K. Orr Data quality and systems theory, Commun. ACM, 41 (2): 66–71, 1998.
24. G. Taguchi D. Clausing Robust quality, Harv. Bus. Rev., 68 (1): 65–75, 1990.
25. W. J. Kettinger J. T. C. Teng S. Guha Business process change: A study of methodologies, techniques, and tools, MIS Quarterly, 21 (1): 55–80, 1997.
26. A. F. Osborne Applied Imagination: The Principles and Problems of Creative Problem Solving, New York: Scribner's, 1953.
27. L. Jones R. McBride An Introduction to Team-Approach Problem Solving, Milwaukee, WI: ASQC Quarterly Press, 1990.
28. M. Brassard The Memory Jogger™, Methuen, MA: GOAL/QPC, 1984.
29. M. Brassard The Memory Jogger II™, Methuen, MA: GOAL/QPC, 1994.
30. Society for QC Technique Development, Seven Quality Tools for Managers and Staff, 1979.
31. U.S. Air Force, The Metrics Handbook, Washington, DC: Andrews AFB, 1991, HQ AFSC/FMC.
FATEMEH MARIAM ZAHEDI University of Wisconsin—Milwaukee
Wiley Encyclopedia of Electrical and Electronics Engineering System Requirements and Specifications Standard Article James D. Palmer1 1George Mason University, Fremont, CA Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7113 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (102K)
Abstract The sections in this article are Problems and Issues Concerning System Requirements and Specifications Development System Requirements and Specifications Processes Characteristics of System Requirements and Specifications That Influence The Process Tool Support For Management of System Requirements and Specifications Final Observations Keywords: client needs; user requirements; system definition; systems acquisition; requirements engineering; specification languages; knowledge acquisition; natural languages; risk management; prototypes
SYSTEM REQUIREMENTS AND SPECIFICATIONS

System requirements and specifications are essential, for without them there is no sure knowledge of the system to be built. They help us understand the relationships that exist within and across systems development, design, and implementation and are critical to the development process. They provide a means with which to validate and verify user needs, execute testing procedures, understand performance measures, and determine nonfunctional and functional character-
istics. Keys to understanding the relationships that exist among system requirements, design, construction, test, and implementation are to be found in these. Correct and accurate system requirements and specifications are critical to the delivery of a system that is on time and within budget. These statements are investigated to ascertain feasibility and practicality and to examine tradeoffs. After the feasibility and practicality of the desired system have been determined, the resulting statements are analyzed for errors, difficult or incomplete concepts are prototyped, and the final product is transformed to formal or semiformal specification languages. The emphasis on error detection recognizes that between 50 and 80% of the errors found in delivered systems can be traced to errors in the interpretation of requirements (1–3). System requirements and specifications are essential to the verification and validation that user needs are properly interpreted and to ensure that the outcomes are those intended. They provide visualization and understanding into the techniques necessary for system development and are used to validate the impact of changes, provide process control, and enable early risk management. Insights are provided to quality, consistency, completeness, impact analysis, system evolution, and process improvement. Traceability of requirements and specifications to their origin is equally needed. The value of correct requirements for a system are realized through the identification and resolution of risk, development of appropriate integration tests, and successful delivery of the final product (2). They provide the basis for the development of an audit trail for the project by establishing links within and across system entities, functions, behavior, performance, and the like.
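One minimal way to picture the audit trail mentioned above is as a pair of link tables between requirements and downstream artifacts, traversable in both directions. The identifiers and structure in the sketch below are purely illustrative assumptions.

```python
# Minimal sketch of forward/backward traceability links (illustrative IDs only).
from collections import defaultdict

forward = defaultdict(set)    # requirement -> downstream artifacts (design, code, tests)
backward = defaultdict(set)   # artifact -> originating requirements

def link(req_id, artifact_id):
    forward[req_id].add(artifact_id)
    backward[artifact_id].add(req_id)

link("REQ-12", "DES-3")
link("REQ-12", "TEST-7")
link("REQ-15", "DES-3")

# Impact of changing REQ-12: everything it traces forward to.
print(forward["REQ-12"])      # {'DES-3', 'TEST-7'}
# Origin of design element DES-3: every requirement that traces back to it.
print(backward["DES-3"])      # {'REQ-12', 'REQ-15'}
```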
PROBLEMS AND ISSUES CONCERNING SYSTEM REQUIREMENTS AND SPECIFICATIONS DEVELOPMENT Before we examine how to develop system requirements and specifications, let’s look at some of the problems. Although the necessity to provide adequate system requirements and specifications for development is widely accepted, there is considerable controversy as to the ultimate need, purpose, and cost of performing these activities. This arises primarily as a result of the need to acquire knowledge and the associated technical difficulties, the lack of automated approaches to implement the processes, and the concomitant time and effort that must be used to apply any of the presently available support tools. Difficulties generally revolve around elicitation and development of information from users and/or existing systems and lie at the interface between the system developer and the user. Transformation to the exact language of specification is another source of problems. There are also technical difficulties that relate to factors such as hardware, performance, capacity, and interfaces. Issues and concerns often emanate from the complexity of a project. Each discipline (e.g., environmental monitoring systems, automated fingerprint systems, sediment extraction systems, C3I systems, and health monitoring systems) has language, methods, and tools peculiar to the discipline. The same language constructs are not used across disciplines. This leads to potential errors in the development of requirements used to provide linkages within and across disciplines. Establishing threads across disciplines is difficult because of
language, method, and tool peculiarities. Generally, system requirements and specifications are stated in natural language, which means a high potential for ambiguities, conflicts, inconsistencies, and incompleteness. The types of issues and errors that are typically found in system requirements and specifications include:

1. Conflict within and across requirements
2. Lack of consistency of requirements individually and in clusters of similar statements
3. Incompleteness across requirements and their clusters
4. Inability to determine the ripple impact of adding new requirements to existing systems
5. Issues related to storage and retrieval
6. Degree of volatility in the requirements generation
7. Failure to provide traceability throughout the life of the project (including maintenance)
8. Technically and economically unfeasible requirements

The key to success is the resolution of the problems and issues at the time of system initiation, not after deficiencies are noted during design or testing.

SYSTEM REQUIREMENTS AND SPECIFICATIONS PROCESSES

The development of the system requirements and specifications should move according to a stated plan. This plan minimally must include the development of system objectives, refinement of these objectives, and development and refinement of system constraints and variables; express as concisely as possible a high-level functional model; formulate a design and implementation strategy; and document the outcomes of these activities as specifications. The plan must include the essential functions necessary for developing system requirements and specifications. These are:

1. Elicitation
2. Organization of materials to form a logical order
3. Analyses of statements for problems such as consistency, errors, omissions, risk, and completeness
4. Modeling difficult to understand concepts
5. Transforming informal natural language to formal or semiformal specification languages

System requirements and specifications development requires approaches, such as those shown in the process model of Fig. 1, that address the following activities:
Figure 1. Process model for system requirements and specification development (Elicitation, Organization, Assessment, Prototyping, Transformation).
1. Developing needs and desires of users
2. Determining potential problems or difficult factors (e.g., incompleteness, inconsistencies, redundancies, and ambiguities)
3. Encouraging alternative approaches to solve problems (e.g., discovery, prototyping, and simulation)
4. Providing for alternative courses of action at each step in development and appropriate methods to support the Course of Action (COA) (e.g., elicitation, assessment, and transformation)
5. Identifying personnel needed for each activity
6. Stimulating use of support tools (e.g., multimedia elicitation tools, decision support tools, classification techniques, transformation techniques, and prototyping)
7. Supporting development of high-quality products to meet time and cost constraints
8. Enabling management control

Domain knowledge plays an essential role in this process model. Analysts eliciting requirements information from domain experts, users, existing documentation, legacy systems, and current systems need to be domain experts or have a working knowledge of the domain for which the system is to be constructed. Domain knowledge might be represented as a taxonomy of concepts against which clusters of data may be compared to identify missing information, conflicts, or inconsistencies. For organization, assessment, and prototyping, the classification processes used require domain knowledge to identify potential problems and issues, solutions, and architectures and structures. Transformation to specifications is a knowledge-intensive activity. Finally, documentation of domain knowledge ensures that baselines are consistent with domain semantics.

Elicitation

The initial task is most formidable: elicitation. The needs that are elicited are based on the objectives of the user, the constraints and variables that have an impact on potential solutions that can be implemented, as well as the impact of nonfunctional requirements such as management activities, quality factors, operating environment, and other nonbehavioral aspects. Development constraints must be identified and recognized. These might include the need to accommodate a particular target machine for system operation, the timing necessary for response between interactions, the need for real-time interrupts, or the length of time available for system development. Experience in elicitation will assist in acquiring this knowledge. Activities performed during elicitation include the following: (1) collection of statements in a database, (2) formulation of system-level issues, and (3) interpretation of the requirements that result. There are semiautomated approaches to assist elicitation that include decision support systems, group decision support systems, questionnaires, interviews, prototypes, and capture of all information in a database. A typical elicitation activity can include convening groups of experts on all aspects of the new system, use of questionnaires and interviews with system personnel to ascertain needs and constraints, and use of the Delphi technique should the participants be geographically distributed. The elicitation activity may also employ presentations, large
group meetings, and small groups supported by decision support systems. Organization The organization of system requirements and specifications is essential to overall system development and supports a variety of activities. Statements that have been collected in a database may be grouped to reflect common characteristics, similarity, or other meaningful attributes. These clusters use classification and clustering techniques to group statements together based on characteristics defined by user and developer. Typical attributes may include factors such as risk, design constraints, real-time functions, testing, or performance. Classification has its basis in the utilization of categories, criteria, or taxonomies that are in turn based on specific characteristics (e.g., behavioral and nonbehavioral attributes), around which organization is to take place. Clusters that center on one or more degrees of functionality or one or more nonbehavioral aspects may be generated. Classification and clustering techniques also aid in identifying orthogonality or interdependence (i.e., those modules with the greatest degree of independence or the greatest degree of interdependence). Examples of classification include use of keywords to describe the contents of a document, categorize information, or catalog artifacts (4). This has two major advantages: data simplification and information discovery. Clustering objects into groups effectively summarizes information about each object increasing efficient organization and retrieval, reducing complexity and increasing understandability (5). Discovery leads to knowledge about possible structure and relationships of the objects (6). Clustering may result in a classification scheme (or taxonomy) that describes groups and the membership of each object in the cluster. The resulting classification scheme may be used to classify individual objects into appropriate clusters. Clustering structures can be categorized as dissection, partitioning, hierarchical, and clumping. Dissection refers to the process of dividing the objects into clusters that are based on proximity. Partitioning divides the objects into disjoint groups, where each object belongs to one and only one group. Hierarchical structures are generally depicted as tree diagrams, where each level of the tree partitions the objects into disjoint groups. Clumping permits an object to belong to more than one group, producing ‘‘clumps’’ or overlapping groups. Assessment When the organization is complete, review and analysis of omissions, commissions, constraints, or issues are initiated. This process may employ some of the same methods used in the organization function (e.g., classification and clustering) as well as techniques to determine risk, uncover errors of commission (e.g., ambiguities), and errors of omission (e.g., conflicts). These methods are used to assist in the detection and identification of correct statements, errors, issues, risk, and incomplete sets of functions. Potential problems, omissions, and commissions are discovered by searching the database using a variety of techniques or queries. For example, testing is crucial to the operation of a real-time life critical system, thus search criteria would center around all aspects of testing. These might include reliability, mean time to failure, timing sequences, and
similar features. Another major activity is the detection and correction of errors within and across requirements using classification and clustering techniques. Comparison lists of ambiguous terminology, risk factors, performance characteristics, and the like, make it possible to detect errors related to factors such as conflict, inconsistency, incompleteness, and ambiguity. Detection is enabled by using rules and/or predefined taxonomies (classification structures). Following detection, errors of these types are presented to the user for resolution. Finally, new statements are developed from the information prepared and documented during assessment. This includes definition of criteria for evaluation and assessment of any changes that may occur.
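A toy sketch of the comparison-list style of detection described above follows. The ambiguous-term list, the requirement texts, and the matching rule are invented for illustration; a real project would use the domain-specific taxonomies and rules discussed in the text.

```python
# Hypothetical comparison list of ambiguous terms.
AMBIGUOUS = ["adequate", "as appropriate", "etc.", "fast", "flexible", "user-friendly"]

requirements = {
    "R1": "The system shall respond to operator queries within 2 seconds.",
    "R2": "The interface shall be user-friendly and flexible.",
}

def flag_ambiguous(reqs, vocabulary):
    """Return, per requirement, the ambiguous terms it contains (if any)."""
    findings = {}
    for rid, text in reqs.items():
        hits = [term for term in vocabulary if term in text.lower()]
        if hits:
            findings[rid] = hits
    return findings

print(flag_ambiguous(requirements, AMBIGUOUS))   # {'R2': ['flexible', 'user-friendly']}
```

Flagged statements would then be presented to the user for resolution, as the assessment step describes.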
Prototyping

Candidates for prototyping are identified by the organization and assessment processes. Prototyping is an approach to increase the utility of system requirements and specifications information for user and developer. It is used to clarify and/or generate user needs and to transform these to system specifications. It assists in understanding the operating environment, in modeling potential operating-environment changes, and in understanding the intended and desired functionality in terms of what the system should accomplish. Prototyping is also used to assist developers in examining various structural approaches to design (see MODELING AND SIMULATION). Prototyping is important in determining elements of risk, as well as statements that are incomplete or difficult to understand. Many techniques for prototyping are in common use, including animation and simulation, which are supported by several automated tools. Prototyping is a common activity and is included in most development approaches.

Transformation to Specifications

System models are prepared from the preceding outputs and transformed to formal or semiformal languages of specification or design. Various approaches may be used to construct these models. Characteristics of natural languages hold the keys to these approaches (i.e., syntactic and semantic structure of the language as the basis of linguistic approaches) (7). Several automated approaches to transformation use either syntactic and/or semantic information. Generally they require statements presented in a form amenable to the particular technique. Most involve using databases, term frequency identification, lexical affinity identification, and some application of semantics. Term-frequency-based techniques have been used for many years to develop key words to describe book and article content and to develop abstracts (8). Statistical methods have been used to analyze text based on the frequency distribution of terms in the text. A primary concept is that the importance of a term is proportional to the frequency of occurrence. A method used in conjunction with term frequency is lexical affinity to identify concepts within a text. Generally, the concept is used to identify pairs of terms such as verb-noun, adjective-noun, and adverb-verb pairs to provide semantic information. Together these provide specific detailed specifications based on entity-action pairs (i.e., the object acted upon by a particular function).

Finite state machines (FSM), such as state transition diagrams (STD), are used to model system dynamic behavior (9). A key in developing FSMs is the identification of dynamic behavior concepts, such as events, conditions, actions, transitions, and states. A pattern-matching and knowledge-based approach that uses both syntactic and semantic knowledge to process statements may be used. A primary feature of this approach is to establish correspondence, using a semantic scheme, between linguistic patterns and system dynamic behavior concepts, such as events, conditions, actions, transitions, and states. A behavior concept classification structure may be used to identify concepts, and a parsing grammar and associated pattern rules and mapping rules to provide automated support to identification of dynamic behavior concepts.

Management as Part of the Process

System requirements and specifications are generally grouped in two categories: functional and nonfunctional. Functional requirements represent the way the internal system components interact with their environment; they intend to present a precise set of properties, or operational needs, that the system must satisfy. Nonfunctional requirements result from restrictions or constraints that are placed on the types of solution paths that may be considered for system design; they include quality factors, management processes, cost, and development tools. System requirements and specifications stipulate functional requirements and performance objectives for the system. Management specifications include:
1. Operating environment concerns
2. Overall design concept or objectives
3. Trustworthiness of requirements, including quality factors
4. Maintenance concerns, including the need for system evolution over time
5. Economic resources and time allocation

A systems management plan, or set of systems management needs, is developed and incorporated as part of the process. Thus, there are two components: systems management and technical or functional system requirements and specifications. Technical or functional aspects are analyzed to produce exact specifications, whereas systems management needs are analyzed to produce explicit management strategies for system development. The activities to be carried on include:

1. Evaluation of system functional requirements and specifications for completeness and feasibility
2. Evaluation of the system-level requirements and specifications for compatibility with other systems operational in the user environment
3. Exploration of the user environment, including existing systems, to determine how the new system might be best deployed
4. Identification of other organizations and units that have interfaces with the new system
5. Evaluation of user-stated available resources, including funding and time available
6. Development of a systems management strategy to incorporate these considerations

Major objectives to be met by the systems management plan include definition and scope of effort, specification of required resources, development of specific schedules, and cost estimates. This information, in conjunction with technical information, is used by the user for a go/no go decision relative to continuing the development effort. This decision is made on the basis of (1) technical requirements feasibility, (2) system quality or trustworthiness factors, (3) system maintenance, (4) evolution of products over time, and (5) systems management needs.

CHARACTERISTICS OF SYSTEM REQUIREMENTS AND SPECIFICATIONS THAT INFLUENCE THE PROCESS

There is a set of characteristics most important to the user. The qualities in this list distinguish those system requirements and specifications that will produce the desired system. These characteristics, which are user-centered and straightforward, are summarized in Table 1.

Table 1. System Characteristics from a User's Perspective
Realizable: User needs achieved in delivered system
Accurate: Reflects user needs and desires
Affordable: System realization is possible within available resources
On time: System completion and operation within timeframe determined

The qualities most important to the developer relate to correctness and realizability. These are of critical importance in determining whether system requirements and specifications represent an accurate representation of need (1,10,11). Such a set reduces the possibility of errors, and therefore the risk of misinterpretation during later activities of the life cycle. These qualities are summarized in Table 2.

Table 2. System Characteristics from a Developer's Perspective
Complete: Everything system is required to do
Consistent: No conflicts
Correct: Accurate representation of user need
Feasible: Achievable and within scope of project resources
Maintainable: Changes achieved easily, completely, and consistently
Precise: Stated clearly and specifically
Testable: Test cases developed for each function
Traceable: Origin and responsibility is clear
Unambiguous: Only one interpretation
Understandable: Comprehensible to user and developer
Validatable: Authenticated at project completion
Verifiable: Confirmed at the end of each development activity

Errors introduce the potential for multiple interpretations, which may cause disagreements between users and developers and may result in costly rework, lawsuits, or unproductive systems. Risk management must be initiated as a necessary component to produce a quality system (see DECISION THEORY). Brooks (12) feels that it is extremely difficult for users to articulate "completely, precisely and correctly" an accurate set of system requirements and specifications without first iterating through versions of the system. Different versions allow users to "visualize" how the system satisfies their needs and helps to "stimulate" unstated needs. Developers and users often view system issues from very different perspectives. Errors occur because the user may not clearly understand system needs and/or may use imprecise or ambiguous terms to describe these needs. Developers may lack the necessary communication skills needed to elicit system needs. Developers may not be acquainted with the domain and are unable to determine whether system requirements and specifications reflect system needs. Users and developers may speak different languages, lacking a common ground to communicate. The language of the user is usually specific to the domain, whereas the language of the developer is based on the tech-
nology used to attempt to solve user needs. Users may not be able to visualize how a system will satisfy their needs (12). Developers may not be able to represent the system via paper-based requirements in a form that users can understand and that is the result of a transformation process that may not accurately record the intent of the user (1). This disparity in understanding must be bridged to have a successful transfer of user needs to requirements specifications. TOOL SUPPORT FOR MANAGEMENT OF SYSTEM REQUIREMENTS AND SPECIFICATIONS Management activities for system requirements and specifications apply to the entire process depicted in Fig. 1, using a combination of manual, semiautomated, and automated techniques. Essential elements of successful management provide methods for the usual management functions as well as for error detection, risk management, and change control. These are provided, in part, by currently available computer assisted software (or system) engineering (CASE) tools, which include the ability to link requirements forward to designs, code, test, and implementation and backward from any of these activities to system requirements and specifications. Techniques currently in use establish, maintain, and provide assistance to development beginning with elicitation and continuing through to transformation. This assistance is essential for large complex systems because the sheer number of statements that must be elicited, organized, analyzed, and transformed may run into the several thousands. Contemporary Requirements Practices The development of system requirements and specifications has suffered from lack of formal or standard processes supported by appropriate automated tools. The most commonly used approach has been that of entering requirements information in a database and using the database capabilities to organize and manage requirements. An organized approach to the activities noted in Fig. 1 includes the following activities: 1. Formulate system-level concepts and determine requirements issues.
1.1. Users identify needs, constraints, and variables, including budget and any operational and legal requirements. With developers, users determine what the system requirements and specifications should be, how they should be stated, and how they are to be derived.
1.2. Identify objectives of user groups and ways to determine how system-level objectives can be met.
1.3. Identify specifications that are affected by existing systems and determine the degree to which the existing system may be retained.
2. Organize and analyze issues and select the approach.
2.1. Organize elicited information and review and analyze any constraint or issues related to system-level requirements and specifications to include technical, operational, and economic aspects.
2.2. Define criteria for evaluation and assessment of requirements.
3. Interpret information gathered previously in 2.
3.1. Assess the requirements statements for potential issues and errors and for risk.
3.2. Develop a validation plan for evaluation of the delivered product.
3.3. Review performance demands.
3.4. Analyze cost and benefits.
4. Develop prototypes as appropriate.
4.1. Identify candidates for prototyping.
4.2. Develop and operate prototypes and interact with user to ascertain whether a proper solution has been developed.
4.3. Archive the results in the requirements database.
5. Transform statements to formal or semiformal specification languages.
5.1. Maintain a database of the initially transformed requirements. Use this basis, version 0, as the database for system development throughout the life of the project.
5.2. Maintain subsequent versions in a database.
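Steps 5.1 and 5.2 amount to keeping a version-0 baseline plus subsequent versions of the requirements database. The sketch below shows one minimal way to do that; the class name, the change format, and the example requirements are assumptions for illustration only.

```python
import copy

class RequirementsDB:
    def __init__(self, baseline):
        self.versions = [copy.deepcopy(baseline)]    # version 0 = transformed baseline

    def commit(self, changes):
        """Apply {req_id: text or None} to the latest version; None deletes.
        Returns the new version number."""
        new = copy.deepcopy(self.versions[-1])
        for rid, text in changes.items():
            if text is None:
                new.pop(rid, None)
            else:
                new[rid] = text
        self.versions.append(new)
        return len(self.versions) - 1

    def diff(self, old, new):
        """Requirement ids added, removed, or modified between two versions."""
        a, b = self.versions[old], self.versions[new]
        return {
            "added": sorted(b.keys() - a.keys()),
            "removed": sorted(a.keys() - b.keys()),
            "modified": sorted(k for k in a.keys() & b.keys() if a[k] != b[k]),
        }

db = RequirementsDB({"R1": "Log every transaction.", "R2": "Respond within 2 s."})
v1 = db.commit({"R2": "Respond within 1 s.", "R3": "Archive logs monthly."})
print(db.diff(0, v1))   # {'added': ['R3'], 'removed': [], 'modified': ['R2']}
```

A diff of this kind supports the change notification and impact tracing described in the next paragraph.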
Continuous change in the system is common as needs are added, modified, and deleted, and management and maintenance of a basis set becomes essential. As new needs are added or existing ones are updated, deleted, or modified, the process continues to provide analysis to ensure that each change is properly included in the system development process and that new problems, if introduced, are resolved. This provides the major verification and validation procedure to ensure that user needs are met. Change notification is traced to determine the impact of such activities on cost, schedule, and feasibility of system design and implementation and tests that must be conducted. Tool Characteristics A wide variety of semiautomated and automated CASE tools assist various activities involved in the development and analysis of requirements. The tools range from those that use a variety of methods to those that are single purpose only. Several tools that represent a number of approaches to re-
quirements development are examined in the next section. No single tool has captured the market, nor is any deemed to be up to the general task of requirements development. The various approaches that are used in these tools have many common characteristics that include:

1. A means for information analysis
2. An approach for functional representation
3. An approach for nonfunctional representation
4. A definition of interfaces
5. A mechanism for partitioning
6. Support for abstraction
7. A database for requirements statements
8. Representation of physical and logical views
There are common tool characteristics that are deemed to be minimally necessary to provide support for management. The tool(s) must be well understood by and be responsive to users and match the characteristics of the development environment used by the developers. Tools must accept and use the data that are supplied in the form provided. In addition, the tool(s) must be flexible and capable of operating in an automated assistance mode to support various activities and services such as active and passive data checking; batch as well as on-line processing; addition, deletion, and modification of a requirement; customization to specific domain applications; dynamic database structure for change management; and a tailorable user interface. Management tools for this process will never be fully automated because human decision making is essential to the establishment of classification schema and system architecture designation. Human interaction and decision making is both desirable and necessary to maximize the interaction of user/developer in development of the project. Typical of the currently available automated (or semiautomated) assistance approaches to requirements management are tools that provide support through a variety of syntactic language analyses that include hypertext linking, syntactical similarity coefficients, preselected or predefined verbs and nouns, or combinations of these. In hypertext linking, the hotword or word/phrase to be linked to other requirements is manually identified and entered into the hypertext tool. Links are automatically made and maintained by the tool to provide forward and reverse linkages to the words or phrases selected. Use of syntactic similarity coefficients ascertains whether or not a predefined number of words of a given statement are found in another statement. When the value of similarity is above a predefined threshold, the two statements in question are said to be similar. Using preselected or predefined objects, nouns, and verbs permits the user to describe information in a context-free manner without affecting the ultimate application. There are problems with each of these approaches. Hypertext linking finds the search text without regard to the placement in the text and without regard to the way in which the words are used. Syntactic similarity coefficient is like hypertext linking in that it does not pay attention to the meaning and context of the requirement. Using preselected or predefined objects provides access only to those statements so identified without searching for a meaning other than the predefined designation.
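A minimal sketch of a syntactic similarity coefficient of the kind described above follows. The tokenization, the particular coefficient, and the 0.6 threshold are illustrative assumptions, not the definition used by any specific tool.

```python
import re

def similarity(s1, s2):
    """Fraction of the smaller statement's words that also appear in the other
    statement: one simple form of a syntactic similarity coefficient."""
    w1 = set(re.findall(r"[a-z]+", s1.lower()))
    w2 = set(re.findall(r"[a-z]+", s2.lower()))
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / min(len(w1), len(w2))

a = "The system shall log every operator transaction."
b = "Every operator transaction shall be logged by the system."
print(similarity(a, b))    # ~0.86; above a threshold such as 0.6 the two
                           # statements would be reported as similar
```

As the text notes, a coefficient like this ignores word order, meaning, and context, which is its main limitation.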
For commercially available requirements tools, data must be input manually, and manually developed information must be used to generate relationships across requirements. This is followed by automated management of information after input is made to the tool database. At present, no standards are available to support tools for requirements management. This has led to the development and use of a large number of commercial tools, each with differing methods, as well as proprietary tools developed by certain industries because it is considered to be a competitive advantage for most large complex projects. One common aspect for all tools is the manual development of architectural perspectives and classification schemes. CASE Tools for the Requirements Process Some commercially available tools that use a single method within a single phase have been developed for requirements management information, whereas others have been developed specifically to link requirements to other activities within the development life cycle. SADT, a product of Softech, Inc., provides assistance through the use of activity diagrams to accomplish system definition, requirements analysis, and design (13). General-purpose system analysis tools also are used for requirements management. Some of the more robust of these tool sets include Requirements Driven Design (RDD100) by Ascent Logic (14), which is used to document system conceptual models, and Foresight (15) which is used to maintain a data dictionary and document system simulation. Other tools and techniques that support requirements management include Software Requirements Methodology (SREM) (16); and ARTS, a database management system for requirements (17). Yet another method used by commercial tool vendors is the hypertext approach. In this approach, keywords or phrases are identified and linked by hypertext throughout the document or set of documents. An example of a tool that uses this approach is Document Director (18). There are also tools that are intended for the explicit purpose of requirements tracing; however, they also provide for requirements management. These tools link information from multiple disciplines and phases and include Requirements Traceability Manager (RTM) (Marconi Corporation) (19), SLATE (TD Technologies) (20), and DOORS (Zycad Corporation) (21). These tools use an entity-relation-attribute-like schema to capture information in a system database, either relational or object-oriented, to enable formation of queries about traceable entities, and to generate reports. RTM uses a relational database structure to capture information and provide management, whereas DOORS provides an object-oriented database for management of information. SLATE follows a multiuser, user/server, object-oriented approach that provides dynamic representation of the system as it evolves. Several other CASE tools are in various stages of development. They will provide support to the requirements process, even though they have not seen wide-spread application beyond the group that did the original research and investigation. These include the following efforts: (1) the work at Southern California University by Johnson et al. (21) on a tool named Acquisition of Requirements and Incremental Evolution of Specifications (ARIES). This tool supports requirements analysis through evaluation and codification of the results in formal specifications. (2) Mays et al. (22) of IBM
have developed a requirements planning process described as the Planning and Design Methodology (PDM) that includes requirements acquisition, problem definition, development of external functional descriptions to address the problems, and provision for system and product designs from the descriptions. (3) Rich and Waters (23), in work at MIT, describe the development of the Programmer's Apprentice, a knowledge-based approach to development. The intent is to take a set of disorganized imprecise statements and provide a coherent requirements representation. (4) Faulk et al. (24) at the Software Productivity Consortium have developed an approach they call the Consortium Requirements Engineering method (CoRE). In this method a coherent approach is taken for specifying real-time requirements information. (5) Kaindl (25) has produced a hypertext tool for semiformal representation of natural languages. The tool is named Requirements Engineering through Hypertext (RETH). (6) Palmer and Evans (26) at George Mason University have developed and applied a syntactic and semantic analyzer they call the Advanced Integrated Requirements Engineering System (AIRES). In this approach, natural language statements are entered into a database and automatically analyzed for similarities, redundancies, inconsistencies, ambiguities, and the like. The output is a database of organized requirements. These represent the diverse activities being undertaken in an attempt to provide an automated CASE tool for requirements engineering and management.

FINAL OBSERVATIONS

The future of system requirements and specifications support lies in the development of the capability to deal directly with requirements in natural language (the language of choice of most users), the ability to provide automated assistance to allocation of requirements to various architectural and classification systems, and the management of these activities to ferret out errors, issues, and risks. The following areas presently are being addressed through ongoing research and development programs in both industry and universities:

• Automated allocation of entities to architectures and classifications
• Requirements management that is independent of methods used to develop architectures and classifications
• Derivation of explicit attributes that are consistent from the highest-level system requirements to the lowest levels of decomposition

Because of the very nature of this area, which is a human-intensive activity, the process will never be a fully automated one. Just the elicitation activity itself is sufficiently human-intensive to justify both users and developers spending many hours to ensure that the system to be constructed is properly represented by the requirements. The other activities require less human interaction; however, this interaction is necessary to ensure that we do the best we can do for the product. From origination to final product, system development is a difficult, arduous, and manually intensive task at the present time. Advances in technology should provide some relief to assist in automating allocation and classification procedures and generally to provide more assistance to the user and
developer. However, the manual aspects of examining each of the needs for major large systems are not likely to be replaced by automated processes soon. This is desirable because it is truly necessary for users and developers to exercise human judgment on each and every need to ascertain whether it is the correct one for the system desired and whether it is correctly stated to avoid interpretation problems.

BIBLIOGRAPHY

1. J. D. Palmer and N. A. Fields, An integrated environment for software requirements engineering, Software, 80–85, May 1992.
2. A. P. Sage and J. D. Palmer, Software Systems Engineering, New York: Wiley, 1990.
3. B. W. Boehm, Improving software productivity, Computer, 20 (9): 43–57, 1987.
4. A. D. Gordon, Classification, New York: Chapman and Hall, 1981.
5. J. D. Palmer and Y. Liang, Indexing and clustering of software requirements specifications, Inf. Decision Technol., 18: 283–299, 1992.
6. M. R. Anderberg, Clustering Analysis for Application, New York: Academic Press, 1973.
7. J. D. Palmer and S. Park, Automated support to system modeling from informal software requirements, Sixth Int. Conf. Software Eng. Knowledge Eng., Latvia, June 20–23, 1994.
8. G. Salton and M. J. McGill, Automatic Information Organization and Retrieval, New York: McGraw-Hill, 1983.
9. IEEE Software Engineering Standards, 1987.
10. H. Gomaa, A software design method for real-time systems, Commun. ACM, 27 (9): 657–688, 1984.
11. A. M. Davis, Software Requirements: Objects, Functions, and States, Englewood Cliffs, NJ: Prentice-Hall, 1993.
12. F. P. Brooks, Jr., No silver bullet: Essence and accidents in software engineering, Computer, 20 (4): 10–19, 1987.
13. D. T. Ross and K. E. Shoman, Jr., Structured analysis for requirements definition, IEEE Trans. Softw. Eng., SE-3 (1): 69–84, 1977.
14. RDD-100—Release Notes Release 3.0.2.1, October, 1992, Requirements Driven Design, Ascent Logic Corporation, San Jose, CA, 1992.
15. M. D. Vertal, Extending IDEF: Improving complex systems with executable modeling, 1994 Annu. Conf. Bus. Re-eng., IDEF Users Group, Richmond, VA, May 1994.
16. M. W. Alford, SREM at the age of eight: The distributed computing design system, Computer, 18 (4): 36–46, 1985.
17. R. F. Flynn and M. Dorfman, The automated requirements traceability system (ARTS), AIAA Third Computers in Aerospace Conf., AIAA, pp. 418–428, 1981.
18. Document Director—The Requirements Tool, B.G. Jackson Associates, Houston, TX, 1989.
19. J. Nallon, Implementation of NSWC requirements traceability models, CSESAW 94, White Oak, MD, NSWCDD/MP-94/122, pp. 15–22, July 19–20, 1994.
20. N. Rundley and W. D. Miller, DOORS to the digitize battlefield: Managing requirements discovery and traceability, CSESAW 94, White Oak, MD, NSWCDD/MP-94/122, pp. 23–28, July 19–20, 1994.
21. W. L. Johnson, M. Feather, and D. Harris, Representing and presenting requirements knowledge, IEEE Trans. Softw. Eng., SE-18 (10), 853–869, 1992.
22. R. G. Mays et al., PDM: A requirements methodology for software system enhancements, IBM Systems J., 24 (2): 134–149, 1985.
23. C. Rich and R. Waters, The programmers apprentice: A research overview, Computer, 21 (11): 10–25, 1988.
24. S. Faulk et al., The CoRE method for real-time requirements, Software, 9 (5): 22–33, 1992.
25. H. Kaindl, The missing link in requirements engineering, ACM SIGSOFT Softw. Eng. Notes, 18 (2): 30–39, 1993.
26. J. D. Palmer and R. P. Evans, An integrated semantic and syntactic framework for requirements traceability: Experience with system level requirements for a large complex multisegment project, CSESAW 94, White Oak, MD, NSWCDD/MP-94/122, pp. 9–14, July 19–20, 1994.

JAMES D. PALMER
George Mason University
SYSTEMS. See BUSINESS DATA PROCESSING.
Wiley Encyclopedia of Electrical and Electronics Engineering Systems Analysis Standard Article James M. Tien1 1Rensselaer Polytechnic Institute, Troy, NY Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7112 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (160K)
Abstract The sections in this article are Evaluation Approach Evaluation Modeling Observation
SYSTEMS ANALYSIS

Does the system work? Is it worth the cost? Can and should it be implemented elsewhere? It is the reputed purpose of evaluation to provide answers to these and related questions. The need for conducting evaluations becomes more critical as systems or programs become more complex and more costly and, concomitantly, as the tax base or resources for their funding remain fixed or decrease. Unfortunately, system or program evaluation has not lived up to expectations (1). The field of evaluation is littered with efforts that do not adequately address the important issues or objectives, that do not employ valid controls for comparison purposes, that rely on inadequate measures or include expensive collections of data on measures that are in fact never used in the evaluation, that rely on inappropriate measurement methods, or that employ inadequate analytic techniques. Most, if not all, of the above-cited problems could be mitigated by developing, at the beginning of an evaluation effort, a valid and comprehensive evaluation design. Although there is no stock evaluation design that can be taken off the shelf and implemented without revision, there should be an approach or process by which such designs can be developed. Indeed, Tien (2) outlines a systems approach—that is at once purposeful and systematic—for developing valid and comprehensive evaluation designs. The approach was first proposed by Tien (3) and has since been successfully employed in a number of evaluation efforts [see, e.g., Colton et al. (4), Tien and Cahn (5), and Tien and Rich (6)]. The approach is outlined in the next section, followed first by an illustration of the importance of evaluation modeling, and then by an observation that what is also needed is a continuous layered approach to the monitoring, diagnosis, and improvement of systems that could complement the broader system evaluations that, by necessity, are undertaken on an intermittent and as-needed basis.

EVALUATION APPROACH

The evaluation approach is based on a dynamic roll-back framework that consists of three steps leading up to a valid and comprehensive evaluation design. The roll-back aspect of the framework is reflected in the ordered sequence of steps. The sequence rolls back in time from (1) a projected look at
the range of program characteristics (i.e., from its rationale through its operation and anticipated findings); to (2) a prospective consideration of the threats (i.e., programs and pitfalls) to the validity of the final evaluation; and to (3) a more immediate identification of the evaluation design elements. The logic of the sequence of steps should be noted; that is, the anticipated program characteristics identify the possible threats to validity, which in turn point to the evaluation design elements that are necessary to mitigate, if not to eliminate, these threats. The three-step sequence can also be stated in terms of two sets of links that relate, respectively, an anticipated set of program characteristics to an intermediate set of threats to validity to a final set of design elements. Although some of the links between program characteristics and threats to validity are obvious (e.g., a concurrent program may cause an extraneous event threat to internal validity), an exhaustive listing of such links—for purposes of, say, a handbook—will require a significant amount of analysis of past and ongoing evaluations. Similarly, the second set of links between threats to validity and design elements will also require a significant amount of analysis. Both sets of links are briefly considered herein. The ‘‘dynamic’’ aspect of the framework refers to its nonstationary character; that is, the components of the framework must be updated constantly, throughout the entire development and implementation phases of the evaluation design. In this manner, the design elements can be refined, if necessary, to account for any new threats to validity that may be caused by previously unidentified program characteristics. In sum, the dynamic roll-back framework is systems oriented; it represents a purposeful and systematic process by which valid and comprehensive evaluation designs can be developed. Each of the three steps in the design framework is elaborated on in the next three subsections, respectively. Program Characteristics In general, the characteristics of a program can be determined by seeking responses to the following questions: What is the program rationale? Who has program responsibility? What is the nature of program funding? What is the content of the program plan? What are the program constraints? What is the nature of program implementation? What is the nature of program operation? Are there any other concurrent programs? What are the anticipated evaluation findings? Again, it should be noted that the purpose of understanding the program characteristics is to identify the resultant problems or pitfalls that may arise to threaten the validity of the final evaluation. The possible links between program characteristics and threats to validity are considered in the next subsection, following a definition of the threats to validity. Threats to Validity After more than three decades, the classic monograph by Campbell and Stanley (7) is still the basis for much of the ongoing discussion of threats to validity. However, their original 12 threats have been expanded by Tien (3) to include 8 additional threats. The 20 threats to validity can be grouped into the following five categories. 1. Internal validity refers to the extent that the statistical association of an intervention and measured impact can
reasonably be considered a causal relationship. This category includes the following 9 threats: (1) extraneous events, (2) temporal maturation, (3) design instability, (4) pretest experience, (5) instrumentation changes, (6) regression artifacts, (7) differential selection, (8) differential loss, and (9) selection-related interaction.
2. External validity refers to the extent that the causal relationship can be generalized to different populations, settings and times. This category includes the following 4 threats: (10) pretest intervention interaction, (11) selection-intervention interaction, (12) test-setting sensitivity, and (13) multiple-intervention interference.
3. Construct validity refers to the extent that the causal relationship can be generalized to different interventions, impact measures, and measurements. This category includes the following 2 threats: (14) intervention sensitivity, and (15) measures sensitivity.
4. Statistical conclusion validity refers to the extent that an intervention and a measured impact can be statistically associated: error could be either a false association (i.e., Type I error) or a false nonassociation (i.e., Type II error). This category includes the following 2 threats: (16) extraneous sources of error, and (17) intervention integrity.
5. Conduct conclusion validity refers to the extent that an intervention and its associated evaluation can be completely and successfully conducted. This category includes the following 3 threats: (18) design complexity, (19) political infeasibility, and (20) economic infeasibility.
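Because the 20 threats form a fixed taxonomy, they can be carried as a simple checklist data structure during design reviews. The sketch below is only one illustrative way of doing so; the note format and helper function are assumptions, not part of the framework itself.

```python
# The 20 threats to validity, grouped by category, used as a review checklist.
THREATS = {
    "internal": ["extraneous events", "temporal maturation", "design instability",
                 "pretest experience", "instrumentation changes", "regression artifacts",
                 "differential selection", "differential loss",
                 "selection-related interaction"],
    "external": ["pretest intervention interaction", "selection-intervention interaction",
                 "test-setting sensitivity", "multiple-intervention interference"],
    "construct": ["intervention sensitivity", "measures sensitivity"],
    "statistical conclusion": ["extraneous sources of error", "intervention integrity"],
    "conduct conclusion": ["design complexity", "political infeasibility",
                           "economic infeasibility"],
}

def unaddressed(design_notes):
    """Threats for which the evaluation design notes record no mitigation."""
    return {cat: [t for t in ts if t not in design_notes]
            for cat, ts in THREATS.items()}

notes = {"extraneous events": "control group drawn from adjacent precinct"}
print(sum(len(v) for v in unaddressed(notes).values()))   # 19 threats still open
```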
Although the 20 threats to validity are, for the most part, self-explanatory, it is helpful to highlight three aspects. First, the threats to external and construct validities are threats to the generalizability of the observed impacts. Generalization involves the science of induction, which causes a number of problems that are, according to Campbell and Stanley (7, p. 17), painful because of a recurrent reluctance to accept Hume’s truism that induction or generalization is never fully justified logically. Whereas the problems of internal validity are solvable within the limits of the logic of probability and statistics, the problems of external validity are not logically solvable in any neat, conclusive way. Generalization always turns out to involve extrapolation into a realm not represented in one’s sample. Such extrapolation is made by assuming one knows the relevant laws.
Although generalization is difficult to undertake, it is a fundamental aspect of social program evaluation. While the classical sciences (i.e., physics, chemistry, biology, etc.) emphasize repeatability in their experiments, the social sciences emphasize representativeness in their experiments, thus facilitating extrapolations or generalizations. Second, it can be seen that the threats to validity identified above are overlapping in some areas and conflicting in other areas. For example, seasonal effects could be identified either as extraneous events or a result of temporal maturation. Additionally, factors that mitigate threats to conduct conclusion validity would most likely be in conflict with those that mitigate the other threats to validity. It is, however, essential that the threats to conduct conclusion validity be borne in mind
when developing an evaluation design; the field of evaluation is littered with studies that were not concluded because of the design’s complexity or because of the political and economic infeasibilities that were initially overlooked. Third, the threats to validity can be regarded as plausible rival hypotheses or explanations of the observed impacts of a program. That is, the assumed causal relationships (i.e., test hypotheses) may be threatened by these rival explanations. Sometimes the threats may detract from the program’s observed impacts. The key objective of an evaluation design is then to minimize the threats to validity, while at the same time to suggest the causal relationships. The specific evaluation design elements are considered next. Evaluation Design Elements Tien (3) has found it systematically convenient to describe a program evaluation design in terms of five components or sets of design elements, including test hypotheses, selection scheme, measures framework, measurement methods, and analytic techniques. Test Hypotheses. The test hypotheses component is meant to include the range of issues leading up to the establishment of test hypotheses. In practice, and as indicated in the dynamic roll-back framework, the test hypotheses should be identified only after the program characteristics and threats to validity have been ascertained. The test hypotheses are related to the rationale or objectives of the program and are defined by statements that hypothesize the causal relationships between dependent and independent measures, and it is a purpose of program evaluation to assess or test the validity of these statements. To be tested, a hypothesis should (1) be expressed in terms of quantifiable measures, (2) reflect a specific relationship that is discernible from all other relations, and (3) be amenable to the application of an available and pertinent analytic technique. Thus, for example, in a regression analysis, the test hypothesis takes the form of an equation between a dependent measure and a linear combination of independent measures, while in a before-after analysis with a chi-square test, a simple test hypothesis, usually relating two measures, is used. In the case of a complex hypothesis, it may be necessary to break it down into a series of simpler hypotheses that could each be adequately tested. In this manner, a measure that is the dependent measure in one test could be the independent measure in another test. In general, input measures tend to be independent measures, process measures tend to be both independent and dependent measures, while impact measures tend to be dependent measures. Another difficulty arises in the testing process. Analytic techniques exist for testing the correlation of measures, but correlation does not necessarily imply causation. However, inasmuch as causation implies correlation, it is possible to use the relatively inexpensive correlational approach to weed out those hypotheses that do not survive the correlational test. Furthermore, in order to establish a causal interpretation of a simple or partial correlation, one must have a plausible causal hypothesis (i.e., test hypothesis) and at the same time no plausible rival hypotheses (i.e., threats to validity) that could explain the observed correlation. Thus, the fewer the number of plausible rival hypotheses, the greater is the likeli-
hood that the test hypothesis is not disconfirmed. Alternatively, if a hypothesis is not disconfirmed or rejected after several independent tests, then a powerful argument can be made for its validity. Finally, it should be stated that while the test hypotheses themselves cannot mitigate or control for threats to validity, poor definition of the test hypotheses can threaten statistical conclusion validity, since threats to validity represent plausible rival hypotheses. Selection Scheme. The purpose of this component is to develop a scheme for the selection and identification of test groups and, if applicable, control groups, using appropriate sampling and randomization techniques. The selection process involves several related tasks, including the identification of a general sample of units from a well-designated universe; the assignment of these (perhaps matched) units to at least two groups; the identification of at least one of these groups to be the test group; and the determination of the time(s) that the intervention and, if applicable, the placebo are to be applied to the test and control groups, respectively. A more valid evaluation design can be achieved if random assignment is employed in carrying out each task. Thus, random assignment of units to test and control groups increases the comparability or equivalency of the two groups, at least before the program intervention. There is a range of selection schemes or research designs, including experimental designs (e.g., pretest-posttest equivalent design, Solomon four-group equivalent design, posttest-only equivalent design, factorial designs), quasi-experimental designs (e.g., pretest-posttest nonequivalent design, posttest-only nonequivalent design, interrupted time-series nonequivalent design, regression-discontinuity design, ex post facto designs), and nonexperimental designs (e.g., case study, survey study, cohort study). In general, it can be stated that nonexperimental designs do not have a control group or time period, while experimental and quasi-experimental designs do have such controls even if it is just a before-after control. The difference between experimental and quasi-experimental designs is that the former set of designs have comparable or equivalent test and control groups (i.e., through randomization) while the latter set of designs do not. Although it is always recommended that an experimental design be employed, there are a host of reasons that may prevent or confound the establishment—through random assignment—of equivalent test and control groups. One key reason is that randomization creates a focused inequity because some persons receive the (presumably desirable) program intervention while others do not. Whatever the reason, the inability to establish equivalent test and control groups should not preclude the conduct of an evaluation. Despite their inherent limitations, some quasi-experimental designs are adequate. In fact, some designs (e.g., regression-discontinuity designs) are explicitly nonrandom in their establishment of test and control groups. On the other hand, other quasi-experimental designs should be employed only if absolutely necessary and if great care is taken in their employment. Ex post facto designs belong in this category. Likewise, nonexperimental designs should only be employed if it is not possible to employ an experimental or quasi-experimental design. The longitudinal or cohort study approach, which is a nonexperimental design, is becoming increasingly popular.
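To make the test hypothesis and selection scheme components concrete, the following minimal sketch (not taken from the article; the unit names, group sizes, and counts are hypothetical) randomly assigns units to test and control groups and then applies a chi-square test to a simple before-after, two-measure hypothesis of the kind described above.

# Minimal sketch: random assignment of units to test/control groups and a
# chi-square test of a simple two-measure hypothesis (hypothetical data).
import random
from scipy.stats import chi2_contingency

random.seed(1)

# Pool of experimental units (e.g., patrol beats); the names are illustrative.
units = [f"unit_{i:02d}" for i in range(20)]
random.shuffle(units)                      # random assignment for group equivalency
test_group, control_group = units[:10], units[10:]

# Hypothetical counts of an impact measure (e.g., incidents) before and after
# the intervention, aggregated by group.
#                 before  after
observed = [[120,     85],    # test group (received the intervention)
            [115,    112]]    # control group

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value argues against the rival hypothesis that the before-after
# change is independent of group membership; it does not by itself rule out
# the other threats to validity discussed above.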
In terms of selection scheme factors that could mitigate or control for the various threats to validity, it can be stated that randomization is the key factor. In particular, most, if not all, of the internal and external threats to validity can be mitigated by the experimental designs, which, in turn, can only be achieved through randomization. Thus, random assignment of units—especially matched units—to test and control groups can control for all the threats to internal validity except, perhaps, extraneous events, random identification of a group to be the test group and random determination of time(s) that the intervention is to be applied can control for selection-related interaction threats to internal validity, and random sampling can allow for generalization to the universe from which the sample is drawn. Measures Framework. There are two parts to the measures framework component. First, it is necessary to specify the set of evaluation measures that is to be the focus of the particular evaluation. Second, a model reflecting the linkages among these measures must be constructed. In terms of evaluation measures, Tien (3) has identified four sets of measures: input, process, outcome, and systemic measures. The input measures include program rationale (objectives, assumptions, hypotheses), program responsibility (principal participants, participant roles), program funding (funding level, sources, uses), program constraints (technological, political, institutional, environmental, legal, economic, methodological), and program plan (performance specifications, system design, implementation schedule). The process measures include program implementation (design verification, implementation cost), program operation (system performance, system maintenance, system security, system vulnerability, system reliability, operating cost), and concurrent programs (technological, physical, social). The outcome measures include attitudinal, behavioral, and other impact considerations. The systemic measures include organizational (intraorganizational, interorganizational), longitudinal (input, process, outcome), programmatic (derived performance measures, comparability, transferability, generalizability), and policy (implications, alternatives) considerations. In general, the input and process measures serve to ‘‘explain’’ the resultant outcome measures. Input measures alone are of limited usefulness since they only indicate a program’s potential, not actual, performance. On the other hand, the process measures do identify the program’s performance but do not consider the impact of that performance. Finally, the outcome measures are the most meaningful observations since they reflect the ultimate results of the program. In practice and as might be expected, most of the available evaluations are fairly explicit about the input measures, less explicit about the process measures, and somewhat fragmentary about the outcome measures. The fourth set of evaluation measures, the systemic measures, can also be regarded as impact measures but have been overlooked to a large extent in the evaluation literature. The systemic measures allow the program’s impact to be viewed from at least four systemic perspectives. First, it is important to view the program in terms of the organizational context within which it functions. Thus, the program’s impact on the immediate organization and on other organizations must be assessed. 
Second, the pertinent input, process, and outcome measures must be viewed over time, from a longitudinal per-
spective. That is, the impact of the program on a particular system must be assessed not only in comparison to an immediate ‘‘before’’ period but also in the context of a longer time horizon. Thus, it is important to look at a process measure like, for example, average response time over a five-to-tenyear period to ascertain a trend line, since a perceived impact of the program on the response time may be just a regression artifact. Third, in an overall programmatic context, the evaluator should (1) derive second-order, systems performance measures (e.g., benefit cost and productivity measures) based on the first-order input, process, and outcome measures; (2) compare the program results with findings of other similar programs; (3) assess the potential of transferring the program to other locales or jurisdictions; and (4) determine the extent to which the program results can be generalized. In terms of generalization, it is important not only to recommend that the program be promulgated, but also to define the limits of such a recommendation. Fourth, the first three systemic perspectives can be regarded as program oriented in focus as compared to the fourth perspective, which assesses the program results from a broader policy oriented perspective. In addition to assessing the policy implications, it is important to address other feasible and beneficial alternatives to the program. The alternatives could range from slight improvements to the existing program to recommendations for new and different programs. The second part of the measures framework concerns the linkages among the various evaluation measures. A model of these linkages should contain the hypothesized relationships, including cause-and-effect relationships, among the measures. The model should help in identifying plausible test and rival hypotheses, as well as in identifying critical points of measurement and analysis. In practice, the model could simply reflect a systematic thought process undertaken by the evaluator, or it could be explicitly expressed in terms of a table, a block diagram, a flow diagram, or a matrix. In conclusion, concise and measurable measures can mitigate the measures-related threats to validity. Additionally, the linkage model can help to avert some of the other threats to validity. Measurement Methods. The list of issues and elements that constitute the measurement methods component includes measurement time frame (i.e., evaluation period, measurement points, and measurement durations), measurement scales (i.e., nominal, ordinal, interval, and ratio), measurement instruments (i.e., questionnaires, data collection forms, data collection algorithms, and electromechanical devices), measurement procedures (i.e., administered questionnaires, implemented data collection instruments, telephone interviews, face-to-face interviews, and observations), measurement samples (i.e., target population, sample sizes, sampling technique, and sample representativeness), measurement quality (i.e., reliability, validity, accuracy, and precision), and measurement steps (i.e., data collection, data privacy, data codification, and data verification). Clearly, each of the above indicated measurement elements has been the subject matter of one or more theses, journal articles, and/or books. For example, data sampling, a technique for increasing the efficiency of data gathering by the identification of a smaller sample that is representative of the larger target data set, remains a continuing hot research
area in statistics. The dilemma in sampling is that the larger the sample, the greater the likelihood of representativeness but, likewise, the greater the cost of data collection. Measurement methods that could mitigate or control for threats to validity include a multimeasurement focus, a long evaluation period (which, while controlling for regression artifacts, might aggravate the other threats to internal validity), large sample sizes, random sampling, pretest measurement, and, of course, techniques that enhance the reliability, validity, accuracy, and precision of the measurements. Further, judicious measurement methods can control for the test-setting sensitivity threat to external validity, while practical measurement methods that take into account the political and economic constraints can control for the conduct conclusion threats to validity. Analytic Techniques. Analytic techniques are employed in evaluation or analysis for a number of reasons: to conduct statistical tests of significance; to combine, relate, or derive measures; to assist in the evaluation conduct (e.g., sample size analysis, Bayesian decision models); to provide data adjustments for nonequivalent test and control groups; and to model test and/or control situations. Next to randomization (which is usually not implementable), perhaps the single most important evaluation design element (i.e., the one that can best mitigate or control for the various threats to validity) is, as alluded to above, modeling. Unfortunately, most evaluation efforts to date have made minimal use of this simple but yet powerful tool. Larson (8), for example, developed some simple structural models to show that the integrity of the Kansas City Preventive Patrol Experiment was not upheld during the course of the experiment—thus casting doubt on the validity of the resultant findings. As another example, Tien (9) employed a linear statistical model to characterize a retrospective ‘‘split area’’ research design or selection scheme, which was then used to evaluate the program’s impact. The next section further underscores the importance of evaluation modeling.
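As a small illustration of the sample size analysis mentioned above, the following sketch (illustrative only; the confidence level, margins of error, and proportion are assumed, not taken from the text) uses the standard normal-approximation formula for estimating a proportion and makes the representativeness-versus-cost dilemma explicit.

# Minimal sketch of a sample-size analysis: the normal-approximation formula
# n = z^2 * p*(1-p) / e^2 for estimating a proportion (values are illustrative).
from math import ceil
from statistics import NormalDist

def sample_size(margin_of_error, confidence=0.95, p=0.5):
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)   # two-sided critical value
    return ceil(z**2 * p * (1.0 - p) / margin_of_error**2)

for e in (0.10, 0.05, 0.02, 0.01):
    print(f"margin of error {e:.2f} -> n = {sample_size(e)}")
# Halving the margin of error roughly quadruples the required sample size,
# and hence the cost of data collection.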
EVALUATION MODELING An important area in which evaluation modeling has played a critical role is criminal recidivism, which can be defined as the reversion of a person to criminal behavior after he or she has been convicted of a prior offense, sentenced, and (presumably) corrected. In particular, there have been many evaluations of correctional programs to determine if they work— more specifically, do they reduce the rate of recidivism? Maltz and Pollack (10), for example, show how a population of youths, whose delinquent activity is represented by a stationary stochastic process, can be selected (using reasonable selection rules) to form a cohort that has an inflated rate of delinquent activity before selection. When the activity rate returns to its uninflated rate after the youths are released from the program, an apparent reduction results. Based on this analysis, they conclude that the reductions noted in delinquent activity may be largely due to the way delinquents are selected for correction rather than to the effect of the programs. Thus, they modeled the impact of the regression artifact threat to internal validity.
Ellerman, Sullo, and Tien (11), on the other hand, offer an alternative approach to modeling recidivism by first determining empirical estimates of quantile residual life (QRL) functions, which highlight the properties of the data and serve as an exploratory aid to screening parametric mixture models. The QRL function can be defined as follows. Let T be a random variable (rv) which represents time-to-recidivism and F be its distribution function (df); thus, F(t) = P(T ≤ t)
(1)
Assume that F is absolutely continuous on its interval of support so that its derivative, denoted by f, is the probability density function (pdf) of T. The reliability or survivorship function (sf), denoted by F̄, is F̄(t) = P(T > t) = 1 − F(t)
(2)
F̄(t) is the probability that an ex-prisoner will not recidivate before time t. Let T_x be the time remaining to recidivism given that an ex-prisoner has not yet recidivated at time x or, alternatively, the residual life at time x, that is, T_x ≡ (T − x) | {T > x}
(3)
T_x is a conditional rv with sf F̄_x defined by
F̄_x(t) ≡ P(T − x > t | T > x) ≡ F̄(t + x)/F̄(x)
(4)
The df of T_x is then given by F_x = 1 − F̄_x. For any m in (0, 1), let Q_m(x) = F_x⁻¹(m)
(5)
where F_x⁻¹(·) denotes the inverse function of F_x. Since the df F is assumed to be absolutely continuous, its inverse exists uniquely and hence so does that of the residual life df F_x. The function Q_m(·) is called the m-quantile residual life (QRL) function; Q_m(x) is the m-quantile of the df F_x. For example, while F⁻¹(0.5) is the median of the underlying distribution F, Q_0.5(x) is the median of the residual life distribution F_x. Simple distinctions such as whether Q_m(x) is increasing or decreasing in x for any m is tantamount to a statement concerning recidivism dynamics. It follows from Eqs. (4) and (5) that in terms of the unconditional df F, Q_m(x) = F⁻¹[(1 − m)F(x) + m] − x for x ≥ 0
(6)
If Q_m ≡ Q_m(0) = F⁻¹(m) denotes the m-quantile of the original distribution, then Eq. (6) is equivalent to Q_m(x) = Q_m′ − x
(7)
where
m′ = (1 − m)F(x) + m
(8)
For any distribution for which the inverse function F⁻¹ exists in closed form, Eq. (6) will yield closed-form expressions for the QRL functions. It can be shown, under some mild conditions, that the mean of the distribution is infinite if for any m
lim_{x→∞} dQ_m(x)/dx > m/(1 − m)
(9)
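As an illustration of Eq. (6), the following sketch (not part of the original article; the Weibull scale and shape parameters are assumed for illustration) evaluates the m-quantile residual life for a distribution whose inverse df exists in closed form.

# Minimal sketch: Q_m(x) of Eq. (6) for a Weibull time-to-recidivism model
# with closed-form inverse df (parameter values are illustrative).
import math

def weibull_cdf(t, scale=1.0, shape=0.7):
    return 1.0 - math.exp(-((t / scale) ** shape))

def weibull_inv(u, scale=1.0, shape=0.7):
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def qrl(x, m, cdf, inv):
    """Eq. (6): Q_m(x) = F^(-1)[(1 - m) F(x) + m] - x."""
    return inv((1.0 - m) * cdf(x) + m) - x

for x in (0.0, 0.5, 1.0, 2.0):
    print(f"median residual life at x = {x:.1f}: {qrl(x, 0.5, weibull_cdf, weibull_inv):.3f}")
# With shape < 1 the Weibull is DHR, so the median residual life grows with x;
# for the exponential (shape = 1) it would be constant (memorylessness).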
A common notion in the stochastic recidivism literature is, either by explicit assumption or by inference from the fitted distributions, that recidivism rates decline over time. Applied to social systems or processes, this phenomenon has been called inertia. The concept of inertia is that for y ≥ x, T_y ≥ T_x in some sense. The strongest such sense is that of a decreasing hazard rate (DHR). The most fundamental way of stating this condition is F̄_x(t) = P(T − x > t | T > x) ↑ in x for all t > 0
(10)
where ↑ means nondecreasing. The property in Eq. (10) is equivalent to the statement, for y ≥ x, T_y is stochastically greater than T_x. It is also equivalent to the nonincreasingness of the hazard rate λ(t) = f(t)/F̄(t). The DHR property of Eq. (10) states that the longer an ex-prisoner remains free, the more probable is it that he remains free for an additional time t. It can be shown that Eq. (10) holds if, and only if, Q_m(x) ↑ in x for all m, that is, the DHR property is equivalent to the nondecreasingness of every QRL function. Thus, it is conceivable that Q_m(x) ↑ in x for some m, say the median, so that inertia would be present with respect to the median residual life even when the underlying distribution is not DHR. The DHR property has a dual, namely, the increasing hazard rate (IHR) property defined by the substitution of ↓ for ↑ in Eq. (10). It should be noted that while the terminology and applications contained in this section pertain to the criminal justice area, the proposed QRL approach can be applied in other contexts as well, for example, nursing home length of stay, disease latency and survivability, and reliability engineering. Empirical estimates of quantile residual life functions can be employed not only to obtain properties of recidivism, but also to help screen parametric mixture models. In this manner, the Burr model is demonstrated to be an appropriate model for characterizing recidivism. The Burr is actually a mixture of Weibull distributions; its sf is F̄(t) = (1 + βt^ρ)^(−α), t > 0
(11)
where α, β, ρ > 0. The hazard rate of the Burr is λ(t) = αβρt^(ρ−1)/(1 + βt^ρ), t > 0
(12)
which is strictly decreasing for ρ ≤ 1 (as expected because then the Weibulls being mixed are DHR), while for ρ > 1, that is, for a mixture of IHR Weibulls, λ(t) is increasing on [0, x_r], where x_r = [β⁻¹(ρ − 1)]^(1/ρ)
(13)
and then decreases on (x_r, ∞). Thus, while mixtures of nonincreasing hazard rate distributions are strictly DHR, mixtures of IHR distributions are not necessarily IHR. As applied to criminal recidivism, then, the Burr model suggests that although the observed declining recidivism rate can be explained by population heterogeneity, individual recidivism rates may in fact be increasing. This understanding
can have a significant impact on public policy. For example, the observation that there is an initial high recidivism rate among cohorts of prison releasees has led some criminologists to contend that adjustment problems (i.e., ‘‘postrelease trauma’’) during a ‘‘critical period’’ soon after release result in intensified criminal activity. This observation has resulted in various correctional programs, such as halfway houses, intensified parole supervision, and prison furloughs, designed to alleviate postrelease stress and minimize recidivism. But these programs may be based on a misinterpretation of recidivism data. There may very well be a ‘‘critical period’’ after release; however, the observed high recidivism rate during the critical period, followed by what seems to be a declining rate, may be an artifact of population heterogeneity and, if so, should not be associated with individual patterns of recidivism. Thus, basing postrelease supervision programs on inferences made from aggregate data—when such inferences concern individual behavior—is risky business. In sum, evaluation modeling is critical to any system or program evaluation. In many situations, as is the case above for criminal recidivism, it provides for a ‘‘control’’ framework within which the system or program performance is analyzed or understood.
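The following sketch (with illustrative, hypothetical parameter values, not estimates from the article) evaluates the Burr survivor function of Eq. (11), its hazard rate of Eq. (12), and the turning point of Eq. (13), making visible an aggregate hazard that rises and then falls even though the mixed Weibull components are IHR.

# Minimal sketch of the Burr model: survivor function, hazard rate, and the
# hazard turning point (parameter values are hypothetical).
alpha, beta, rho = 2.0, 1.0, 1.5

def burr_sf(t):
    return (1.0 + beta * t ** rho) ** (-alpha)                          # Eq. (11)

def burr_hazard(t):
    return alpha * beta * rho * t ** (rho - 1.0) / (1.0 + beta * t ** rho)  # Eq. (12)

x_r = (beta ** -1 * (rho - 1.0)) ** (1.0 / rho)                          # Eq. (13)
print(f"hazard increases on [0, {x_r:.3f}] and decreases thereafter")
for t in (0.2, x_r, 2.0, 5.0):
    print(f"t = {t:.3f}: survivor = {burr_sf(t):.3f}, hazard = {burr_hazard(t):.3f}")
# The aggregate (population) hazard eventually declines even though each mixed
# Weibull component with rho > 1 has an increasing hazard, which is the point
# made above about population heterogeneity versus individual recidivism rates.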
OBSERVATION In the continued development and operation of a system or program, it is obvious that broad evaluation efforts cannot be continuously carried out; indeed, such efforts should only be undertaken on an intermittent—and as needed—basis. The question then arises: What, if anything, should be done in between these system evaluations? The answer can perhaps be found in the health field; in particular, in the way a physician conducts a physical examination. At the start of the examination, the doctor checks a basic set of indicators (e.g., blood pressure, temperature, heart rate). If any of these primary indicators signals a potential problem, measurements of other indicators that dig deeper into the body’s systems are taken (e.g., blood test, xray, CT scan). If any of these secondary indicators suggests a problem, then other tertiary indicators (e.g., colonoscopy, biopsy) may be ascertained. As the doctor digs deeper and deeper, the root cause of the problem or symptom is discovered and appropriate actions are taken to correct the problem. In other words, a layered approach is taken toward monitoring, diagnosing, and improving a person’s physical health. Similarly, for example, in assessing the ‘‘health’’ or performance of a system, a layered approach could be employed, starting with broad, easy-to-obtain measures and continuing, if necessary, with more focused measures. In fact, as suggested in Fig. 1, three layers of measures, primary, secondary, and tertiary, would probably be sufficient. This approach should also include a method for combining at least the primary measures into, say, a system performance index (SPI) that could be used to help assess the system status on an ongoing, continuous basis, just as the Dow Jones Industrial Average serves to gauge stock market performance on a continuous basis. A system with a low SPI would need to acquire secondary and/or tertiary measures in order to identify appropriate strategies for improving its SPI.
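One simple way such an SPI could be formed is sketched below; the measures, targets, and weights are hypothetical and are meant only to illustrate the idea of a single weighted index over the primary measures.

# Minimal sketch (hypothetical measures, targets, and weights) of a weighted
# system performance index (SPI) over the primary measures.
primary_measures = {
    # name: (current value, target value, weight, higher_is_better)
    "average_response_time": (6.2, 5.0, 0.4, False),   # minutes
    "system_availability":   (0.97, 0.99, 0.4, True),  # fraction of time up
    "cost_per_call":         (27.0, 25.0, 0.2, False), # dollars
}

def spi(measures):
    score = 0.0
    for value, target, weight, higher_is_better in measures.values():
        ratio = (value / target) if higher_is_better else (target / value)
        score += weight * min(ratio, 1.0)   # cap at 1.0: exceeding a target adds nothing
    return score

print(f"SPI = {spi(primary_measures):.2f}  (1.00 would mean all targets met)")
# A low SPI would trigger the secondary and tertiary indicator checks of Fig. 1.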
[Figure 1. A continuous layered approach to the monitoring, diagnosis, and improvement of systems: the system performance index (SPI) is monitored continuously; a potential problem triggers checks of the primary, then secondary, then tertiary performance indicators until the problem is identified, at which point the problem is specified and appropriate improvement actions are identified and implemented.]
The continuous layered approach to the monitoring, diagnosis and improvement of systems (CLAMDIS) can only be promulgated if the initial system evaluation yields a pertinent set of primary, secondary, and tertiary indicators and demonstrates how the primary indicators can be combined into an overall SPI. An effective system evaluation must identify such pertinent indicators; otherwise, the evaluation would be, at best, a one-time assessment of the system’s performance. In this regard, CLAMDIS would be complementary to the evaluation approach presented herein.
BIBLIOGRAPHY
1. E. Chelimsky and W. R. Shadish, Jr. (eds.), Evaluation for the 21st Century, Thousand Oaks, CA: Sage Publications, Inc., 1997.
2. J. M. Tien, Program evaluation: A systems and model-based approach, in A. P. Sage (ed.), Concise Encyclopedia of Information Processing in Systems and Organizations, New York: Pergamon, 1990.
3. J. M. Tien, Toward a systematic approach to program evaluation design, IEEE Trans. Syst. Man. Cybern., 9: 494–515, 1979.
4. K. W. Colton, M. L. Brandeau, and J. M. Tien, A National Assessment of Command, Control, and Communications Systems, Washington, DC: National Institute of Justice, 1982.
5. J. M. Tien and M. F. Cahn, Commercial security field test program: A systematic evaluation of the impact of security surveys, in D. P. Rosenbaum (ed.), Preventing Crime in Residential and Commercial Areas, Beverly Hills, CA: Sage Publications, 1986.
6. J. M. Tien and T. F. Rich, Early Experiences with Criminal History Records Improvement, Washington, DC: Bureau of Justice Assistance, 1997.
7. D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research, Chicago: Rand McNally, 1966.
8. R. C. Larson, What happened to patrol operations in Kansas City? A review of the Kansas City preventive patrol experiment, J. Crim. Just., 3: 267–297, 1975.
9. J. M. Tien, Evaluation design: Systems and models approach, in M. G. Singh (ed.), Systems and Control Encyclopedia, New York: Pergamon Press, 1988.
10. M. D. Maltz and S. M. Pollock, Artificial inflation of a delinquency rate by a selection artifact, Op. Res., 28 (3): 547–559, 1980.
11. R. Ellerman, P. Sullo, and J. M. Tien, An alternative approach to modeling recidivism using quantile residual life functions, Op. Res., 40 (3): 485–504, 1992.
JAMES M. TIEN Rensselaer Polytechnic Institute
Wiley Encyclopedia of Electrical and Electronics Engineering
Systems Architecture
Standard Article
Alexander H. Levis, George Mason University, Fairfax, VA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7114
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (106K)
The sections in this article are: Definition of Architectures, Structured Analysis Approach, Object Oriented Approach, Summary.
SYSTEMS ARCHITECTURE
Rapid changes in information technology, uncertainty regarding future requirements, and increasing complexity of systems have led to the use of systems architectures as a key step in the systems engineering process. While hardware and software components of a system can change over time, the
underlying architecture remains invariant. This allows graceful upgrading of systems through the use of components from many manufacturers whose products conform to the architecture. The use of systems architectures has been very effective in telecommunication systems, in software development, and in computers. It is now being extended to large-scale information systems. Two basic paradigms are available for designing and evaluating systems and architectures: the structured analysis and the object oriented approaches. Both require multiple models to represent them and both lead, through different routes, to executable models. The latter are appropriate for analyzing the behavior of the architecture and for evaluating performance prior to system implementation. Systems architecting has been defined as the process of creating complex, unprecedented systems (1,2). This description fits the computer-based systems that are being created or planned today, whether in industry, government, or academia. The requirements of the marketplace are illdefined and rapidly changing with evolving technology making possible the offering of new services at a global level. At the same time, there is increasing uncertainty as to the way in which they will be used, what components will be incorporated, and the interconnections that will be made. Generating a system architecture as part of the systems engineering process can be seen as a deliberate approach to manage the uncertainty that characterizes these complex, unprecedented systems. The word architecture derives from the Greek word architecton which means master mason or master builder. The architect, now as then, is a member of the team that is responsible for designing and building a system; then the systems were edifices, now they are computer-based and software intensive. Indeed, the system architect’s contribution comes in the very early stages of the systems engineering process, at the time when the operational concept is defined and the conceptual model of the system is developed. Consequently, the design of a system’s architecture is a top-down process, going from the abstract and general to the concrete and specific. Furthermore, it is an iterative process. The process of developing an architecture in response to requirements (that are ill-structured because of multiple uncertainties) forces their re-examination. Ambiguities are identified and resolved and, when inconsistencies are discovered, the requirements themselves are reformulated. One thinks of system architectures when the system in question consists of many diverse components. A system architect, while knowledgeable about the individual components, needs to have a good understanding of the inter-relationships among the components. While there are many tools and techniques to aid the architect and there is a well-defined architecture development process, architecting requires creativity and vision because of the unprecedented nature of the systems being developed and the ill-defined requirements. For detailed discussions on the need for systems architecting, see Refs. 1–7. Many of the methodologies for systems engineering have been designed to support a traditional system development model. A set of requirements is defined; several options are considered and, through a process of elimination, a final design emerges that is well defined. This approach, based on
structured analysis and design, has served the needs of systems engineers and has produced many of the complex systems in use today. It is effective when the requirements are well-defined and remain essentially constant during the system development period. However, this well-focused approach cannot handle change well; its strength lies in its efficiency in designing a system that meets a set of fixed requirements. An alternative approach with roots in software systems engineering is emerging that is better able to deal with uncertainty in requirements and in technology, especially for systems with long development time and expected long life cycle during which both requirements and technology will change. This approach is based on object oriented constructs. The problem is formulated in general terms and the requirements are more abstract and, therefore, subject to interpretation. The key advantage of the object oriented approach is that it allows flexibility in the design as it evolves over time.
DEFINITION OF ARCHITECTURES In defining an architecture, especially of an information system, the following items need to be described. First, there are processes that need to take place in order that the system accomplish its intended functions; the individual processes transform either data or materials that flow between them. These processes or activities or operations follow some rules that establish the conditions under which they occur; furthermore, they occur in some order that need not be deterministic and depends on the initial conditions. There is also need to describe the components that will implement the design: the hardware, software, personnel, and facilities that will be the system. This fundamental notion leads to the definition of two architectural constructs: the functional architecture and the physical architecture. A functional architecture is a set of activities or functions, arranged in a specified partial order that, when activated, achieves a set of requirements. Similarly, a physical architecture is a representation of the physical resources, expressed as nodes, that constitute the system and their connectivity, expressed in the form of links. Both definitions should be interpreted broadly to cover a wide range of applications; furthermore, each may require multiple representations or views to describe all aspects. Before even attempting to develop these representations, the operational concept must be defined. This is the first step in the architecture development process. An operational concept is a concise statement that describes how the goal will be met. There are no analytical procedures for deriving an operational concept for complex, unprecedented systems. On the contrary, given a set of goals, experience, and expertise, humans invent operational concepts. It has often been stated (1) that the development of an architecture is both an art and a science. The development of the conceptual model that represents an operational concept falls clearly on the art side. A good operational concept is based on a simple idea of how the over-riding goal is to be met. For example, ‘‘centralized decision making and distributed execution’’ represents a very abstract operational concept that lends itself to many possible implementations, while an operational concept such as the ‘‘client-server’’ one is much more limiting. As the architecture development process unfolds, it becomes necessary to elabo-
[Figure 1. The three phase process of architecture development: the operational concept and the technical architecture feed the analysis phase, which produces the functional and physical architectures; the synthesis phase combines these with the dynamics model to obtain the executable model; the evaluation phase yields the MOP and MOE.]
rate on the operational concept and make it more specific. The clear definition and understanding of the operational concept is central to the development of compatible functional and physical architectures. Analogous to the close relationship between the operational concept and the functional architecture is the relationship between the technical architecture and the physical one. A technical architecture is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. It provides the framework upon which engineering specifications can be derived, guiding the implementation of the system. It has often been compared to the building code that provides guidance for new buildings to be able to connect to the existing infrastructure by characterizing the attributes of that infrastructure. All these representations are static ones; they consist of diagrams. In order to analyze the behavior of the architecture and evaluate the performance characteristics, an executable model is needed. An executable model is a dynamic model; it can be used to analyze the properties of the architecture and it can also be used to carry out simulations. Both methodologies, whether structured analysis based or object oriented based, become rigorous when an executable model is derived and the condition is imposed that all information contained in that model must be traced back to one or more of the static diagrams. This dynamic model of the architecture is called the operational-X architecture where the X stands for the executable property. The architecture development process can be characterized as consisting of three phases: the analysis phase in which the static representations of the functional and physical architectures are obtained using the operational concept to drive the process and the technical architecture to guide it; the synthesis phase in which these static constructs are used, together with descriptions of the dynamic behavior of the architecture (often referred to as the dynamics model), to obtain the executable model of the architecture, and the evaluation phase in which measures of performance (MOP) and measures of effectiveness (MOE) are obtained. This three phase process is shown schematically in Fig. 1. STRUCTURED ANALYSIS APPROACH The structured analysis approach has its roots in the structured analysis and design technique (SADT) that originated
in the 1950s (8) and encompasses structured design (9), structured development (10), the structured analysis approach of DeMarco (11), structured systems analysis (12), and the many variants that have appeared since then, often embodied in software packages for computer-aided requirements generation and analysis. This approach can be characterized as a process-oriented one (12) in that it considers as the starting point the functions or activities that the system must perform. A second characterizing feature is the use of functional decompositions and the resulting hierarchically structured diagrams. However, to obtain the complete specification of the architecture, as reflected in the executable model, in addition to the process or activity model, a data model, a rule model, and a dynamics model are required. Each one of these models contains inter-related aspects of the architecture description. For example, in the case of an information system, the activities or processes receive data as input, transform them in accordance with some conditions, and produce data as output. The associated data model describes the relationships between these same data elements. The conditions that must be satisfied are expressed as rules associated with the activities. But for the rules to be evaluated, they require data that must be available at that particular activity with which the rule is associated; the output of the rule also consists of data that control the execution of the process. Furthermore, given that the architecture is for a dynamic system, the states of the system need to be defined and the transitions between states identified to describe the dynamic behavior. State transition diagrams are but one way of representing this information. Underlying these four models is a data dictionary or, more properly, a system dictionary, in which all data elements, activities, and flows are defined. The construct that emerges from this description is that a set of inter-related views, or models, are needed to describe an architecture using the structured analysis approach. The activity model, the data model, the rule model and the supporting system dictionary, taken together, constitute the functional architecture of the system. The term functional architecture has been used to describe a range of representations—from a simple activity model to the set of models defined here. The structure of the functional architecture is shown in Fig. 2. At this time, the architect must use a suite of tools and, cognizant of the inter-relationships among the four models and the features of the tools chosen to depict them, work across models to make the various views consistent and co-
[Figure 2. Structure of the functional architecture. The functional architecture contains an activity model, a data model, and a rule model, supported by a common system dictionary; the three models must be in concordance with each other.]
herent, that is, to achieve model concordance. The architect must obtain a single, integrated system dictionary from the individual dictionaries produced by the various tools that generate the different views. What a functional architecture does not contain is the specification of the physical resources that will be used to implement the functions or the structure of the human organization that is supported by the information system. These descriptions are contained in the physical architecture. Activity Model A method in wide use for the representation of an activity model is IDEF0 which has systems engineering roots; for its history, see (8). The National Institute of Standards and Technology (NIST) published Draft Federal Information Processing Standard #183 for IDEF0. IDEF0 is a modeling language for developing structured graphical representations of the activities or functions of a system. It is a static representation, designed to address a specific question from a single point of view. It has two graphical elements: a box, which represents an activity, and a directed arc that represents the conveyance of data or objects related to the activity. A distinguishing characteristic of IDEF0 is that the sides of the activity box have a standard meaning, as shown in Fig. 3. Arcs entering the left side of the activity box are inputs, the top side are controls, and the bottom side are mechanisms or resources used to perform the activity. Arcs leaving the right side are outputs—the data or objects generated by the activity. When IDEF0 is used to represent the process model in a functional architecture, mechanisms are not needed; they are part of the physical architecture. Verbs or verb phrases are inscribed in the activity boxes to define the function represented. Similarly, arc inscriptions
[Figure 3. Box and arrow semantics in IDEF0: inputs enter the left side of the activity box, controls the top, mechanisms the bottom, and outputs leave the right side.]
are used to identify the data or objects represented by the arcs. There are detailed rules for handling the branching and the joining of the arcs. A key feature of IDEF0 is that it supports hierarchical decomposition. At the highest level, the A-0 level, there is a single activity that contains the root verb of the functional decomposition. This is called the context diagram and also includes a statement of the purpose of the model and the point of view taken. The next level down, the A0 level, contains the first level decomposition of the system function and the interrelationships between these functions. Each one of the activity boxes on the A0 page can be further decomposed into the A1, A2, A3, . . . page, respectively. Associated with IDEF0 is a data dictionary which includes the definitions and descriptions of the activities, listing and description of the inputs, controls, and outputs, and, if entered, a set of activation rules of the form ‘‘preconditions → postconditions.’’ These are the rules that indicate the conditions under which the associated function can be carried out. Data Model The purpose of a data model is to analyze the data structures and their relationships independently of the processing that takes place, already depicted in the activity model. There are two main approaches with associated tools for data modeling: IDEF1x and entity-relationship (E-R) diagrams. Both approaches are used widely. The National Institute of Standards and Technology has published Draft Federal Information Processing Standard #184 in which IDEF1x is specified. There are many books that describe E-R diagrams: Sanders (14), Yourdon (15), McLeod (16). IDEF1x (IDEF1 extended) is a modeling language for representing the structure and semantics of the information in a system. The elements of IDEF1x are the entities, their relationships or associations, and the attributes or keys. An IDEF1x model is comprised of one or more views, definitions of the entities, and the domains of the attributes used in the views. An entity is the representation of a set of real or abstract objects that share the same characteristics and can participate in the same relationships. An individual member of the set is called an entity instance. An entity is depicted by a box; it has a unique name and a unique identifier. If an instance of an entity is identifiable with reference to its relationship to other entities, it is called identifier dependent. The box depicting the entity instance is divided into two parts: the top part contains the primary key attributes; the lower one the nonprimary key attributes. Every attribute must have a name (expressed as a noun or noun phrase) that is unique among all attributes across the entities in the model. The attributes take values from their specified domains. Relationships between entities are depicted in the form of lines that connect entities; a verb or verb phrase is placed beside the relationship line. The connection relationship is directed—it establishes a parent–child association—and has cardinality. Special symbols are used at the ends of the lines to indicate the cardinality. The relationships can be classified into types such as identifying or non-identifying, specific and nonspecific, and categorization relationships. The latter, for example, is a generalization/specialization relationship in which an attribute of the generic entity is used as the discriminator for the categories.
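Referring back to the activity model and system dictionary described above, the following sketch (hypothetical activity names and flows; not the IDEF0 standard itself) shows one way the boxes, arcs, and decomposition of an IDEF0 model could be captured for later concordance checks.

# Minimal sketch (hypothetical names) of an IDEF0 activity model captured as
# data structures, with a simple dictionary concordance check.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    node: str                      # e.g., "A0", "A1", "A2"
    name: str                      # verb phrase inscribed in the box
    inputs: List[str] = field(default_factory=list)      # left side
    controls: List[str] = field(default_factory=list)    # top side
    outputs: List[str] = field(default_factory=list)     # right side
    mechanisms: List[str] = field(default_factory=list)  # bottom side (physical arch.)
    children: List["Activity"] = field(default_factory=list)  # decomposition

a1 = Activity("A1", "Assess incident", inputs=["incident report"],
              controls=["dispatch policy"], outputs=["assessed incident"])
a2 = Activity("A2", "Allocate resources", inputs=["assessed incident"],
              controls=["resource availability"], outputs=["dispatch order"])
a0 = Activity("A0", "Manage incident response", inputs=["incident report"],
              controls=["dispatch policy", "resource availability"],
              outputs=["dispatch order"], children=[a1, a2])

# Concordance check: every flow consumed by a child activity must either be
# produced by a sibling activity or enter through the parent's own boundary.
produced = {o for a in (a1, a2) for o in a.outputs}
boundary = set(a0.inputs) | set(a0.controls)
unresolved = [f for a in (a1, a2) for f in a.inputs + a.controls
              if f not in produced | boundary]
print("unresolved flows:", unresolved or "none")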
Rule Model In a rule oriented model, knowledge about the behavior of the architecture is represented by a set of assertions that describe what is to be done when a set of conditions evaluates as true. These assertions, or rules, apply to specific functions defined in the activity model and are formulated as relationships among data elements. There are several specification methods that are used depending on the application. They include decision trees, decision tables, structured English, and mathematical logic. Each one has advantages and disadvantages; the choice often depends on the way that knowledge about rules has been elicited and on the complexity of the rules themselves. Dynamics Model The fourth type of model that is needed is one that characterizes the dynamic behavior of the architecture. This is not an executable model, but one that shows the transition of the system state as a result of events that take place. The state of a system can be defined as all the information that is needed at some time t0 so that knowledge of the system and its inputs from that time on determines the outputs. The state space is the set of all possible values that the state can take. There is a wide variety of tools for depicting the dynamics, with some tools being more formal than others: state transition diagrams, state charts, event traces, key threads, and so on. Each one serves a particular purpose and has unique advantages. For example, a state transition diagram is a representation of a sequence of transitions from one state to another—as a result of the occurrence of a set of events—when starting from a particular initial state or condition. The states are represented by nodes (e.g., a box) while the transitions are shown as directed arcs. The event that causes the transition is shown as an arc annotation, while the name of the state is inscribed in the node symbol. If an action is associated with the change of state, then this is shown on the connecting arc, next to the event. Often, the conditions that must be satisfied in order for a transition to occur are shown on the arcs. This is an alternative approach for documenting the rule model. System Dictionary and Concordance of Models Underlying all these four models is the system dictionary. Since the individual models contain overlapping information, it becomes necessary to integrate the dictionaries developed for each one of them. Such a dictionary must contain descriptions of all the functions or activities including what inputs they require and what outputs they produce. These functions appear in the activity model (IDEF0), the rule model (as actions), and the state transition diagrams. The rules, in turn, are associated with activities; they specify the conditions that must hold for the activity to take place. For the conditions to be evaluated, the corresponding data must be available at the specific activity—there must be an input or control in the IDEF0 diagram that makes that data available to the corresponding activity. Of course, the system dictionary contains definitions of all the data elements as well as the data flows that appear in the activity model.
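As a small illustration of the rule and dynamics models just described, the sketch below (hypothetical states, events, and guards, not drawn from the article) encodes a state transition table in which each transition carries a rule of the ‘‘preconditions → postconditions’’ form evaluated over the data available at that point.

# Minimal sketch (hypothetical states and events) of a state transition table
# whose transitions are guarded by rules over available data.
transitions = {
    # (current state, event): (guard over available data, next state, action)
    ("idle",        "incident_reported"): (lambda d: d.get("report") is not None,
                                           "assessing", "log report"),
    ("assessing",   "assessment_done"):   (lambda d: d.get("severity", 0) > 0,
                                           "dispatching", "select resources"),
    ("dispatching", "units_en_route"):    (lambda d: True,
                                           "monitoring", "track units"),
}

def step(state, event, data):
    guard, next_state, action = transitions[(state, event)]
    if not guard(data):                  # precondition not met: no transition occurs
        return state, None
    return next_state, action

state = "idle"
for event, data in [("incident_reported", {"report": "R-17"}),
                    ("assessment_done", {"severity": 2})]:
    state, action = step(state, event, data)
    print(f"event {event!r}: action = {action!r}, new state = {state!r}")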
The process of developing a consistent and comprehensive dictionary provides the best opportunity for ensuring concordance among the four models. Since each model has a different root and was developed to serve a different purpose, together they do not constitute a well integrated set. Rather, they can be seen as a suite of tools that collectively depict sufficient information to specify the architecture. The interrelationships among models are complex. For example, rules should be associated with the functions at the leaves of the functional decomposition tree. This implies that, if changes are made in the IDEF0 diagram, then the rule model should be examined to determine whether rules should be reallocated and whether they need to be restructured to reflect the availability of data in the revised activity model. A further implication is that the four models cannot be developed in sequence. Rather, the development of all four should be planned at the beginning with ample opportunity provided for iteration, because if changes are made in one, they need to be reflected in the other models. Once concordance of these models has been achieved, it is possible to construct an executable model. Since the physical architecture has not been constructed yet, the executable model can only be used to address logical and behavioral issues, but not performance issues. The Executable Model Colored petri nets (17) are an example of a mathematically rigorous approach but with a graphical interface designed to represent and analyze discrete event dynamical systems. They can be used directly to model an architecture. The problem, however, is to derive a dynamic representation of the system from the four static representations. The solution to this problem using the structured analysis models can be described as follows. One starts with the activity model. Each IDEF0 activity is converted into a petri net transition; each IDEF0 arrow connecting two boxes is replaced by an arc-place-arc construct, and the label of the IDEF0 arc becomes the color set associated with the place. All these derived names of color sets are gathered in the global declaration node of the petri net. From this point on, a substantial modeling effort is required to make the colored petri net model a dynamic representation of the system. The information contained in the data model is used to specify the color sets and their respective domains, while the rules in the rule model result in arc inscriptions, guard functions, and code segments. The executable model becomes the integrator of all the information; its ability to execute tests some of the logic of the model. Given the colored petri net model, a number of analytical tools from petri net theory can be used to evaluate the structure of the model, for example, to determine the presence of deadlocks, or obtain its occurrence graph. The occurrence graph represents a generalization of the state transition diagram model. By obtaining the occurrence graph of the petri net model, which depicts the sequence of states that can be reached from an initial marking (state) with feasible firing sequences, one has obtained a representation of a set of state transition diagrams. This can be thought as a first step in the validation of the model at the behavioral level. Of course, the model can be executed to check its logical consistency, that is, to check whether the functions are executed in the appro-
Physical Architecture

To complete the analysis phase of the procedure, the physical architecture needs to be developed. There is no standardized way to represent the physical systems, existing ones as well as planned ones, that will be used to implement the architecture. They range from wiring diagrams of systems to block diagram representations to node models to organization charts. While there is not much difficulty in describing in a precise manner physical subsystems using the terminology and notation of the particular domain (communication systems, computers, displays, data bases), a problem arises in depicting the human organization that is an integral part of the information system. The humans in the organization cannot be thought of simply as users; they are active participants in the workings of the information system, and their organizational structure, which includes task allocations, authority, responsibility, reporting requirements, and so on, must be taken into account and be a part of the physical model description. This is an issue of current research, since traditional organizational models do not address explicitly the need to include the human organization as part of the physical system description.
Performance Evaluation

The executable model can be used at the logical and behavioral levels as well as at the performance level. The latter requires the inclusion of the physical architecture. In one consistent architectural framework supported by a set of models, requirement analysis, design, and evaluation can all be performed. Furthermore, the process provides a documented set of models that collectively contain all the necessary information. Note that any changes made during the construction of the executable model must be fed back and shown in the static models. Measures of performance (MOP) are obtained either analytically or by executing the model in simulation mode. For example, if deterministic or stochastic time delays are associated with the various activities, it is possible to compute the overall delay or to obtain it through simulation. Depending on the questions to be answered, realistic scenarios of inputs need to be defined that are consistent with the operational concept. This phase allows functional and performance requirements to be validated, if the results obtained from the simulations show that the measures of performance are within the required range. If not, the systems may need to be modified to address the issues that account for the encountered problems. However, the structured analysis approach is not very flexible; it cannot handle major changes that may occur during the development and implementation process. An alternative approach, which uses many of the same tools, has begun to be used in an exploratory manner.
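As a toy illustration of obtaining a delay MOP by simulation, the sketch below samples hypothetical activity delays and averages the end-to-end delay over many runs; the activity names and delay distributions are invented for the example and do not correspond to any particular executable model.

```python
# Monte Carlo estimate of an end-to-end delay MOP for activities executed in sequence.
import random

activity_delays = {                      # illustrative delays, in seconds
    "Assess Situation": lambda: random.expovariate(1 / 2.0),   # stochastic, mean 2 s
    "Plan Response":    lambda: random.uniform(1.0, 3.0),      # stochastic, 1-3 s
    "Execute":          lambda: 4.0,                           # deterministic delay
}

def simulate_total_delay(runs: int = 10_000) -> float:
    total = 0.0
    for _ in range(runs):
        total += sum(sample() for sample in activity_delays.values())
    return total / runs

print(f"Estimated end-to-end delay: {simulate_total_delay():.2f} s")
```

With only deterministic delays the same figure could be computed analytically; the simulation route becomes useful once branching, contention, or stochastic behavior enters the model.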
Synthesis

Once the physical architecture is available, then the executable model of the architecture shown in Fig. 1 can be obtained. The process is described in Fig. 4 as the synthesis phase. The required interrelationship between the functional and the physical architectures is shown by the bold two-way arrow. It is critical that the granularity of the two architectures be comparable and that the partitions in the hierarchical decompositions allow functions or activities to be assigned unambiguously to resources and vice versa. Once the parameter values and properties of the physical systems have become part of the data base of the executable model, performance evaluation can take place.
[Figure 4 shows the operational concept, the functional architecture with its dynamics model, and the physical architecture combining into the executable model.]
Figure 4. The synthesis phase. The executable model is obtained by assigning resources to functions and using the dynamics model to specify behavior.
OBJECT ORIENTED APPROACH

This approach allows for the graceful migration from one option to another, in a rapid and low-cost manner; it places emphasis on system integration rather than on doing one-of-a-kind designs. The fundamental notion in object oriented design is that of an object, an abstraction that captures a set of variables which correspond to actual real world behavior (18). The boundary of the object that hides the inner workings of the object from other objects is clearly defined. Interactions between objects occur only at the boundary through the clearly stated relationships with the other objects. The selection of objects is domain specific. A class is a template, description, or pattern for a group of very similar objects, namely, objects that have similar attributes, common behavior, common semantics, and common relationship to other objects. In that sense, an object is an instance of a class. For example, "air traffic controller" is an object class; the specific individual that controls air traffic during a particular shift at an Air Traffic Control center is an object—an instantiation of the abstraction "air traffic controller." The concept of object class is particularly relevant in the design of information systems, where it is possible to have hardware, software, or humans perform some tasks. At the higher levels of abstraction, it is not necessary to specify whether some tasks will be performed by humans or by software running on a particular platform. Encapsulation is the process of separating the external aspects of an object from the internal ones; in engineering terms, this is defining the boundary and the interactions that
cross the boundary—the black-box paradigm. This is a very natural concept in information system design; it allows the separation of the internal processes from the interactions with other objects, either directly or through communication systems. Modularity is another key concept that has a direct, intuitive meaning. Modularity, according to Booch (19), is the property of a system that has been decomposed into a set of cohesive and loosely coupled modules. Consider, for example, the corporate staff, the line organization, and the marketing organization of a company. Each module consists of objects and their interactions; the assumption here is that the objects within a module have a higher level of interaction than there is across the modules. In the context of object oriented design, hierarchy refers to the ranking or ordering of abstractions, with the more general one at the top and the most specific one at the bottom. An ordering is induced by a relation, and the ordering can be strict or partial. In the object oriented paradigm, two types of ordering relations are recognized: aggregation and inheritance. Aggregation refers to the ability to create objects composed of other objects, each part of the aggregate object. The concept of aggregation provides the means of incorporating functional decompositions from structured analysis in the object oriented approach. Inheritance is the means by which an object acquires characteristics (attributes, behaviors, semantics, and relationships) from one or more other objects (20). In single inheritance, an object inherits characteristics from only one other object; in multiple inheritance, from more than one object. Inheritance is a way of representing generalization and specialization. The navigator in an air crew inherits all the attributes of the air crew member object class, but has additional attributes that specialize the object class. The pilot and the copilot are different siblings of the air crew object class. The object modeling technique (OMT) of Rumbaugh et al. (21) requires three views of the system: the object view, the functional view, and the dynamic view. The object view is represented by the object model that describes the structure of the system—it is a static description of the objects and it shows the various object classes and their hierarchical relationships. The functional view is represented in terms of data flow diagrams, an alternative to IDEF0, that depict the dependencies between input data and computed values in the system. The dynamic view is represented in terms of state transition diagrams. While these three views are adequate for object oriented software system design, they are not sufficient to represent an architecture and answer users' questions. As in the structured analysis approach, an executable model is needed to bring them all together and to provide a means for performance evaluation.
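The class, instance, and inheritance notions just described can be written directly in an object-oriented language; the following Python fragment uses the air-crew illustration from the text, with attribute and method names that are purely illustrative.

```python
# Hedged sketch of class, object, and single inheritance using the air-crew illustration.
class AirCrewMember:
    def __init__(self, name: str, flight_hours: int):
        self.name = name
        self.flight_hours = flight_hours

    def report_status(self) -> str:
        return f"{self.name}: {self.flight_hours} flight hours"

class Navigator(AirCrewMember):                  # specialization of AirCrewMember
    def __init__(self, name: str, flight_hours: int, nav_system: str):
        super().__init__(name, flight_hours)     # inherits all attributes of the parent class
        self.nav_system = nav_system             # additional attribute that specializes the class

    def plot_course(self, destination: str) -> str:
        return f"{self.name} plots course to {destination} using {self.nav_system}"

class Pilot(AirCrewMember):                      # a different sibling of the same parent class
    def fly(self) -> str:
        return f"{self.name} is flying"

# An object is an instance of a class:
nav = Navigator("J. Smith", 1200, "INS/GPS")
print(nav.report_status())      # inherited behavior
print(nav.plot_course("EGLL"))  # specialized behavior
```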
The Object View

The object view presents the static structure of the object classes and their relationships. The object view is a diagram that is similar to the data model, but in place of the data entities there are object classes. An object class is depicted by a box divided into three parts: the top part contains the name of the class; the second part contains the attributes (they are the data values held by all the objects in the class); and the third part contains the class operations. These are the functions or transformations of the class that can be applied to the class or by it. The lines connecting the object classes represent relationships. These relationships have cardinality (one to one, one to many, etc.). In addition to the generalization and inheritance relationships, the lines also represent associations: they show how one class accesses the attributes or invokes the operations of another.

The Functional View

The functional view consists of a set of data flow diagrams that are analogous to the activity models in structured analysis. A data flow diagram, as used in the object modeling technique, depicts the functional relationships of the values computed by the system; it specifies the meaning of the operations defined in the object model without indicating where these operations reside or how they are implemented. The functions or operations or transformations, as they are often called, are depicted by ovals with the name of the transformation inscribed in them, preferably as a verb phrase. The directed arcs connecting transformations represent data flows; the arc inscriptions define what flows between the transformations. Flows can converge (join) and diverge (branch). A unique feature of data flow diagrams is the inclusion of data stores which represent data at rest—a data base or a buffer. Stores are connected by data flows to transformations, with an arc from a store to a transformation denoting that the data in the store is accessible to the transformation, while an arc from a transformation to a store indicates an operation (write, update, delete) on the data contained by the store. Entities that are external to the system, but with which the system interacts, are called terminators or actors. The arcs connecting the actors to the transformations in the data flow diagram represent the interfaces of the system with the external world. Clearly, data flow diagrams can be decomposed hierarchically, in the same manner that the IDEF0 diagram was multileveled. While data flow diagrams have many strengths such as simplicity of representation, ease of use, hierarchical decomposition, and the use of stores and actors, they also have weaknesses. The most important one is the inability to show the flow of control. For this reason, enhancements exist that include the flow of control, but at the cost of reducing the clarity and simplicity of the approach.

The Dynamics View

The dynamics view in OMT is similar to the one in structured analysis—state transition diagrams are used to show how events change the state of the system. The rules that govern the operations of the system are not shown as an independent model, but are integrated in the dynamics model. A final construct that describes the trajectories of the system using events and objects is the event trace. In this diagram, each object in the object view is depicted as a vertical line and each event as a directed line from one object to another. The sequencing of the events is depicted from top to bottom, with the initiating event as the topmost one. The event traces characterize behaviors of the architecture; if given, they provide behavioral requirements; if obtained from the executable model, they indicate behavior.
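A minimal data-structure sketch of the functional-view constructs described above (transformations, data flows, stores, and actors) follows; the node and flow names are hypothetical, and the fragment is only meant to show how actor-connected flows correspond to the system's external interfaces.

```python
# Minimal sketch of data flow diagram constructs: transformations, flows, stores, actors.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str            # "transformation" | "store" | "actor"

@dataclass
class Flow:
    source: str
    target: str
    data: str            # arc inscription: what flows between the nodes

@dataclass
class DataFlowDiagram:
    nodes: List[Node] = field(default_factory=list)
    flows: List[Flow] = field(default_factory=list)

    def interfaces(self) -> List[Flow]:
        """Flows touching an actor are the system's interfaces with the external world."""
        actors = {n.name for n in self.nodes if n.kind == "actor"}
        return [f for f in self.flows if f.source in actors or f.target in actors]

dfd = DataFlowDiagram(
    nodes=[Node("Operator", "actor"),
           Node("Validate Request", "transformation"),
           Node("Request Log", "store")],
    flows=[Flow("Operator", "Validate Request", "request"),
           Flow("Validate Request", "Request Log", "validated request")],  # write to the store
)
print([f.data for f in dfd.interfaces()])   # ['request']
```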
The Executable Model

The three views, when enhanced by the rule model embedded in the state transition diagrams, provide sufficient information for the generation of an executable model. Colored Petri nets can be used to implement the executable model, although the procedure this time is not based on the functional model. Instead, the object classes are represented by subnets that are contained in "pages" of the hierarchical colored Petri net. These pages have port nodes for connecting the object classes with other classes. Data read through those ports can instantiate a particular class to a specific object. The operations of the pages/object subnets are activated in accordance with the rules and, again, the marking of the net denotes the state of the system. Once the colored Petri net is obtained, the evaluation phase is identical to that of structured analysis. The same analytical tools (invariants, deadlocks, occurrence graphs) and the same simulations can be run to assess the performance of the architecture.
UML

More recently, the Unified Modeling Language (UML) has been put forward as a standard modeling language for object-oriented software systems engineering (22). The language incorporates many of the best practices of industry. What is particularly relevant to its future extension beyond software systems to systems engineering is the inclusion of a large number of diagrams or views of the architecture. It includes use cases, which describe the interaction of the user with the system; class diagrams, which correspond to the object view in OMT; interaction diagrams, which describe the behavior of a use case; package diagrams, which are class diagrams that depict the dependencies between groups of classes; and state transition diagrams. The proposed standardization may provide the necessary impetus for developing system architectures using object-oriented methods.

SUMMARY

The problem of developing system architectures, particularly for information systems, has been discussed. Two main approaches, the structured analysis one with roots in systems engineering, and the object oriented one with roots in software system engineering, have been described. Both of them are shown to lead to an executable model, if a coherent set of models or views is used. The executable model, whether obtained from the structured analysis approach or the object oriented one, should exhibit the same behavior and lead to the same performance measures. This does not imply that the structure of the colored Petri net will be the same. Indeed, the one obtained from structured analysis has a strong structural resemblance to the IDEF0 (functional) diagram, while the one obtained from the object oriented approach has a structure similar to the object view. The difference in the structure of the two models is the basis for the observation that the two approaches are significantly different in effectiveness depending on the nature of the problem being addressed. When the requirements are well defined and stable, the structured analysis approach is direct and efficient. The object oriented one requires that a library of object classes be defined and implemented prior to the actual design. If this is a new domain, there may not exist prior libraries populated with suitable object classes; they may have to be defined. Of course, with time, more object class implementations will become available, but at this time, the start-up cost is not insignificant. On the other hand, if the requirements are expected to change and new technology insertions are anticipated, it may be more effective to create the class library in order to provide the systems engineering team with the requisite flexibility in modifying the system architecture.

BIBLIOGRAPHY

1. E. Rechtin, Architecting Information Systems, Englewood Cliffs, NJ: Prentice-Hall, 1991.
2. E. Rechtin and M. Maier, The Art of Systems Architecting, Boca Raton, FL: CRC Press, 1996.
3. E. Rechtin, The art of systems architecting, IEEE Spectrum, 29 (10): 66–69, 1992.
4. E. Rechtin, Foundations of systems architecting, J. NCOSE, 1: 35–42, 1992.
5. D. N. Chorafas, Systems Architecture and Systems Design, New York: McGraw-Hill, 1989.
6. A. P. Sage, Systems Engineering, New York: Wiley, 1992.
7. A. H. Levis, Lecture notes on architecting information systems, Rep. GMU/C3I-165-R, Fairfax, VA: C3I Center, George Mason Univ.
8. D. A. Marca and C. L. McGowan, Structured Analysis and Design Technique, New York: McGraw-Hill, 1987.
9. E. Yourdon and L. Constantine, Structured Design, New York: Yourdon Press, 1975.
10. P. Ward and S. Mellor, Structured Development of Real-time Systems, New York: Yourdon Press, 1986.
11. T. DeMarco, Structured Analysis and Systems Specification, Englewood Cliffs, NJ: Prentice-Hall, 1979.
12. C. Gane and T. Sarson, Structured Systems Analysis: Tools and Techniques, Englewood Cliffs, NJ: Prentice-Hall, 1978.
13. A. Solvberg and D. C. Kung, Information Systems Engineering, New York: Springer-Verlag, 1993.
14. G. L. Sanders, Data Modeling, Danvers, MA: Boyd & Fraser, 1995.
15. E. Yourdon, Modern Structured Analysis, Englewood Cliffs, NJ: Yourdon Press, 1989.
16. R. McLeod, Jr., Systems Analysis and Design, Fort Worth, TX: Dryden, 1994.
17. K. Jensen, Coloured Petri Nets, New York: Springer-Verlag, 1992.
18. A. P. Sage, Object oriented methodologies in decision and information technologies, IEEE Trans. Syst., Man, Cybern., SMC-19: 31–54, 1993.
19. G. Booch, Object-oriented Analysis and Design, Redwood City, CA: Benjamin/Cummings, 1994.
20. E. V. Berard, Essays on Object-oriented Software Engineering, Englewood Cliffs, NJ: Prentice-Hall, 1993.
21. J. Rumbaugh et al., Object-oriented Modeling and Design, Englewood Cliffs, NJ: Prentice-Hall, 1991.
22. M. Fowler and K. Scott, UML Distilled, Reading, MA: Addison-Wesley, 1997.
ALEXANDER H. LEVIS George Mason University
Wiley Encyclopedia of Electrical and Electronics Engineering
Systems Engineering Trends
H. Raghav Rao and S. Park, State University of New York, Buffalo, NY
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7115
Article Online Posting Date: December 27, 1999
SYSTEMS ENGINEERING TRENDS

Systems engineering as a discipline has existed for about a half century; it is considered to have originated in the blending of the theoretical foundations of systems science, operations research, and the World War II production experience. In its early stages, the concerns of systems engineering were how to engineer, conceptualize, develop, and evaluate systems using operational tools and methods. Because of the necessity to adapt to advanced and rapidly changing technology, systems engineering has also started to include a managerial role. In this article, we will discuss the past approaches to systems engineering, the current focus of systems engineering, and the emerging technology that can be applied to systems engineering, particularly from an information systems technology perspective.

WHAT IS A SYSTEM?

What is a system? Ackoff (1) briefly defined a system as a set of interrelated elements. According to this definition, a system has more than one element and can be treated as one entity or set. The word interrelated implies that a system means more than a collection of elements. From the perspective of interrelatedness or mutual interaction, O'Connor (2) points out the differences between a system and a heap. For example, simply adding pieces to a system or cutting a system in half does not make a redoubled system or two smaller systems, whereas two divided smaller heaps have the same properties as the original heap. This is because of the properties of systems known as emergent properties; from the mutual interaction of the elements of a system there arise characteristics which cannot be found as characteristics of any of the individual elements. Examples of systems, moving from the least complex and smallest to most complex and largest, are as follows: cell, organ, person, community, state, nation, world, solar system, galaxy, and universe.

Classes of Systems

There are many ways to classify systems. We discuss two of the most typical and useful classifications here. The first deals with the categories of closed and open systems. The second deals with the categories of systems based on the theory of evolution. A closed system is a system that does not need to interact with its environment to continue to exist; that is, it tends to be self-contained. As a result, the interactions of elements tend to be more stable and predictable. Examples are most mechanical systems. It is important to note here that it is an accepted principle that no system can continue to operate well without interacting with its environment. However, it is possible to treat a system as a closed system for study or design when there is a situation where inputs and outputs are known, defined, and predictable. Open systems, on the other hand, are organic and must interact with their environment in order to maintain their existence. In an open system, both internal and external elements are of concern. Examples of open systems are business organizations. An open system is adaptive and self-organizing in that it can change its internal organization in response to changing conditions. (A system that fails to do so is a malfunctioning system—these are be-
yond the scope of this article.) In other words, it is one of the purposes of systems engineering to guide a system to work properly. Bellinger (3) provides another classification of systems based on the theory of evolutionary hierarchy. These are the parasitic system, prey–predator system, threat system, exchange system, integrative system, and generative system. In a parasitic system, an element that positively influences another is in turn influenced negatively by the second. In a prey–predator system, the elements are essentially dependent on each other from the perspective that the existence of one element determines the existence of the other element. If all prey die out, the predators will also die. In a threat system, for example, the United States–Soviet Union Arms Race, one element's actions are contingent on the actions of the other. The threat of action by each element is an essential deterrent to the action of the other element. An exchange system is a system in which elements of the system provide goods and services to other elements in exchange for money or other goods and services. An integrative system is a system in which elements work together to accomplish some common desired objective or goal. A generative system is a system where, for example, people come together to create something neither of them had any idea about when they began.

Complex System

We often refer to a large system as a complex system because it has many elements and interactions which increase in an exponential manner as the number of elements increases. There are some important points that are needed to clearly describe a complex system, so that they can serve as a basis for any investigation of a complex system. First, a complex system consists of many independent components. Second, these components interact locally in the system. Only two components interact with each other at a time. However, this does not exclude the possibility of interaction with a third (or more) component within very short time frames. Third, the overall behavior is independent of the internal structure of the components. It is possible to have multiple systems that perform equally regardless of their internal structure. That is, two different systems with the same emergent properties can exist. Thus, we are able to observe a complex system without the knowledge of individual components. (As we shall see later, these characteristics of complex systems map easily onto the object-oriented problem-solving approach.)

Systems Approaches

The systems approach is one of the forms of methodological knowledge and is essentially an interdisciplinary approach. Three approaches that had a profound influence on systems theory are the approach of general systems theory founded by Ludwig von Bertalanffy (4), the cybernetics approach founded by Norbert Wiener (5), and the systems dynamics approach founded by Jay W. Forrester (6). Among these, general systems theory and cybernetics are the main streams that have arguably influenced systems research and development the most. First, general systems theory (or systems science) (7) is the "transdisciplinary study of the abstract organization of phenomena, independent of their substance, type, or spatial or temporal scale of existence." Bertalanffy emphasized open-
ness of system, interaction with the environment, and evolution resulting from emergent properties. He considered systems as mathematical entities and used mainly mathematical methods to describe and classify systems. Thus, systems theory is often criticized because of its abstractness, but its direction toward interdisciplinary study and unity of science is considered to be one of the important aims by scientists in many areas related to engineering. The second approach, cybernetics, is defined by Wiener (5) as "the study of control and communication in the animal and the machine." Wiener focused on the importance of maintenance of system parameters dealing with control and communication (information systems). In particular, homeostasis or adaptation and interaction with system and environment was his concern. In fact, cybernetics and systems theory deal with the same problem—that is, the system as a whole instead of as a collection of parts. The major difference is that cybernetics focuses more on the functional side of system characteristics (mostly on self-regulation), whereas systems theory focuses more on the structural side of system characteristics (mostly on relations between parts). While these systems approaches produced theoretical constructions mainly in terms of mathematical advances, systems engineering arose as a result of practical applications of systems approaches. Eventually, scientific knowledge obtained from these systems approaches contributed to the theoretical basis of systems engineering, which can be considered a technology rather than a science.
SYSTEMS ENGINEERING

What is Systems Engineering?

As systems became larger and more complex, the responsibility for systems design could not be conferred on one person or a few people in a group. The principle that was applied to solve this was division of labor; this principle was applied to systems design by decomposing a large system into smaller subsystems or components, if necessary. After the subsystems were designed, they were combined together to make a complete system. Such efforts have been successful in many ways. New systems pursuing various goals have been developed, with one of the major successes being the impressive systems development project of trips to the moon. Dealing with the process related to decomposition and combination toward efficiency and effectiveness was the systems engineer's job. Yet, there was another problem that had to be considered. Simply connecting together individual subsystems does make a system, but this possibly haphazard system cannot guarantee a working system and sometimes results in the system exhibiting counterproductive behavior with respect to the goal. Chase (8) pointed out this aspect very clearly: "Systems engineering deals with the process of selecting and synthesizing the application of the appropriate scientific and technological knowledge in order to demonstrate that they can be effectively employed as a coherent whole to achieve some stated goal or purpose." Sage (9,10) also notes that systems engineering is a management technology that emphasizes the interaction between science, the organization, and its environment, with information serving as a catalyst that facilitates the interactions.
In order to deal with knowledge concerning systems engineering, we need to understand the three perspectives of systems engineering. As is often the case, we may define systems engineering according to structure, function, or purpose. In Table 1, we adapt several pertinent definitions of systems engineering from Refs. 9 and 10. Throughout this article, we will use three hierarchical levels of systems engineering, those that can be derived from functional definitions, structural definitions, and purposeful definitions, respectively; these in turn give rise to systems engineering methods and tools, systems methodology, and systems management.

Three Levels of Systems Engineering

The first level, systems engineering methods and tools, can be considered to be the lowest level of systems engineering. At this level, product-oriented systems engineering approaches are used. Most of the operational methods and tools at this level were developed throughout the early stage of systems engineering within operations research and systems science. The combination of these methods and tools contributed to the development of systems of high quality and effective costs. This level supports the process level of systems engineering, systems methodology, which in turn supports systems management, which refers to the strategic level. Nowadays, due to the effect of computers and information technology, systems engineering faces the new challenge of integration of its operational tools and methods with automated tools such as computer-aided software engineering (CASE), computer-aided design (CAD), and computer-aided engineering (CAE). The second level, systems methodology, is the process-oriented level. In this perspective, a system is often achieved through appropriate systems development life cycles, which will be discussed later. So far the dominant methodologies for system analysis and design have ranged from traditional systems analysis and design during the 1960s, to structured analysis and design in the mid-1970s, to information engi-
Table 1. Definitions Used in Systems Engineering

Structural: Systems engineering is management technology that can be used to assist clients through the formulation, analysis, and interpretation of the impacts of proposed policies, controls, or complete systems, based upon the perceived needs, values, and institutional transactions of stakeholders.

Functional: Systems engineering is an appropriate combination of theories and tools, carried out through the use of a suitable methodology and the set of systems management procedures, in a useful setting appropriate for the resolution of real-world problems that are often of large scale and scope.

Purposeful: The purpose of systems engineering is information and knowledge organization that will assist clients who desire to develop policies for management, direction, control, and regulation activities relative to forecasting, planning, development, production, and operation of total systems to maintain overall integrity and integration as related to performance and reliability.

Adapted from Sage (9).
neering in the 1980s, and to object-oriented analysis and design from the mid- to late 1990s. In addition to systems development life-cycle methodology, other important aspects of the systems engineering process at this level are quality assurance, configuration control, and structural economic analysis. The highest level, systems management, is at the organizational or strategic level. By definition, systems management provides products or processes with technical and administrative direction and control (10). The functions of systems management include planning, allocating resources, organizing, directing, and controlling work. Thus, systems management comprises the technical direction and the efforts needed for management of systems definition, development, and deployment. Organizational environment, organizational cultures, strategic quality, strategic cost and effectiveness, process reengineering, and process maturity are some of the concerns of systems management. The pursuit of speed and betterment and the quest for implementing projects inexpensively are some of the issues that lead systems engineering to focus on organizational and strategic levels based on the process-oriented view of systems.

Systems Engineering Life Cycles

Similar to plants, animals, and humans, systems have a life cycle, and they evolve to the next generation by changing over time and adapting to their environment. From a systems engineering point of view, life cycle is defined as "the scope of the system or product evolution beginning with the identification of a perceived customer need, addressing development, test, manufacturing, operation, support and training activities, continuing through various upgrades or evolutions, until the product and its related process are disposed of" (11). The use of life cycles may also be considered as an adoption of functional decomposition to systems engineering tasks in order to identify a systems engineering process easily. A life cycle often requires clear understanding of what a systems engineering product is, as well as how efficiently it can be produced. A systems engineering product, often called end product, is not confined to hardware or software. For example, it can refer to personnel, services, or even processes themselves. A pilot may be produced in an air force academy, a new telephone service in a telephone company, or a new filtering process in an oil company. Depending on stakeholders' needs, the end product of a system can vary drastically; however, the life cycle of systems engineering tends to be similar in any system. The following basic steps and phases used in the systems engineering life cycle will explain this clearly. There are three fundamental steps (9) needed for problem resolution: issue formulation, issue analysis, and issue interpretation. Issue formulation is an effort to identify the needs to be fulfilled and the requirements to be satisfied, constraints that affect issue resolution, and generation of potential alternative courses of action. Issue analysis is performed to determine the impacts of the identified alternatives. In the issue interpretation step, by comparing alternatives, we are able to select one alternative for implementation or further study. These three steps are the ingredients for the phases of the systems engineering life cycle. Many life-cycle models have, in general, at least five phases—for example, initiation, design, development, implementation, and operation management.
Sage (9) identified three basic phases for simplicity, which are systems defini-
tion, systems development, and systems deployment. The systems definition phase entails requirements, specifications, and a conceptual design so that systems development can begin. At the stage of systems development, a logical and detailed design of the system, along with operational implementation and testing, are involved. The next is the system deployment phase, at which time the product is fielded and implemented in an operational setting for evaluation and modification and maintenance. This phase continues until another new system development initiative appears to substitute for the existing system.

Software Development Life-Cycle Models

Many software development life-cycle models are utilized in order to acquire and develop trustworthy software. Because the systems engineering discipline has had a close relationship with computer and information technology, many of the software systems engineering life-cycle models are based on the systems engineering life cycle. Conversely, the life-cycle models in software systems engineering are applicable to the general systems engineering life-cycle model. The first systems-engineering-based life-cycle model for software development was the one introduced by Royce (12), known as the waterfall model. Royce described the pattern of downward flow of information and development effort. Based on this model, many modified waterfall life-cycle models have appeared. Boehm (13) also defined seven phases for his waterfall life-cycle model as shown in Fig. 1(a). The Department of Defense (14) extended the approaches, as shown in Fig. 1(b), to split the systems development effort in two ways: hardware effort and software effort. These life-cycle models are typical examples of traditional waterfall models. The major advantage is the capability to manage complexity of system development by splitting it into activities or phases. Although the waterfall model is criticized as being slow, inflexible, and costly, later models do not seem to throw away the advantage of manageability, and we see that most of these also follow similar phases, as shown in Fig. 1. Sequential development, the method used in the waterfall model, is often not possible to carry out, especially when one or more phases needs to be repeated due to the possible omissions in each phase. In order to handle such cases, Boehm (15) created a somewhat different life-cycle model (called the spiral life cycle) that emphasizes the need for iterative development. Figure 2 depicts the comprehensive spiral model, and it shows how the formulation, analysis, interpretation 1, and interpretation 2 steps in each quadrant repeat until the final product, software, is developed (10). This representation of the life-cycle model contains the three steps and three phases in the systems life cycle discussed above. The advantage of this is that instead of having one step for interpretation, the spiral model dissociates interpretation of the software engineering plan for the next phase from the first interpretation by emphasizing the iterative and evolutionary characteristic of the life-cycle model.
SYSTEMS DEVELOPMENT

In a broad sense, systems development encompasses all tools and techniques, as well as efforts to manage them, in order
[Figure 1(a) shows the seven waterfall phases: system requirement; software requirement and specification; preliminary conceptual design; detailed design; code and debug; integrate and test; operations and maintenance. Figure 1(b) shows the DoD variant: system requirement analysis and systems design architecture, followed by parallel software (requirements analysis, preliminary design, detailed design, code/unit test/software integration, software test) and hardware (requirements analysis, preliminary design, detailed design, fabricate/unit test/hardware integration, hardware test) branches, then system integration and test, system test and evaluation, and production and deployment.]

Figure 1. (a) Waterfall software development life-cycle process model of Boehm. (b) Software development life cycle by US Department of Defense. [From Sage (10).]
to achieve an effective and efficient system. Systems development is related to the diverse areas of human systems, economic systems, and business systems. In most modern systems, information is the critical factor that allows the interactions among the various areas. In this section we focus on information systems in an organization. As we discuss systems development, we will see many systems development methodologies that have evolved. Research done by Mahmood (16), which compared the traditional systems life-cycle approach to the prototyping approach, showed that neither of the methods is preferred unanimously by both the system designer and system user. Some methods performed better in some areas than in others, thus implying that methods should be selected depending on project, environment, and decision characteristics. In a comprehensive article, Wetherbe and Vitalari (17) discussed key issues governing systems development. They conducted a survey about key issues that included philosophy, project management, and tools and techniques. Forty-two directors of systems development were polled. Most responded that they were strongly moving toward the new trends in those issues (presented in Table 2). These trends in systems development philosophy reflect the characteristics of strategic rather than operational, process rather than product, and distributed rather than central. In the area of tools and techniques, there is much emphasis on new methodologies such as object-oriented methods, joint application development (or joint application design), and rapid application development. Each methodology gives us general guidelines about which steps to take and how to model—that is, provide the overall framework that enables the system designer to organize problem-solving work. Thus, methodology is critical when designing a large and complex system.
[Figure 2 shows the spiral arranged in four quadrants. Quadrant 1: Formulation (identify needs, objectives, constraints, and alterables). Quadrant 2: Analysis (identify alternatives and risks, with risk analysis in each cycle). Quadrant 3: Interpretation (develop and evaluate). Quadrant 4: Interpretation (plan for the next phase of the spiral life cycle). Successive cycles move from Prototype 1 through Prototype 4 to an operational prototype, producing artifacts such as the software requirements, life-cycle, and management control plans, the software development plan, conceptual design, requirements validation, detailed design, design validation and verification, code, unit test, integrate and test, acceptance testing, and implementation, together with the final development, integration, and test plan, until software development is completed.]
Figure 2. Spiral model for software systems engineering. [From Sage (10).]
Table 2. Key Issues in Systems Development (The Old → The New)

Systems development philosophy: Operational efficiency → Strategic/customer-driven; Automation → Business reengineering; Quality control → Total quality; Cost justification → Value added; Charge out → Allocation; FIFO/bottom-up → Information architecture; Mainframes → Multiple platforms/networking; User/IS dependency → End-user self-sufficiency.

Project management: Multiyear project → Time box; Matrix management → Small self-directed teams; Project control → General contractor model.

Tools and techniques: Interviews → Cross-functional JADs; Life-cycle methodologies → Projection prototyping; CASE → I-CASE; Data dictionaries → Repositories; Code-level maintenance → Model-level maintenance; Code redundancy → Reuse; Contracting → Outsourcing.

FIFO: first in, first out; JADs: joint application designs; CASE: computer-aided software engineering; I-CASE: integrated systems for computer-aided software engineering; IS: Information Systems. Adapted from Wetherbe and Vitalari (17).
Whitten and Bentley (18) provide an in-depth definition of methodology incorporating various aspects that can occur in systems development. Methodology is the physical implementation of the logical life cycle that incorporates (1) step-by-step activities for each phase, (2) individual and group roles to be played in each activity, (3) deliverables and quality standards for each activity, and (4) tools and techniques to be used for each activity. Thus, it is a set of tools and techniques with a systematic description of the sequence of activities, and it also includes systems management activities such as configuration management and quality assurance issues in each phase of the system development life cycle. Most literature focuses on two approaches for systems development: traditional systems development and object-oriented systems development. For example, Dewitz (19) points out that traditional systems development focuses on what a system does (i.e., on the verbs that describe the system), while object-oriented systems development focuses on what a system is made of (i.e., on the nouns that describe the system). Since object-oriented development is an emerging trend, it will be covered later and we start with the traditional systems development. Traditional systems development began with the framework that focuses on the functional decomposition emphasizing the process or functions that a system performs. The framework of waterfall systems development methodology is valid with minor modifications across most of the methodologies. For example, Whitten and Bentley (18) defined five phases for systems development: systems planning, systems analysis, systems design, systems implementation, and systems support. Two major general methodologies for business information systems are (a) structured analysis and design by DeMarco (20) and Yourdon (21) and (b) information engineering by Martin (22). Structured analysis and design methodology emphasizes processes in systems development and is often referred to as the data flow modeling methodology. In structured analysis and design methodology, models such as system flowchart and hierarchical input–process–output chart (HIPO) models, even before the use of CASE tools, clearly contributed to systems development since they played
a role of replacing inadequate English texts with graphical views to identify system functions. In order to cope with large systems, modern structured analysis and design (22) has emerged with the models of data flow diagram, data dictionary, entity-relationship diagram, structured English, and structure chart. Information engineering is often referred to as data modeling methodology because of the emphasis on data. In addition, information engineering is process-sensitive and has the characteristic that it can be extended to strategic plans. This characteristic has resulted in the popular use of this method in many business areas. This methodology consists of the four phases of information strategy planning, business area analysis, systems design, and construction. Traditional waterfall systems development methodology based on the life cycle, as shown in Fig. 1, has three major problems (16). First, systems development delays the delivery of systems to users until the last stages of system development. Second, it requires specified systems outputs at the outset because an ambiguous system output may result in redevelopment, which is costly and time-consuming. Third, it creates communications problems because system users are often involved only in the systems requirement phase and because communication between nonadjacent phases would be difficult in case those phases were performed by different functional departments or groups. The effort to solve these problems led to the approaches of joint application design, prototyping, and rapid application development.

Joint Application Design

Joint application design was originally developed at IBM to reduce communication problems between designers and users through the use of structured workshops, called JAD sessions. Additional benefits such as early detection of design problems, reduced time and effort, and user satisfaction made JAD applicable to many methodologies, and thus many versions of JAD are available to system designers. JAD includes complete specifications throughout five phases (23) of project definition, research, preparation, the JAD session, and the final document. Table 3 shows the five phases, the steps in
Table 3. Five Phases for Joint Application Design

Project definition. Steps: interview management; produce the management definition guide. Resulting output: management definition guide.

Research. Steps: get familiar with the existing system; document work flow; research data elements, screens, and reports; prepare the session agenda. Resulting outputs: work flow, preliminary specifications, JAD session agenda.

Preparation. Steps: prepare the working document; prepare the JAD session script; prepare overheads, flip charts, and magnetics. Resulting outputs: working document, JAD session script, overheads, flip charts, magnetics.

The JAD session. Step: hold the session. Resulting output: completed scribe forms.

Final document. Steps: produce the final document; participants review the document; hold the review session; update and distribute the final document. Resulting outputs: JAD design document, signed approval form.

Work days: 1–3, 1–3, 1, 1–4, 1–5, 2–4, 1, 2, 3–5, 1, 3–5, 3–10, 2, 1, 2.

JAD: joint application design. Adapted from Wood and Silver (23).
each phase, the time required, and the resulting outputs. Since JAD can be used only at the initiation, analysis, and design phases in the systems development life cycle, it is often combined with other methods to fit all phases of the systems development life cycle. The selection of the right people for the JAD team is the most important factor for the success of the JAD session. The JAD team includes an executive sponsor, JAD leader or facilitator, scribe(s), and full-time participants. The critical role of the executive sponsor is to define the purpose, scope, and objective of the project. During the session, the executive sponsor should at least be accessible by phone to resolve possible cross-departmental conflicts that might delay the JAD session. The JAD leader should be an impartial business person or professional who has experience in group facilitation, JAD methodology, group dynamics, and the products being developed. The JAD leader is responsible for making an agenda by gathering information on workflow and system requirements before the session. During the session, the role of the JAD leader includes that of impartial referee, discussion leader, and negotiator. Once the session is over, the JAD leader's concern moves to creation, review, and distribution of the final document. A scribe's role is critical to a successful JAD because the notes taken by the scribe evolve into the final document; the scribe is usually selected from management information systems (MIS) personnel. The development of convenient word processors and CASE tools made it easier for the scribe to apply notes directly to the final document. JAD participants include users and MIS people who are involved in making decisions about the system design. Users range from end-users to supervisors who can provide valuable input on design issues and prototypes, specify training needs and acceptance criteria, and ensure that the system captures critical success factors. As observers, MIS personnel attend the session hoping to obtain a clear understanding of user needs and systems requirements, which will be reflected in actual system development. JAD is appropriate for generative systems, since traditional waterfall methodology requires specific goals before the initiation of systems development.

Prototyping

Prototyping is another conceptual technique that can be applied to many systems development methodologies. Conceptually, prototyping approaches utilize two enhancements to traditional systems development. One is incremental development, and the other is evolutionary development. Incremental development, as shown in Fig. 3(a), is similar to the spiral life cycle, which tries to correct the problems of the traditional waterfall life-cycle model. In this approach the product is delivered at the end of each iteration with add-ons from previous products. The first product is made of only a kernel (10) that has minimal functions of a system, called a prototype; and as iteration goes on, the product advances toward full functionality. Evolutionary development, as shown in Fig. 3(b), is similar to the incremental model, but the product in each cycle is a complete product in terms of functionality. Another difference is that fundamental changes in the product after each cycle are possible. This model is often implemented in many object-oriented systems development scenarios where object classes can be easily redesigned.
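To make the contrast concrete, the short sketch below mimics the two delivery patterns described above: incremental cycles that extend a kernel with add-ons, and evolutionary cycles in which each delivery is a complete, possibly redesigned, product. The function names and example contents are illustrative only.

```python
# Toy contrast of incremental vs. evolutionary delivery across life cycles.
def incremental_deliveries(kernel, add_ons):
    """Each cycle delivers the previous product plus new add-ons."""
    product, deliveries = list(kernel), []
    for add_on in add_ons:
        product = product + [add_on]
        deliveries.append(list(product))     # partial functionality until the last cycle
    return deliveries

def evolutionary_deliveries(redesigns):
    """Each cycle delivers a complete product; fundamental changes between cycles are allowed."""
    return [list(design) for design in redesigns]

print(incremental_deliveries(["kernel"], ["a", "b", "c"]))
print(evolutionary_deliveries([["v1 full product"], ["v2 redesigned product"]]))
```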
Rapid Application Development

Rapid application development (RAD) is a variation of incremental and evolutionary development models that use the prototype approach. RAD (24) is defined as a systems development methodology that employs evolutionary prototyping and incremental prototyping techniques to deliver limited functionality in a short timeframe. The timeframe, called a timebox, is usually 3 to 6 months, and is nonextendable. Incremental development models are applied within the timebox. The major difference between JAD and RAD is that RAD covers all the phases in systems development. Through JAD sessions, RAD balances the use of prototyping approaches with modeling and continual feedback and determines what will be achieved for each timebox. Similar to prototyping, RAD focuses on the most important system functions at the beginning of an iteration. The philosophy of the "timebox" approach is that it is better to have a working system of limited functionality quickly than to wait years for a more comprehensive system (25). The goals of systems development—fast, better, and cheaper development—are realized in RAD. Since the measure of quality often depends on the system user, the request for change by users in RAD activities always shapes a high-quality system. In order to achieve cost goals, many software tools are heavily used throughout development stages. Examples are the use of graphical user interface (GUI), frontware, fourth-generation languages (4GLs) including relational database management system (RDBMS), and various CASE tools.

Methods, Models, and Tools

The terms methodology, method, model, tool, and technique are considerably interrelated. Methodology in systems development, as we defined earlier, is the most overarching concept, but tends to be somewhat conceptual. It often comes with various models, tools, and techniques in order to solve problems arising in systems development. In general, methodology refers to a general principle or theory to be used in problem solving regarding a specific area of study. Hence, we may refer to traditional (waterfall) methodology, structured methodology, information engineering methodology, and so on. In our definition, a method refers to a specific instance of a methodology. In that sense, many methods may exist in one methodology, or a methodology may be composed of many methods with a common principle. A model, a representation of the real world, or a system in the process of development can be a method bundled together with tools or techniques. Again, this method (or model) can be an exemplar of a certain methodology, and, in turn, from this methodology many revised methods can appear later. The distinction between tool and technique is not well defined due to their close relationship with each other; however, following the definition of Hackathorn and Karimi (26), we refer to the term technique as a procedure for accomplishing a desired outcome and the term tool as an instrument for performing a procedure. A data flow diagram is a good example of a technique, and the software to draw a data flow diagram can be considered as a tool. CASE tools, for example, were originally considered as tools. However, nowadays CASE tools tend to be called method or methodology as they grow to handle integrated system development as well.
[Figure 3(a) shows four successive definition-development-deployment life cycles (life cycle 1 through life cycle 4) delivering Build 1, Build 2, Build 3, and Build 4; Figure 3(b) shows successive life cycles in which a kernel is grown by adding components a, b, c, and d.]

Figure 3. (a) Evolutionary life-cycle model and (b) iterative life-cycle model. [Adapted from Sage (10).]

Hackathorn and Karimi (26) conducted a comprehensive research survey comparing many information systems development methods. (They used the term method referring to CASE tools.) Even though their research covers only traditional systems development methods developed up to 1986, it provides a framework for comparing current systems development methods and shows the trends of methods, so that we can select an appropriate method in real practice. Two dimensions, breadth and depth, were utilized for analysis, and twenty-six systems development methods were located in a two-dimensional space. For the breadth dimension, they used
five phases of systems development, and for the depth dimension they used the terms tool, technique, and methodology, depending on how practical or conceptual the method was. Not surprisingly, their conclusion was that no method was perfect for the whole range of breadth and depth, hence employment of a set of methods to cover the whole life-cycle phase was recommended. Further, they described the three stages of method evolution. The first consists of tools and techniques for application development that corresponds to lower or back-end CASE tools focused on systems implementation such as code generator or application generator. The
second stage shows two trends: broader systems development techniques and the emergence of methodologies for organizational analysis, which correspond to upper or front-end CASE tools. The third stage shows the emergence of information engineering methodology to link the first two stages. The trends tend to merge lower CASE with upper CASE.
CONFIGURATION MANAGEMENT, METRICS, AND QUALITY ASSURANCE

So far, we have dealt with the information systems development aspect of systems engineering from the viewpoint of the life-cycle approach. This overall framework is of direct use in developing systems, but other issues such as quality assurance, configuration management, and metrics throughout the life cycle are important in supporting life cycles. Quality assurance, configuration management, and metrics are closely related to each other. Quality assurance is defined as a planned and systematic means for assuring management that defined standards, practices, procedures, and methods of the process will be applied. We can differentiate quality assurance from quality control: quality assurance occurs during all the phases of the systems development life cycle with a focus on system processes, whereas quality control occurs at the end of the systems development life cycle with the focus on the end product or system. Hence, quality assurance requires, and is often achieved through, configuration management. Configuration management is defined (10) as a systems management activity that identifies needed functional and nonfunctional characteristics of systems or products early in the life cycle, controls changes to those characteristics in a planned manner, and documents system changes and implementation status.

Configuration Management

While expensive, large, and complex systems are developed through systems engineering, it is likely that changes in product and process take place at the same time. Configuration management started with the idea of dealing with the many problems that come from these changes. Back in the 1950s during the arms race, the Department of Defense (DoD) had many supporting and associate contractors, yet had weak control and minimal documentation of changes. DoD found that only the original manufacturer could supply systems or components because of inaccurate information resulting from changes in product design. Starting with ANA (Army, Navy, and Air Force) Bulletin No. 390, which gave industry uniform guidelines for proposing aircraft changes, many government organizations such as the Air Force, Army, Navy, and NASA published their own documents on the techniques of configuration management. Later, DoD incorporated the guidelines into the standard MIL-STD-483 (27), defining configuration management procedures and policies. An alternate standard available for configuration management is the IEEE software configuration management standard (28), which describes what activities are to be done, how they are to be done, who is responsible for specific activities, and what resources are required. Major activities in configuration management, in both the DoD and IEEE standards, are grouped into four functions: configuration identification, configuration control, status accounting, and configuration audits and reviews.
Configuration identification activities identify names and describe physical and functional characteristics of the configuration items (CIs) to be controlled throughout the life cycle. A configuration item can be an element of the support environment as well as an intermediate subsystem or the final deliverable. For example, an operating system used in systems development can be a configuration item in software configuration management. Naming methods for CIs include serialization, labeling, version marking, and so forth. Configuration control activities request, evaluate, approve or disapprove, and implement changes to CIs, serving as the primary driver in configuration management. Configuration control does not require really new disciplines; instead, existing practices should be extended and systemized within the given policies and objectives of configuration management. For example, established committees like configuration control boards (CCBs) with an adequate level of authority to approve or disapprove change requests are usually recommended. Multiple levels of CCBs may be specified depending on the system or project complexity. For projects that are not complex in structure, a central CCB may be installed and assume responsibility over several projects. Configuration status accounting activities record and report the status of CIs to ensure traceability and tractability of the configuration baseline. A baseline is defined as an approved reference point for control of future changes to a product's performance, construction, and design. Thus, configuration status accounting activities should include information on what data elements in CIs are to be tracked and reported, what types of reports are generated, when those reports are generated, how the information is processed and reported, and how access to the status data is controlled. Configuration audits and reviews validate achievement of overall system or product requirements. Thus, configuration audits and reviews ensure the integrity of the various baselines by determining to what extent the actual CI reflects the required physical and functional characteristics. In addition to the four basic functions, the IEEE software configuration management standard addresses requirements for interface control and subcontractor-vendor control. These are intended to support the four functions by reducing the risk associated with items outside the scope of configuration management plans and items developed outside the project environment by contract. Configuration management will continue to be a major factor in the definition, development, and deployment of a system as long as changes are inevitable during the system life cycle. In particular, the use of configuration management is becoming an essential activity when developing a software system, where changes are more extensive and occur faster than in any hardware system.
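As a purely illustrative sketch of how the four functions might be recorded in practice, the following Python fragment tracks configuration items, change requests, and a status accounting report. The record fields, states, and report format are invented for this example and are not drawn from the DoD or IEEE standards.

```python
# Hypothetical sketch of configuration identification, control, and status accounting.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str                 # identification: e.g., "operating system"
    version: str              # naming by serialization or version marking
    baseline: str             # approved reference point for future changes
    change_requests: list = field(default_factory=list)

def submit_change_request(ci, description):
    """Configuration control: a request enters with status 'proposed'."""
    ci.change_requests.append({"description": description, "status": "proposed"})

def ccb_decision(ci, index, approve):
    """A configuration control board approves or disapproves a request."""
    ci.change_requests[index]["status"] = "approved" if approve else "disapproved"

def status_accounting_report(items):
    """Status accounting: report each CI and the disposition of its change requests."""
    for ci in items:
        counts = {}
        for cr in ci.change_requests:
            counts[cr["status"]] = counts.get(cr["status"], 0) + 1
        print(f"{ci.name} {ci.version} (baseline {ci.baseline}): {counts or 'no changes'}")

os_ci = ConfigurationItem("operating system", "2.1", "B1")
submit_change_request(os_ci, "apply security patch")
ccb_decision(os_ci, 0, approve=True)
status_accounting_report([os_ci])
```

Audit and review activities would then compare such records against the required physical and functional characteristics of each baseline.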
Metrics

Metrics are the instruments needed for quality assurance and configuration control, since without measurement of the product or process no one can say that an improvement has been achieved. Metrics are so important that a whole system is subject to failure if its metrics are not able to measure system quality or cost. It is important to identify where metrics should be used, the appropriate time for them to be applied,
and their purpose. Determining the object and moment to be measured is included in the task of measurement. Misuses of measures are often encountered, for example, confusing the scale (27), which can be classified into nominal scale (determination of equality), ordinal scale (determination of greater or less), interval scale (determination of equality of intervals or differences), and ratio scale (determination of equality of ratios). For example, matching the number of defects directly with the quality of a production system may not be appropriate when the interval or ratio scale is needed. Measurements are often classified into four categories (inactive, reactive, interactive, and proactive) (10), as listed in Table 4.

Table 4. Four Types of Measurement
Inactive: This denotes an organization that does not use metrics or does not measure at all, except perhaps in an intuitive and qualitative manner.
Reactive: This denotes an organization that will perform an outcome assessment and, after it has detected a problem or failure, will diagnose the cause of the problem and often will get rid of the symptoms that produce the problem.
Interactive: This denotes an organization that will measure an evolving product as it moves through various phases of the life-cycle process in order to detect problems as soon as they occur, diagnose their causes, and correct the difficulty through recycling, feedback, and retrofit to and through that portion of the life-cycle process in which the problem occurred.
Proactive: These measurements are designed to predict the potential for errors and to synthesize an appropriate life-cycle process that is sufficiently mature that the potential for errors is minimized.
Adapted from Sage (9).

Measurement is needed at every systems engineering level: systems tools and methods, systems methodology, and systems management. Although all types of measurement are possible at each level, we can see a correspondence between the types of measurement and the levels of systems engineering. At the systems methods and tools level, product-oriented measurements (for example, metrics in inspections or quality control) are often reactive, and the metrics are used after the product appears. At the systems methodology level, process-oriented measurements (for example, metrics in configuration management or operational quality assurance) have interactive characteristics and generally measure a system's functionality through the whole life cycle. At the highest level, systems- or process-management-oriented measurements (for example, metrics for strategic quality assurance or process improvement) are often proactive, so that process improvement will occur. It is true that the higher the level, the more difficult it is to get a clear metrics system, yet ISO9000 and the capability maturity model have done yeoman service in this area.

ISO9000 series standards are considered to be the most comprehensive quality assurance and management standards. They have been adopted by the European Community and many companies worldwide. The documents of the ISO9000 series (ISO9000, ISO9001, ISO9002, ISO9003, and ISO9004) have their own purposes. ISO9000 is an overview document of suggestions and guidelines. ISO9000-3 specifies quality management and quality assurance standards for software and provides 20 guidelines for the development, supply, and maintenance of software. For quality assurance over the general systems engineering life cycle, ISO9001 comprises design, development, production, installation, and servicing. ISO9002 explicitly focuses on production and installation of products and systems. ISO9003 deals with the standards for final inspection and test. Finally, ISO9004 presents a detailed list of what should be done instead of general suggestions or guidelines. Figure 4 illustrates the twenty requirements and their relationship to the ISO9000 family.

[Figure 4. The 20 requirements and their relationship to ISO9001, ISO9002, and ISO9003. Requirements such as management responsibility, quality management system, document and change control, product identification and traceability, inspection and testing, inspection/measuring/test equipment, inspection and test status, control of nonconforming product, handling/storage/packaging and delivery, quality records, training, and statistical techniques are required in ISO9003; ISO9002 adds contract review, purchasing, purchaser-supplied products, process control, corrective action, and internal quality audit; ISO9001 further adds design control and customer servicing. Adapted from Sage (10).]

Cost Metrics

A systems engineer's interest in metrics has two aspects: one is cost-related, and the other is quality-related. For the estimation of software cost, often dealing with estimations of effort and schedule, the most common metric is lines of code (LOC). There are many ways of measuring LOC. Some include comments and data definitions in the measurement, but others include only executable lines or only newly developed code.
Starting with the size of the software, there are many factors to be measured. For example, Boehm (28) grouped the factors measured in various cost estimation models into size, program, computer, personnel, and project attributes. The effort to generalize the process of cost estimation has resulted in a flood of models, some of which examine only the software product (e.g., Wolverton's software cost-effort model), while others examine the process (e.g., COCOMO by Boehm).

Wolverton's software cost-effort model (29) is based on expert judgment of software complexity, the model estimates for types of software, and the development difficulty. The types are categorized as control, I/O, pre/post-processor, algorithm, data management, and time critical. The difficulty, determined by whether the problem is old (O) or new (N) and whether it is easy (E), moderate (M), or hard (H), is categorized into OE, OM, OH, NE, NM, and NH. Thus, a 6 × 6 matrix for software type and software difficulty can be developed. To use this matrix, the software system is partitioned into modules i, where i = 1, 2, ..., n. The cost of the kth module then becomes

C(k) = S_k C_{t(k),d(k)}

where C_{t(k),d(k)} represents the cost per line of code for that particular type and difficulty of software, and S_k denotes a size estimate of the software, measured in the number of lines of uncommented code for the kth module. To get the total cost of producing the software system, we sum this cost over all modules to get

Total cost = Σ_{k=1}^{n} C(k) = Σ_{k=1}^{n} S_k C_{t(k),d(k)}

Wolverton's model can estimate the dollar-valued total cost based on the size, type, and difficulty of the software system. However, it is criticized because it gives little consideration to process-related factors, for example, the effect of the position in the software development life cycle on cost estimation. Too much reliance on one expert's experience could be another weakness of this model.

As opposed to the expert judgment approach, many empirical models based on regression have appeared for software cost estimation. Although it is a simple regression model, Nelson's cost estimation model (30) has often been utilized. It sets software system cost as the dependent variable and attributes or quantities of software cost as the independent variables. An alternate nonlinear model has been proposed in many cost estimation studies; the Bailey-Basili model (31) is one example. This model uses three basic estimator equations:

E = b + aS,   E = aS^b,   E = c + aS^b

where E denotes effort in man-months of programming and management time and S denotes the number of developed lines of source code with comments. The model parameters were determined to minimize the standard error estimate (SEE). The SEEs are obtained by summing the squares of the estimation error ratio (the difference between estimated effort and observed effort, divided by observed effort) over 18 projects. For example, the SEE for the third equation is

SEE = Σ_{i=1}^{N} [1 − (c + a S_i^b)/E_i]^2

Bailey and Basili attempt to improve on the estimator by calculating an error range based on software complexity factors. They categorize complexity factors from the 18 projects into three categories: total methodology (METH), cumulative complexity (CMPLX), and cumulative experience (EXP). At the next step, a least-squares estimation is used to calculate the coefficients in the regression model that regresses the effort ratio ER (the ratio between the actual effort expended and the amount predicted by the background equation) on the three software complexity factors:

ER = α + β_1 METH + β_2 CMPLX + β_3 EXP

Based on the ER value, the effort can now be adjusted to either

E_adj = (1 + ER_adj) E   or   E_adj = E / (1 + ER_adj)

according to the effect of software complexity. The adjusted effort can be interpreted as the effort for the life-cycle development. Because the Bailey-Basili model is based only on FORTRAN software products at the NASA Goddard Space Center, its database is very homogeneous.

Boehm (32) developed the constructive cost model (COCOMO) based on heterogeneous databases that include programs in FORTRAN, COBOL, PL/1, Jovial, and assembly language, ranging from 2000 to 1 million lines of code, exclusive of comments. His model has three forms of estimate: basic, intermediate, and detailed. Each form of the COCOMO model uses an estimate of the form

E = a S^δ M(x)

where M(x) represents an adjustment multiplier, which is a composite function of 15 cost drivers x_1 through x_15. In the basic COCOMO, M(x) = 1 for all x_i. For intermediate COCOMO, the adjustment multiplier is calculated as the product of the individual cost drivers: M(x) = m(x_1) m(x_2) ... m(x_15). In detailed COCOMO, phase-sensitive effort multipliers are introduced into the model; phase-sensitive effort components play an important role in reflecting process-related costs. The parameter values are not obtained from least-squares regression, as opposed to many other cost estimation models; they are obtained from the subjective opinion and experience of software experts and from the results of other cost estimation models.

There is a different approach to cost estimation that tries to overcome the lack of standardization of the LOC count used as a size measure. The function measure, originated by Albrecht (33), is more macro level than LOC, capturing information like the number of distinct input data items and the number of output screens or reports.
The possibility of estimating cost early in the life cycle, along with its availability to nontechnical project managers, makes this model useful. In the initial function point developments by Albrecht (33), four elements (external inputs, external outputs, the number of logical internal files, and the number of external inquiries) were representative of the functionality of a software product. Soon it was recognized that the number of external interface types should be added (10). To calculate the function count, the number of function points x_ij at each complexity level (low, average, high) is multiplied by the appropriate weight w_ij and summed over function point type and complexity level. This results in
FC = Σ_{i=1}^{3} Σ_{j=1}^{5} w_ij x_ij
where x_ij represents the number of function points of type j and weight level i. There are some problems with these models, as pointed out by Sage and Palmer (34): it is difficult to build a model independent of either user or developer, to measure cost factors correctly, to incorporate changes in technology, tools, and methods, and to assign process-based factors to software cost.
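To make the shape of these estimators concrete, the sketch below implements the generic forms discussed above (Wolverton's module sum, the COCOMO effort equation, and the function count). It is an illustration only: the rate matrix, cost-driver multipliers, function point counts, and weights are invented, and the COCOMO constants shown (a = 2.4, δ = 1.05, the published organic-mode values of basic COCOMO) should be treated as placeholders rather than as a calibration for any real project.

```python
# Illustrative only: all numeric values below are invented or placeholder constants.

def wolverton_total_cost(modules, cost_per_line):
    """Total cost = sum over modules k of S_k * C_{t(k),d(k)}.
    modules: list of (lines_of_uncommented_code, software_type, difficulty).
    cost_per_line: the type/difficulty matrix as a dict keyed by (type, difficulty)."""
    return sum(s * cost_per_line[(t, d)] for s, t, d in modules)

def cocomo_effort(kloc, a=2.4, delta=1.05, cost_driver_multipliers=()):
    """E = a * S^delta * M(x); with no multipliers this is the basic form (M(x) = 1),
    and supplying the fifteen m(x_i) values gives the intermediate form."""
    m = 1.0
    for mi in cost_driver_multipliers:
        m *= mi
    return a * (kloc ** delta) * m

def function_count(x, w):
    """FC = sum over weight levels i and function point types j of w_ij * x_ij."""
    return sum(w[i][j] * x[i][j] for i in range(len(x)) for j in range(len(x[0])))

modules = [(4000, "algorithm", "NE"), (9000, "I/O", "OM")]
rates = {("algorithm", "NE"): 28.0, ("I/O", "OM"): 11.0}        # dollars per line (invented)
print(wolverton_total_cost(modules, rates))                     # dollar-valued total cost
print(cocomo_effort(32, cost_driver_multipliers=[1.15, 0.91]))  # person-months
x = [[3, 5, 2, 1, 0], [4, 2, 1, 2, 1], [1, 0, 0, 1, 0]]         # counts by level i, type j (invented)
w = [[3, 4, 7, 3, 5], [4, 5, 10, 4, 7], [6, 7, 15, 6, 10]]      # weights w_ij (illustrative)
print(function_count(x, w))
```

The three functions make the trade-off visible: Wolverton prices lines directly, COCOMO converts size into effort through calibrated constants and cost drivers, and the function count sidesteps LOC entirely.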
Quality Metrics

Quality is so intangible that no one is able to measure it, whereas cost is expressed as a dollar value. However, by using quality metrics it is possible to say that "a system has high quality since it has not failed for 10 years." Thus, quality metrics such as the defect rate enable us to determine quality. Just as LOC is the base metric for cost-related software metrics, quality metrics are also based on LOC. Kan (35) gives us three categories: end-product quality metrics, in-process quality metrics, and maintenance quality metrics. Examples and possible metrics are summarized in Table 5.
Table 5. Software Quality Metrics
Product quality metrics:
- Mean time to failure: amount of time before encountering a crash.
- Defect density: number of bugs/KLOC (thousand lines of code).
- Customer-reported problems: PUM (problems per user month) = total problems that customers reported for a time period / total number of license-months of the software during the period.
- Customer satisfaction: percentage of very satisfied customers from customer survey data via the five-point scale: very satisfied, satisfied, neutral, dissatisfied, very dissatisfied.
In-process quality metrics:
- Phase-based defect removal pattern: bar graph of defect removal with the index of development phases.
- Defect removal effectiveness (DRE): DRE = (defects removed during a development phase / defects latent in the product) × 100.
- Defect density during testing: (number of bugs/KLOC) during testing.
- Defect arrival pattern during testing: weekly plotted cumulative defect rate during test.
Maintenance quality metrics:
- Fix backlog: BMI (backlog management index) = (number of problems closed during the month / number of problem arrivals during the month) × 100.
- Fix response time: mean time of all problems from open to closed.
- Percent delinquent fixes: (number of fixes that exceeded the fix response time criteria by severity level / total number of fixes delivered in a specified time) × 100.
- Defective fixes: number of defective fixes.
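The arithmetic behind several of these metrics is simple. The sketch below computes PUM, DRE, and BMI exactly as defined in Table 5; the input numbers are invented and serve only to exercise the definitions.

```python
# Invented numbers, used only to exercise the definitions in Table 5.

def pum(problems_reported, license_months):
    """Problems per user month."""
    return problems_reported / license_months

def dre(defects_removed_in_phase, defects_latent_in_product):
    """Defect removal effectiveness, as a percentage."""
    return 100.0 * defects_removed_in_phase / defects_latent_in_product

def bmi(problems_closed_in_month, problem_arrivals_in_month):
    """Backlog management index, as a percentage."""
    return 100.0 * problems_closed_in_month / problem_arrivals_in_month

print(pum(120, 4000))   # 0.03 problems per license-month
print(dre(45, 50))      # 90.0: most latent defects were removed in the phase
print(bmi(95, 100))     # 95.0: slightly more problems arrived than were closed
```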
Information systems development methodology alone cannot satisfy an organization's goal of quality improvement. As we have seen, as a result of the importance of user satisfaction in development methodology, which is incorporated into JAD and RAD, there is a big movement toward quality assurance that can be adapted to development methodology levels and ultimately to organizational levels. The Japanese philosophy about quality is characterized by the total quality control (TQC) system, which uses seven basic tools (36): checklist, Pareto diagram, histogram, scatter diagram, run chart, control chart, and cause-and-effect diagram. A checklist is a paper form with items to be checked; it is used to gather and arrange data easily for later use. A Pareto diagram is a frequency bar chart in which the x axis and the y axis usually represent the causes and counts, respectively. A histogram is similar to a Pareto diagram, but in a histogram the x axis represents the unit interval of a parameter (e.g., a time unit), in the order of which the frequency bars are shown. A scatter diagram shows the relationship between two variables by plotting points with respect to the x axis and y axis values; it is useful when a linear relationship exists between two variables. A run chart tracks the parameter of interest over time by using the x axis as time; it is very useful for capturing the process characteristics of a variable. A control chart is similar to a run chart, but it is often used to control outliers that fall outside the upper control limit (UCL) or lower control limit (LCL). The cause-and-effect diagram, also known as the fishbone diagram because of its shape, is used to show the relationship between a quality characteristic and the factors affecting that characteristic. These statistical quality control tools often play a role in operational quality assurance, but strategic quality assurance seemed to be missing in the American implementations. Hence, the American view of the Japanese quality implementation produced the term total quality management (TQM), which identifies and adds factors hidden in Japanese culture
(e.g., the human side of quality). As the strategic use of TQM increased, TQM became an aspect of the methodology of systems development. TQM (37) is defined as a structured system for satisfying internal and external customers and suppliers by integrating the business environment, continuous improvement, and breakthroughs with development, improvement, and maintenance cycles while changing organizational culture.

SYSTEMS MANAGEMENT

Today's major concern in systems engineering can be said to be systems management issues, especially process improvement. TQM is one of the efforts to provide guidelines for process improvement. Another effort is business process reengineering (BPR). Although the theme of these two is the same with regard to process management, differences between TQM and BPR (38) do exist and are presented in Table 6.
Table 6. Total Quality Management Versus Business Process Reengineering (TQM / BPR)
Level of change: incremental / radical
Starting point: existing process / clean slate
Frequency of change: one-time or continuous / one-time
Time required: short / long
Participation: bottom-up / top-down
Typical scope: narrow, within functions / broad, cross-functional
Risk: moderate / high
Primary enabler: statistical control / information technology
Type of change: cultural / cultural and structural
In addition to the TQM and BPR efforts for process improvement, the capability maturity model (CMM) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University is an example of an alternate initiative. In this section, we discuss BPR further, focusing on what it is and how to implement it, and subsequently discuss details of the CMM.

Business Process Reengineering

Business process reengineering (BPR) is defined by Hammer and Champy (39) as the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance. BPR needs radical change. Thus, many BPR advocates argue that an organization should think about business processes as if it were starting a new business in order to identify processes that would result in dramatic improvement. BPR often deals with the entire organization and requires a large amount of time. Davenport and Short's (40) five steps in process redesign comprise the first methodological approach: (1) develop business vision and process objectives, (2) identify processes to be redesigned, (3) understand and measure existing processes, (4) identify information technology levers, and (5) design and build a prototype of the process. Many consulting companies have developed BPR methodologies. While these are proprietary, Grover and Malhotra (41) provide a generic reengineering methodology, which is summarized in Table 7. The success of CIGNA Corporation (42), which saved more than $100 million, returned $2 to $3 in benefits for each $1 invested in reengineering, reduced operating expenses by 42%, improved cycle times by 100%, raised customer satisfaction by 50%, and achieved quality improvements of 75%, has increased the popularity of BPR. Most companies to date have thought about a BPR exercise. However, there are often innumerable obstacles, such as unrealistic scope and expectations, lack of management support, and resistance to change, that have caused a great number of BPR projects to fail. Such failures led to modification of the BPR concept, which had focused only on radical redesign to ensure dramatic improvement. The new term, business process change (BPC), reflects this future trend of BPR by keeping the focus on the importance of process but weakening the necessity for radical change. Although it is still a question whether BPR should be incremental or radical, BPR and TQM seem to be converging and assisting each other.
Table 7. A Generic Reengineering Methodology (key activities by phase)
Preparation: Evaluate organization and environment, recognize need, set corporate and reengineering goals, identify and motivate team, train team on reengineering concepts, develop a change plan; develop project scope, components, and approximate time frames.
Process-think: Model processes, model customers and suppliers, define and measure performance, define entities or "things" that require information collection, identify activities, map organization, map resources, prioritize processes.
Creation: Understand process structure, understand process flow, identify value-adding activities, identify benchmark performance, brainstorm information technology possibilities, estimate opportunity, envision the ideal process, integrate visions, define components of visions.
Technical design: Examine process linkages, model entity relationships, develop performance metrics, consolidate interfaces, consolidate information, design technical systems, modularize, plan implementation.
Social design: Empower customer contact personnel, identify job clusters, define jobs/teams, define skills/staffing, specify organizational structures, design transitional organization, design incentives, manage change, plan implementation.
Implementation: Develop test and rollout plans, construct system, monitor progress, evaluate personnel, train staff, pilot new process, refine, implement full rollout, ensure continuous improvement.
Types of tools and techniques supporting these activities include planning, team building, goal seeking, motivation, change management, project management, customer modeling, performance measurement, cycle time analysis, cost analysis, process modeling, process value analysis, value chain analysis, workflow analysis, organizational mapping, activity-based cost accounting, benchmarking, brainstorming, visioning, documentation, information engineering, employee empowerment, skill matrices, self-managed work teams, case managers, organizational restructuring, incentive systems, and just-in-time training.
Adapted from Grover and Malhotra (41).
Capability Maturity Model

The capability maturity model (CMM) is concerned with the systems development process. As opposed to BPR, which often fits mature organizations, the CMM provides guidelines about how to manage processes depending on the maturity level of the organization. The CMM is not a competitive model, because it encompasses approaches such as TQM and ISO9000, based on the level of maturity. The CMM's underlying assumption is that certain process models can perform better for certain types under certain environments. The CMM begins by defining five capability levels, and at each level the CMM suggests (a) common features to assess the current situation of an organization and (b) key process areas to follow in order to evolve to higher levels. The key process areas (43) are presented in Table 8. At level 1, the initial level, the software process is ad hoc, even chaotic. Hence, few processes are defined, due to unpredictable cost, schedule, and quality performance; success depends on having an exceptional manager or an effective software team. At level 2, the repeatable level, project management processes are established, but they are basic and variable because planning and managing new projects are based on experience with prior projects. At level 3, the defined level, the software process for both management and engineering activities is documented, standardized, and integrated. Reliable cost and schedule estimates are achieved, but quality measures are still qualitative. At level 4, the managed level, both product and process quality are measured quantitatively, and statistical quality control is often used to manage quality. At level 5, the optimizing level, the entire organization is focused on continuous process improvement; process improvement is essential to reduce cost and enhance quality. The CMM's assumption that a lower level of maturity should be experienced before proceeding to a higher level is somewhat contradictory, because it might imply that an organization should be at level 4 or 5 for continuous process improvement. However, many successful stories of software companies (e.g., in Refs. 44 and 45) that applied key CMM practices support the CMM's validity. In addition, in the CMM version for systems engineering (46), SEI provides classified domains (project, engineering, and organization), depending on the responsibility of key practice areas. For example, it assigns allocation of requirements to the engineering domain, assigns quality assurance to the project domain, and assigns the provision of ongoing knowledge and skills to the organization domain. This kind of specification of the role of each domain at each maturity level allows the systems engineering domain to advance to the next higher level.
OBJECT-ORIENTED PARADIGM

The object-oriented paradigm is not new if we recall the history of programming languages. Simula in the 1960s was the first language that implemented the object-oriented concept. However, the systems engineering community became interested in the object-oriented approach only recently. One reason was the popularity of other approaches, such as the traditional waterfall methodology, a structured methodology that was successful for a good deal of time, so that systems engineering did not need object-oriented methodology. Data-oriented thinking might have prevented object thinking, and it made us consider object-oriented thinking difficult. It is only recently that attention has been given to the new approach, since systems have become larger and more complex with increasing difficulty of problem solving. Object-oriented systems methodology has some useful characteristics that can be utilized in systems development. Eight key characteristics (25) that help in systems development are (1) common methods of organization, (2) abstraction, (3) encapsulation, (4) inheritance, (5) polymorphism, (6) message communication, (7) association, and (8) reuse.
Table 8. The Key Process Areas in the Capability Maturity Model
Level 1 (Initial): No specific areas.
Level 2 (Repeatable): Software configuration management; software quality assurance; software subcontract management; software project tracking and oversight; software project planning; requirements management.
Level 3 (Defined): Peer reviews; intergroup coordination; software product engineering; integrated software engineering; training program; organization process definition; organization process focus.
Level 4 (Managed): Software quality management; quantitative process management.
Level 5 (Optimizing): Process change management; technology change management; defect prevention.
Adapted from Paulk et al. (43).
Common methods imply that information systems can be developed in similar ways. Concepts such as objects, attributes, classes, and messages used in object-oriented methodology can be applied in the design of a system. For example, anything can be an object in the object-oriented paradigm; hence both product-oriented and process-oriented design can be carried out by simply varying the object focus. Abstraction, encapsulation, and information hiding help the system developer concentrate more on current issues by removing unnecessary details. Inheritance has two implications: generalization or specialization. When classes have some common attributes, we can generalize them to make a superclass, whereas specialization is possible when we need subclasses. Polymorphism, which means "many forms," is an advantage that can only be implemented in the object-oriented approach. Implementing polymorphism is simple in the object-oriented approach because of the class hierarchy: an instruction such as displaying the defect rate of product A and product B, which are not in the same subclass, is conducted by searching for the procedure (code) from the lowest level to higher levels (superclasses). Message communication deals with communication between objects. For example, a customer's request to display an order number n is a message to the order object, and the order object fulfills this request by telling itself (calling its member function) to display order number n. Association is the procedure of setting relationships between objects after the identification of all objects. Well-designed associations result in well-designed processes and enhance reusability. Reusability in the object-oriented approach is more advanced than the module or subroutine reuse of structured methodologies, both in terms of reliability and in its contribution to rapid development. Under the assumption of correct implementation of the other characteristics, object-oriented approaches ensure high-quality systems at less cost. Applying object concepts is a challenging job for a system developer, but once they are established, systems become faster, better, and cheaper, and systems development becomes a routine task.
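The class hierarchy, message, and polymorphism ideas above can be shown in a few lines of code. The sketch below is a hypothetical illustration (the product and order classes are invented for this purpose, not taken from any methodology): both product subclasses respond to the same defect-rate message, and an order object fulfills a display request by calling its own member function.

```python
# Hypothetical classes illustrating inheritance, encapsulation, polymorphism,
# and message communication.

class Product:                          # superclass obtained by generalization
    def __init__(self, name, defects, units):
        self.name = name
        self._defects = defects         # encapsulated state (information hiding)
        self._units = units

    def defect_rate(self):              # common interface for all subclasses
        return self._defects / self._units

class HardwareProduct(Product):         # specialization: inherits defect_rate unchanged
    pass

class SoftwareProduct(Product):
    def defect_rate(self):              # override found before the superclass version
        return self._defects / (self._units / 1000.0)   # defects per KLOC

class Order:
    def __init__(self, number):
        self.number = number

    def display(self):                  # the object fulfills a message by telling itself
        print(f"Order {self.number}")

# Polymorphism: the same message produces subclass-specific behavior.
for p in (HardwareProduct("A", defects=3, units=1500),
          SoftwareProduct("B", defects=3, units=1500)):
    print(p.name, p.defect_rate())

Order(42).display()                     # message communication: "display order number n"
```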
Object-oriented methodology is still in a relatively immature stage. Most of the methodologies focus on systems analysis, up to the logical design phase of the whole life cycle, adding information engineering methodology at the systems design and implementation phases. Coad and Yourdon's object-oriented analysis (OOA) methodology (47) consists of a five-step procedure: (1) define objects and classes, (2) define structures, (3) define subject areas, (4) define attributes, and (5) define services. The major tools utilized here are the class and object diagram, the object-state diagram, and the service chart. A class and object diagram consists of five layers: (1) the class and object layer, which shows classes and objects, (2) the structures layer, which connects classes and objects with arcs to show generalization-specialization (inheritance) and whole-part relationships, (3) the subjects layer, which groups closely related classes by adding borders, (4) the attributes layer, which adds a list of attributes, and (5) the service layer, which adds a list of services inside the class and object boxes and provides arcs showing message connections between boxes. An object-state diagram is a simple diagram that shows all the possible states of an object and the allowed transitions between states. A service chart is a diagram that depicts the detailed logic within an individual service, including object-state changes that trigger or result from the service. While the above discussion is pertinent to a centralized object environment, different aspects become important in a distributed and heterogeneous environment (48). It is often the case that a company has (a) legacy systems that are costly to replace and (b) different platforms that are used depending on the task. To provide a standard for distributed objects, the Object Management Group (OMG) (49) developed the common object request broker architecture (CORBA). Basically, CORBA follows the client-server architecture, facilitating communication between clients and objects. The object request broker (ORB) is a software product that intercepts messages from an object, translates them for different languages or different machines in heterogeneous distributed environments, and routes them to the correct object. Since current business situations require at least some distributed process implementation, an effort like CORBA to integrate heterogeneous systems using the object-oriented approach will support the technical side of business process reengineering. An object-oriented approach is often considered to be parallel to the BPR movement. In other words, just as BPR needs the project manager to change to process thinking, the object-oriented approach needs the system developer to change to object thinking. As of now, hundreds of object-oriented methods and tools exist, and there is currently a trend (50) to combine the advantages of BPR and the object-oriented method. However, application of this paradigm to organizations is not an easy matter. Perhaps BPR's track record of frequent failures may be the trigger to adopt object-oriented technology in a large way.
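CORBA itself involves IDL interfaces and vendor ORB products, which are beyond a short example, so the toy broker below only illustrates the routing idea in plain Python; it is an invented sketch of the pattern, not the CORBA API. A client names an object and an operation, and the broker locates the implementation and forwards the message.

```python
# A toy request broker illustrating the ORB routing idea; this sketch is not
# the CORBA interfaces or any vendor product.

class OrderService:
    def status(self, order_id):
        return f"order {order_id}: shipped"

class RequestBroker:
    def __init__(self):
        self._registry = {}              # object name -> implementation object

    def register(self, name, obj):
        self._registry[name] = obj

    def invoke(self, name, operation, *args):
        """Locate the named object and forward the request to it."""
        target = self._registry[name]    # a real ORB would also marshal arguments
        return getattr(target, operation)(*args)

broker = RequestBroker()
broker.register("OrderService", OrderService())
# The client knows only the object name and operation, not its location or language.
print(broker.invoke("OrderService", "status", 1001))
```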
SYSTEMS ENGINEERING AND THE INTRANET

The Internet has proliferated with the advent of World Wide Web (WWW) technology. It is surprising that a system like the Internet can sustain its existence without any organized or intended controls. However, it is clear that the Internet
is working as one complex system, leaving all subsystems to interact with each other. The Internet is a collection of networks not under any formal control mechanism, but the Internet as a whole contains emergent properties that happened to make the whole stable. Aside from the physical structure of the Internet, which can be considered the medium, people using the Internet are linked by means of information. The popularity of this medium has been achieved primarily because of its user-friendly interfaces, like WWW, and by the simplicity of open and standard protocols like Transmission Control Protocol/Internet Protocol (TCP/IP). An Intranet is an information system within an organization based on Internet technologies such as WWW, HyperText Markup Language (HTML), TCP/IP, and HyperText Transfer Protocol (HTTP). A more practical description of the Intranet is given by Hinrichs (51): "the Intranet is a technology that permits the organization to define itself as a whole entity, a group, a family, where everyone knows their roles, and everyone is working on the improvement and health of the organization." Because "systems engineering considers both the business and the technical needs of all customers with the goal of providing a quality product that meets the user needs" (52), engineering the Intranet is definitely a subject of systems engineering, and we believe that this will be a future assignment for the information systems engineer. Before we define the role of the systems engineer in planning, developing, and deploying an Intranet, it would be better to understand the advantages and caveats of using an Intranet. As is often the case in BPR projects, the bandwagon effect has resulted in failures due to rushed decisions without careful analysis. Because Intranets are still growing, it is difficult to identify other dangers, but it is clear that this will be an essential tool that future organizations eventually must have, whatever variation is added. Traditionally, Intranets have been used to reduce the costs of printing materials such as company newsletters, employee benefits handbooks, and training materials. The other major advantage would be enhanced speed of communication. Web-server-based communication has the ability to provide up-to-date information, and even basic e-mail communication can ensure at most a one-day lag of communication.
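As a minimal illustration of the Web-server-based communication point, and assuming only Python's standard http.server module (the directory name and port are hypothetical, and access control is omitted), a directory of HTML handbooks can be published to the internal network in a few lines:

```python
# Minimal sketch: publish a directory of HTML handbooks on the internal network.
# Requires Python 3.7+ for the 'directory' argument; names here are illustrative.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = functools.partial(SimpleHTTPRequestHandler, directory="employee_handbook")
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```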
To address the use of the Intranet, we can think of the Intranet as (1) a decision-making tool via off-the-shelf information, (2) a learning organization tool with faster analysis of business processes, opportunities, and goals, (3) a complete communication tool that integrates all the information into one place on the Web, (4) a collaboration tool in the form of forums, even with the use of video conferencing, electronic whiteboards, and single shared documents, (5) an expert's tool for storing and sharing tips, tricks, pitfalls, and analysis about any topic in a threaded database, (6) a single invention tool with a common Web interface, (7) a process identification and process improvement tool for understanding cross-functional information in a single place, (8) a partnering tool via the exchange of intranet information between organizations, which is now termed an Extranet, (9) a customer tool by opening information to the Internet, (10) an International Organization for Standardization (ISO) tool via a singular repository that enables many of the ISO requirements, (11) a target marketing tool with the emphasis shifting from mass market to market segment by storing two-way information flow, and (12) a human resource tool by letting employees
learn new skills and access various human resource materials online (48).

The Intranet is poised to become a candidate for making BPR initiatives successful in terms of process innovation and improvement. Intranets could help implement the BPR philosophy in an autonomous way, encouraged by the ease of communication, especially in terms of process improvement. The Intranet gives employees support to think about the process in a natural way, so that resistance can be diminished. As for the modification of BPR, James (53) pointed out the problems of BPR that can be solved by Intranets to further ensure the success of BPR. Three reasons why reengineering has failed, and the reasons why Intranets can solve these problems, are presented in Table 9.
Table 9. Intranets and Reengineering (reasons for reengineering failure, and how Intranets can help)
- Top-down efforts gather little support from employees and middle managers, who tend to equate reengineering with layoffs. Intranets have typically been implemented as a result of grass-roots, bottom-up efforts.
- Massive personnel retraining is usually necessary. Intranets are an excellent vehicle for employees to use to share information on new processes and procedures, because of easy-to-use browser technology.
- Projects require the participation of multiple departments, many of which have diverse and incompatible computer systems. Web technology supports many different and diverse platforms.
Adapted from James (53).
Now comes the question of the role of the systems engineer when building an Intranet. Clearly, the Intranet is also a system that we already have experience in building. The difference will be the necessity for more detailed requirements specification and planning. An Intranet is a highly complex system because of the many interactions between users and information. One role for the systems engineer would be that of physical designer for the infrastructure of the Intranet. It is true that employees can develop content for the Intranet without difficulty, but determining bandwidth and leveraging off the existing system are areas for the systems engineer. From the strategic point of view, decisions about the business plan, the cost and time of development, and the objective of the Intranet should be made before enrollment. One thing that should be considered is that the Intranet will change continually as technology improves. Building the Intranet is not a one-time development but an ongoing development process. Hence, a highly adaptive system, which has the ability to expand when necessary, should be the framework of the Intranet, and this framework could be obtained from a prototyping approach. The most important and difficult job would be the prediction of the sociotechnical structure throughout the life cycle of the Intranet. However, we may not need to worry, because the Internet has been successful in spite of the complex nature of the system. In other words, it is natural that user-oriented development be successful. All the content of the Intranet cannot be controlled in an organized fashion as its size increases. What we have to do is create an environment in which users and developers assist each other and make ceaseless improvements to the Intranet. The cost of having its own Intranet is relatively low if an organization already has Internet facilities, although the benefit is exceptional. According to IDC/Link, a subsidiary of International Data Corporation (IDC), by the year 2000 the annual shipments of Internet servers will approximate 450,000, while shipments of Intranet servers will approximate 4,500,000. The emerging interest regarding the virtual organization is not so surprising once we understand the phenomenon that coordination cost declines notably as accessible information increases because of the Intranet. The trend toward process management or systems management by narrowing the communication gap is expected to continue.

BIBLIOGRAPHY

1. R. L. Ackoff, Toward a system of systems concepts, Manage. Sci., 17: 661-671, 1971.
2. J. O'Connor, What is a system? [Online]. Available: http://www.radix.net/~crbnblu/assoc/oconnor/chapt1.htm
3. G. Bellinger, Systemic University on the Net (SUN) [Online]. Available: http://www.outsights.com/systems
4. L. von Bertalanffy, General System Theory: Foundations, Development, Applications, New York: Braziller, 1969.
5. N. Wiener, Cybernetics: or Control and Communication in the Animal and the Machine, New York: Wiley, 1948.
6. J. W. Forrester, Industrial Dynamics, Boston: MIT Press, 1961.
7. F. Heylighen et al., What are cybernetics and systems science? [Online]. Available: http://pespmcl.vub.ac.be/SYSTHEOR.html
8. W. P. Chase, Management of System Engineering, New York: Wiley, 1982.
9. A. P. Sage, Systems Engineering, New York: Wiley, 1992.
10. A. P. Sage, Systems Management: For Information Technology and Software Engineering, New York: Wiley, 1995.
11. IEEE P1220, IEEE Standard for Systems Engineering, Preliminary, 1993.
12. W. W. Royce, Managing the development of large software systems: concepts and techniques, Proc. WESCON, 1970, pp. 1-70.
13. B. W. Boehm, Software engineering, IEEE Trans. Comput., 25: 1126-1241, 1976.
14. U.S. Dept. of Defense, Defense System Software Development, DoD-STD-2167A, June 1985.
15. B. W. Boehm, A spiral model of software development and enhancement, IEEE Comput., 21 (5): 61-72, 1988.
16. M. A. Mahmood, Systems development methods: a comparative investigation, MIS Quart., 11 (3): 293-311, 1987.
17. J. C. Wetherbe and N. P. Vitalari, Systems Analysis and Design Best Practices, 4th ed., St. Paul: West, 1994.
18. J. L. Whitten and L. D. Bentley, Systems Analysis and Design Methods, 4th ed., Boston: Irwin/McGraw-Hill, 1998.
19. S. D. Dewitz, Systems Analysis and Design and the Transition to Objects, New York: McGraw-Hill, 1996.
20. T. DeMarco, Structured Analysis and System Specification, New York: Yourdon Press, 1978.
21. E. Yourdon, Modern Structured Analysis, Englewood Cliffs, NJ: Yourdon Press, 1989.
22. J. Martin, Information Engineering, Vols. 1-3, Englewood Cliffs, NJ: Prentice-Hall, 1989 (Vol. 1), 1990 (Vols. 2 and 3).
23. J. Wood and D. Silver, Joint Application Design, New York: Wiley, 1989.
24. R. J. Norman, Object-Oriented Systems Analysis and Design, Englewood Cliffs, NJ: Prentice-Hall, 1996.
25. J. Martin, Timebox methodology, Syst. Builder, April/May: 22-25, 1990.
26. R. D. Hackathorn and J. Karimi, A framework for comparing information engineering methods, MIS Quart., 12 (2): 203-220, 1988.
27. U.S. Dept. of Defense, Configuration management practices for systems, equipment, munitions, and computer programs (MIL-STD-483), Washington, DC, 1970.
28. ANSI/IEEE Std 828-1990, IEEE standard for software configuration management plans, New York, 1990.
29. R. W. Wolverton, The cost of developing large-scale software, IEEE Trans. Comput., C-23: 615-636, 1974.
30. E. A. Nelson, Management Handbook for Estimation of Computer Programming Costs, AD-A648750, Santa Monica: Systems Development Corporation, 1966.
31. J. W. Bailey and V. R. Basili, A meta-model for software development resource expenditures, Proc. 5th Int. Conf. Softw. Eng., 1981, pp. 107-116.
32. B. W. Boehm, Software Engineering Economics, Englewood Cliffs, NJ: Prentice-Hall, 1981.
33. A. J. Albrecht, Measuring application development productivity, Proc. IBM Appl. Dev. Symp., Monterey, CA, 1979, pp. 83-92.
34. A. P. Sage and J. D. Palmer, Software Systems Engineering, New York: Wiley, 1990.
35. S. H. Kan, Metrics and Models in Software Quality Engineering, Reading, MA: Addison-Wesley, 1995.
36. K. Ishikawa, Guide to Quality Control, New York: Quality Resource, 1989.
37. Integrated Quality Dynamics, TQM: Definition of total quality management [Online]. Available: http://www.iqd.com/tqmdefn.htm
38. T. H. Davenport, Process Innovation: Reengineering Work Through Information Technology, Boston: Harvard Business School Press, 1993.
39. M. Hammer and J. Champy, Reengineering the Corporation: A Manifesto for Business Revolution, London: HarperCollins, 1993.
40. T. H. Davenport and J. E. Short, The new industrial engineering: Information technology and business process redesign, Sloan Manage. Rev., Summer: 11-27, 1990.
41. V. Grover and M. K. Malhotra, Business process reengineering: A tutorial on the concept, evaluation method, technology and application, J. Oper. Manage., 15: 193-213, 1997.
42. J. R. Caron, S. L. Jarvenpaa, and D. B. Stoddard, Business reengineering at CIGNA corporation: experiences and lessons learned from the first five years, MIS Quart., 18 (3): 233-250, 1994.
43. M. C. Paulk et al., Capability maturity model for software, Version 1.1, CMU/SEI-93-TR-24, Software Engineering Institute, 1993.
44. S. Girish, Continuous process improvement: why wait till level 5?, Proc. 29th Annu. Hawaii Int. Conf. Syst. Sci., 1996, pp. 681-692.
45. C. D. Buchman, Software process improvement at Allied Signal Aerospace, Proc. 29th Annu. Hawaii Int. Conf. Syst. Sci., 1996, pp. 673-680.
46. R. Bate et al., A systems engineering capability maturity model, Version 1.1, CMU/SEI-95-MM-003, Software Engineering Institute, 1995.
47. P. Coad and E. Yourdon, Object-Oriented Analysis, 2nd ed., Englewood Cliffs, NJ: Prentice-Hall, 1991.
48. S. Vinoski, CORBA: Integrating diverse applications within distributed heterogeneous environments, IEEE Commun. Mag., 14 (2): 46-55, 1997.
49. The Object Management Group (OMG) Homepage [Online]. Available: http://www.omg.org
50. I. Jacobson, M. Ericsson, and A. Jacobson, The Object Advantage: Business Process Reengineering with Object Technology, Wokingham, UK: Addison-Wesley, 1995.
51. R. J. Hinrichs, Intranets: What's The Bottom Line?, Mountain View: Sun Microsystems Press, 1997.
52. INCOSE (The International Council on Systems Engineering) Homepage [Online]. Available: http://www.incose.org/
53. G. James, Intranets rescue reengineering, Datamation, 42 (18): 38-45, 1996.
H. RAGHAV RAO
S. PARK
State University of New York
SYSTEMS, KNOWLEDGE-BASED. See EXPERT SYSTEMS.
SYSTEMS, LINEAR. See MULTIVARIABLE SYSTEMS.
SYSTEMS, MULTIMEDIA. See AUTHORING SYSTEMS.
SYSTEMS, MULTIVARIABLE. See MULTIVARIABLE SYSTEMS.
SYSTEMS OF POLYNOMIAL EQUATIONS. See POLYNOMIALS.
Wiley Encyclopedia of Electrical and Electronics Engineering
Systems Reengineering, Standard Article
Andrew P. Sage, George Mason University, Fairfax, VA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W7116. Article online posting date: December 27, 1999.
Abstract. The sections in this article are: Product Reengineering; Process Reengineering; Reengineering at the Level of Systems Management; Perspectives on Reengineering.
SYSTEMS REENGINEERING

Industrial, organizational, and enterprise responsiveness to continuing challenges is very clearly a critical need today. One of these challenges is change of all sorts. Responsiveness is accomplished by continually providing products and services of demonstrable value to customers. To do this requires efficiently and effectively employing leadership and empowering employees such that systems engineering and management strategies, organizational processes, human resources, and appropriate technologies are each brought to bear to produce high-quality, trustworthy, and sustainable products and services. There is an ongoing need for continual revitalization in the way organizations and enterprises do things, so that they are always done better. This would be true even if the external environment were static and unchanging. However, in a period of high-velocity change, continual change and associated change in processes and products must be considered a fundamental rule of the game for progress.

Change has become a very popular word today in management and in technology. There are a variety of change models and change theories. Some seek to change to survive; others seek to change to retain competitive advantage. This article examines change in the form of reengineering. A variety of names are given to the change-related terms now in use: reengineering, restructuring, downsizing, rightsizing, redesign, enterprise transformation, and many others. Reengineering is probably the most often used word, and systems reengineering is the title chosen here. There are many approaches to reengineering, and some of these are briefly examined here. Expansion of these discussions may be found in Refs. 1 and 2. Figure 1 represents a generic view of reengineering.

Reengineering can be discussed from several perspectives: from the structural, functional, and purposeful aspects of reengineering, or at the level of systems management, process, or product. Reengineering issues may be examined at any, or all, of the three fundamental systems engineering life cycles: research, development, test, and evaluation (RDT&E); systems acquisition, procurement, or production; and systems planning and marketing; all discussed in Refs. 1 and 2, on which this article is based. Within each of these life cycles, reengineering can be considered at any or all of the three generic phases of definition, development, or deployment. The level of systems management examines the enterprise as a whole and considers all organizational processes within the organization for improvement through change. At the level of process reengineering, only a single process is redesigned, with no fundamental or radical changes in the structure or purpose of the organization as a whole. When changes occur, they may be radical and revolutionary or incremental and evolutionary at the level of systems management, process, product, or any combination. The scale of improvement efforts may vary from incremental and continuous improvement, such as is generally advocated by quality management efforts, to radical change efforts that affect organizational strategy and scope, and systems management itself.
One fundamental notion of reengineering, however, is that it must be directed top down if it is to achieve significant and long-lasting effects. Thus, there should be a strong, purposeful, systems management orientation to reengineering, even though it may have major implications for such lower-level concerns as the structural facets of a particular product. A major objective of reengineering is to enhance the performance of a legacy system or a legacy product or service. Thus, reengineering may support a variety of other desirable objectives, such as better integration of a product with other products and improved maintainability. This article is organized as follows. First, some definitions of reengineering are provided. Then, some of the many perspectives that have been taken relative to reengineering are viewed at the levels of
product
process or product line
systems management
PRODUCT REENGINEERING
The term reengineering could be used to mean a reworking or retrofit of an existing product. This could be interpreted as maintenance or refurbishment. Alternatively, reengineering could be interpreted as reverse engineering, in which the characteristics of an already engineered product are identified so that the original product can subsequently be modified and reused, or so that a new product with the same purpose and functions may be obtained through a forward engineering process. Generally, the term product can also refer to a service, and we can reengineer at the level of products and/or services. Inherent in these notions are two major facets of reengineering:
1. Reengineering improves the product or system delivered to the user, in the form of enhanced reliability or maintainability or support for an evolving user need.
2. Reengineering increases understanding of the system or product itself.
This interpretation of reengineering is almost totally product focused. Product reengineering is the examination, study, capture, and modification of the internal mechanisms or function of an existing product to reconstitute it in a new form that has new functional and nonfunctional features, often to take advantage of newly emerged technologies, but without major change in the inherent purpose of the product. This definition indicates that product reengineering is basically structural reengineering with, at most, minor changes in the purpose and function of the product that is reengineered. The reengineered product could be integrated with other products that have functions rather different from those in the initial deployment. Thus, reengineered products could be used, together with this augmentation, to provide new functions and serve new purposes. A number of synonyms for product reengineering easily come to mind: among these are renewal, refurbishing, rework, repair, maintenance, modernization, reuse, redevelopment, and retrofit.
Figure 1. Generic implementation of reengineering at the level of product, process, or systems management.
A specific example of a product reengineering effort is taking a legacy system written in COBOL or FORTRAN, reverse engineering it to determine the system definition, and then reengineering it in C++ or some other high-level language. Depending on whether any modified user requirements are to be incorporated into the reengineered product, it would be forward engineered after the initial development (technical) system specifications, or the user requirements and user specifications, had been determined and updated. This reverse engineering concept (3), in which salient aspects of user requirements or technological specifications are recovered by examining characteristics of the product, predates the term product reengineering and occurs before the forward engineering that comprises the latter portion of product reengineering. Figure 2 illustrates product reengineering conceptually. An IEEE software standards reference (4) states that "reengineering is a complete process that encompasses an analysis of existing applications, restructuring, reverse, and forward engineering." The IEEE Standard for Software Maintenance (5) suggests that reengineering is a subset of software engineering composed of reverse engineering and forward engineering. We do not disagree at all with this definition, but prefer to call it product reengineering for the reasons just stated. There are two other very important forms of reengineering, and it is necessary to consider reengineering at the levels of processes and systems management to take full advantage of the major opportunities offered by generic reengineering concepts. Thus, the qualifier product appears appropriate in this context. Reengineering at the product level has received much attention in recent times, especially in the information technology and software engineering areas. It is not a subject that is truly independent of reengineering at the level of either systems management or a single life cycle process. It is also related to notions of systems integration.
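As a purely illustrative sketch of the redocumentation side of such a reverse engineering step, a script might recover a rough call map from legacy source before forward engineering begins; the simplified COBOL patterns and the sample fragment below are invented for illustration, and real redocumentation tools are far more capable:

```python
import re

# Hypothetical, simplified patterns for COBOL paragraph names and PERFORM calls.
PARAGRAPH = re.compile(r"^\s{0,3}([A-Z0-9-]+)\.\s*$")
PERFORM = re.compile(r"\bPERFORM\s+([A-Z0-9-]+)", re.IGNORECASE)

def recover_call_map(source_text):
    """Build a paragraph -> called-paragraphs map from legacy COBOL-like text.
    This is a redocumentation aid only; it recovers structure, not behavior."""
    call_map = {}
    current = None
    for line in source_text.splitlines():
        match = PARAGRAPH.match(line)
        if match:
            current = match.group(1)
            call_map.setdefault(current, set())
            continue
        if current:
            for callee in PERFORM.findall(line):
                call_map[current].add(callee.upper())
    return call_map

# Example use on a small, invented fragment of legacy code.
legacy = """\
MAIN-ROUTINE.
    PERFORM READ-MASTER
    PERFORM UPDATE-TOTALS.
READ-MASTER.
    PERFORM CHECK-EOF.
"""
for paragraph, callees in recover_call_map(legacy).items():
    print(paragraph, "->", sorted(callees))
```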
Product reengineering is generally needed whenever development of an entirely new product is too expensive, when there is no suitable and available commercial product, or when the current system does not fulfill some of the functional requirements or such nonfunctional requirements as trustworthiness. Much product reengineering is closely associated with reverse engineering to recover either design specifications or user requirements, followed by refining these requirements or specifications and forward engineering to create an improved product. The term reverse engineering, rather than reengineering, was used in one of the early seminal papers in this area (6) concerned with software product reengineering. In this work and in a related chapter on the subject (7), the following activities represent both the taxonomy of and the phases for what is denoted here as product reengineering:
1. Forward engineering is the original process of defining, developing, and deploying a product, or realizing a system concept as a product.
2. Reverse engineering, sometimes called inverse engineering, is the process through which a given system or product is examined to identify or specify the definition of the product at the level either of technological design specifications or of system or user requirements.
a. Redocumentation is a subset of reverse engineering in which a representation of the subject system or product is re-created to generate functional explanations of original system behavior and, perhaps more important, to aid the reverse engineering team in better understanding the system at a functional and structural level. There are a number of redocumentation tools available for software, and some of these are cited in these works. One of the major purposes of redocumentation is to produce new documentation for an existing product whose existing documentation
Figure 2. Basic notions of product reengineering as a sequence of forward, reverse, forward engineering.
is faulty and perhaps virtually absent.
b. Design recovery is a subset of reverse engineering in which the redocumentation knowledge is combined with other efforts, often involving the personal experiences and knowledge of others about the system, that lead to functional abstractions and enhanced product or system understanding at the level of function, structure, and even purpose. We prefer to call this deployment recovery, development recovery (which includes design recovery), and definition recovery, depending on the phase in the reverse engineering life cycle at which the recovery knowledge is obtained.
3. Restructuring involves transforming the information about the original system structure into another representational form. This generally preserves the initial functions of the original system or slightly modifies them purposefully in accord with changes in user requirements for the reengineered system. The terms deployment restructuring, development restructuring, and definition restructuring are appropriate disaggregations of the restructuring notion.
4. As defined here, reengineering is equivalent to redevelopment engineering, renovation engineering, and reclamation engineering. Thus, it is closely related to maintenance and reuse. Product reengineering is the re-creation of the original system in a new form that has improved structure but generally not much altered purpose and function. The nonfunctional aspects of the new system may differ considerably from those of the original system, especially with respect to quality and reliability.
Figure 2, which illustrates product reengineering, involves these activities.
We can recast this by considering a single phase for definition, development, and deployment that is exercised three times. Then we see that there is a need for recovery, redocumentation, and restructuring as a result of the reverse engineering product obtained at each of the three phases. This leads us to suggest Fig. 3 as an alternative way to represent Fig. 2 and as our interpretation of the representations generally used for product reengineering. Many discussions, such as those just referenced, use a threephase generic life cycle of requirements, design, and implementation. Implementation would generally contain some of the detailed design and production efforts of our development phase and potentially less of the maintenance efforts that follow initial fielding of the system. The restructuring effort, based on recovery and redocumentation knowledge obtained in reverse engineering, is used to effect deployment restructuring, development restructuring, and definition restructuring. To these restructured products, which might be considered reusable products, we augment the knowledge and results obtained by detailed consideration of potentially augmented requirements. These augmented requirements are translated, together with the results of the restructuring efforts, into the outputs of the reengineering effort at the various phases to result ultimately in the reengineered product. There are a number of objectives in, potential uses for, and characteristics of product reengineering, which are neither mutually exclusive nor collectively exhaustive and include the following (8, 9): 1. Reengineering may help reduce an organization’s risk of product evolution through what effectively amounts to reuse of proven subproducts. 2. Reengineering may help an organization recoup its product development expenses by constructing new products based on existing products.
Figure 3. Expanded notion of product reengineering.
3. Reengineering may make products easier to modify to accommodate evolving customer needs. 4. Reengineering may make it possible to move the product, especially a software product, to a less expensive operational environment, such as from COBOL to an object-oriented language or from a mainframe to a server. 5. Reengineering may be a catalyst for automating and improving product maintenance, especially by obtaining smaller subsystems with better defined interfaces. 6. Reengineering a product may result in a product with much greater reliability. 7. Reengineering may be a catalyst for applying new technologies, such as CASE tools and artificial intelligence. 8. Reengineering may prepare a reengineered product for functional enhancement. 9. Reengineering is big business, especially considering the major investment in legacy systems that need to be updated, maintained, and improved in functions. In short, reengineering provides a mechanism that enhances an understanding of systems so that this knowledge can be applied to produce new and better systems and products. Planning for product reengineering is essential, just as it is for other engineering efforts. Product engineering planning involves the standard systems engineering phases (1–10): 1. Definition Phase. a. Formulation of the reengineering issue to determine the need for and requirements to be satisfied by the reengineered product, and identification of potential alternative candidates for reengineering.
b. Analysis of the alternatives to enable determining costs and benefits of the various alternatives. c. Interpretation and selection of a preferred plan for reengineering. 2. Development Phase, in which the detailed specifications for implementing the reengineering plan are determined. 3. Deployment Phase, in which operational plans, including contracting, are set forth to enable reengineering the product in a cost-effective and trustworthy manner. Although reengineering has proven to be a successful way to improve product systems, it requires a demonstration that there will be benefits associated with the effort that justify the costs. Usually, it is necessary to compare the costs and benefits of reengineering a product with developing an entirely new one. Unfortunately, it is generally not easy to estimate the cost of a reengineering effort or the benefits that will follow from it. In some cases, this is easier for a reengineered product than for a totally new product because the existing legacy product often provides a baseline for these estimations. Sneed (11) suggests 16 relevant attributes for such an analysis; they are listed in Table 1. A number of authors have suggested specific life cycles that lead to a decision whether to reengineer a product and, in support of a positive decision, support a product reengineering life cycles (12, 13). The following are some of the accomplishments needed. 1. Initially, there exists a need to formulate, assess, and implement definitional issues associated with the technical and organizational environment. These issues include organizational needs relative to the area under consideration and the extent to which technology and the product or system under reengineering consideration supports these organizational needs.
2. Identification and evaluation of options for continued development and maintenance of the product under consideration, including options for potentially outsourcing this activity. 3. Formulation and evaluation of options for the composition of the reengineering team, including insourcing and outsourcing possibilities. 4. Identification and selection of a program of systematic measurements that determine the cost efficiency of the identified reengineering options and facilitate selection of a set of options. 5. The legacy systems in the organizations need to be examined to determine the extent to which these existing systems are currently functionally useless and in need of total replacement, functionally useful but with functional and nonfunctional defects that could be remedied through product reengineering, or fully appropriate for the current and intended future uses. 6. A suite of tools and methods to allow reengineering needs to be established. Method and tool analysis and integration are needed to provide for multiple views across various abstraction levels (procedural, pseudoprocedural, and nonprocedural) encountered in reengineering. 7. A process for product reengineering needs to be created on the basis of the results of these earlier steps that provides for reengineering complete products reengineering systems, and for incremental reengineering efforts that are phased in over time. 8. Major provisions for education and training must be made. This is more of a checklist of needed requirements for a reengineering process than a specification of a life cycle for the process itself. Through perusal of this checklist, we should be able to establish an appropriate process for reengineering in the form of Figs. 2 or 3. This article does not describe the large number of tools available to assist and support the product reengineering process. These vary considerably depending on the type of product reengineered. A number of tools for software reengineering are described in Muller (14) and the bibliography in this article. There are several needs that must be considered if a product reengineering process is to yield useful results: 1. A need to consider long-range organizational and technological issues in developing a product reengineering strategy. 2. A need to consider human, leadership, and cultural issues, and how these will be affected by the development and deployment of a reengineered product. 3. It must be possible to demonstrate that the reengineering process and product are cost effective and of high quality and that they support continued evolution of future capabilities. 4. Reengineered products must be considered within a larger framework that also considers the poten-
tial need for reengineering at the levels of systems management and organizational processes, because it is generally a mistake to assume that technological fixes can resolve organizational difficulties at these levels.
5. Product reengineering for improved postdeployment maintainability must consider maintainability at the process level rather than at the product level only, such as would result from rewriting source code statements. Use of model-based management systems or code generators should yield much greater productivity in this connection than rewriting source code.
6. Product reengineering must consider the need for reintegrating the reengineered product with legacy systems that have not been reengineered (a small adapter sketch follows this list).
7. Product reengineering should increase conformance to standards as a result of the reengineering process.
8. Product reengineering must consider legal issues associated with reverse engineering.
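As a hypothetical sketch of the reintegration concern in item 6, an adapter layer can let a reengineered module keep consuming data produced by an unchanged legacy system; the fixed-width record layout and field names below are invented for illustration:

```python
from dataclasses import dataclass

# The reengineered module works with a clean, typed structure.
@dataclass
class Customer:
    customer_id: str
    name: str
    balance_cents: int

def adapt_legacy_record(record: str) -> Customer:
    """Adapt a fixed-width legacy record (hypothetical layout: 10-character id,
    30-character name, 9-digit balance in cents) to the new structure."""
    return Customer(
        customer_id=record[0:10].strip(),
        name=record[10:40].strip(),
        balance_cents=int(record[40:49]),
    )

# The legacy system keeps emitting fixed-width records unchanged;
# the adapter isolates the reengineered code from that format.
legacy_record = "C000012345" + "Jane Q. Public".ljust(30) + "000012750"
print(adapt_legacy_record(legacy_record))
```

The design intent is that only the adapter, not the reengineered product, has to change if the legacy interface is eventually retired.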
The importance of most of these issues is relatively self-evident. Issues surrounding legality are in a state of flux in product reengineering, in much the same way as they are for benchmarking. They deserve special commentary here. It is clearly legal for an organization to reverse engineer a product that it owns. Also, little debate exists at this time as to whether it is legal to infer purpose from the analysis of a product without any attempt to examine its architectural structure or detailed components and then to recapture that purpose through a new development effort (the so-called black-box approach). Major questions, however, surround the legality of white-box reverse engineering, in which the detailed architectural structure and components of a system, including software, are examined to reverse engineer and reengineer it. The major difficulty stems from the fair-use provisions in copyright law and the fact that fair-use provisions differ from those associated with the use of trade secrets for illicit gain. Copyrighted material cannot be secret because copyright law requires open disclosure of the copyrighted material. Because software is copyrighted, not patented, trade secret restrictions do not apply. There is a pragmatic group that says white-box reengineering is legal, and a constructionist group that says it is illegal (15, 16). Those who suggest that it is illegal argue that obtaining trade secrets is not itself illegal, but that their subsequent use for illicit gain is. These issues will continue to be the subject of much debate. Many of the ethical issues in product reengineering are similar to those in benchmarking and other approaches to process reengineering. Some useful guidelines applicable primarily to product reengineering follow:
1. Reverse engineering procedures can be performed only on products that are the property of the reengineering organization or that have come into its possession legally.
2. No patent exists that would be infringed by a functional clone of a computer program, and no one can
be under a contractual obligation not to reverse engineer the original product. 3. A justifiable procedure for reverse engineering is to apply an input signal to the system or product being reengineered, observe the operation of the product in response to these inputs, and characterize the product functionally based on operation. Then an original product (computer code in the case of software) should be written to achieve the functional characteristics that have been observed. In the case of computer programs, it is permissible to disassemble programs available in object code form to understand the functional characteristics of the programs. Disassembly is used only to discover how the program operates, and it may be used only for this purpose. The functional operating characteristics of the disassembled computer program may be obtained, but original computer code should be prepared from these functional characteristics. This new code must serve this functional purpose. Reengineering is accompanied by a variety of risks associated with processes, people, tools, strategies, and the application area. These can be managed through risk management methodologies (1–17). These risks derive from a variety of factors:
Integration risk that a reengineered product cannot be satisfactorily integrated with legacy systems.
Maintenance improvement risk that the reengineered product will exacerbate, rather than ameliorate, maintenance difficulties.
Systems management risk that the reengineered product attempts to impose a technological fix on a situation whose major difficulties derive from problems at the level of systems management.
Process risk that a reengineered product that represents an improvement in a situation where the specific organizational process is to be used is defective.
Cost risk that major cost overruns are required to obtain a reengineered product that meets specifications.
Schedule risk that delays are encountered to obtain a deployed reengineered product that meets specifications.
Human acceptance risk of obtaining a reengineered product that is not suitable for human interaction or is unacceptable to the user organization for other reasons.
Application supportability risk that the reengineered product does not really support its intended application or purpose.
Tool and method availability risk of proceeding with product reengineering based on promises for a method or tool needed to complete the effort that does not become available or that is faulty.
Leadership, strategy, and culture risks arising from imposing a technological fix in the form of a reengineered product on an organizational environment that cannot adapt to the reengineered product.
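As a simple, hypothetical illustration of how such risks might be compared during reengineering planning, the risk names below are taken from the preceding list, while the probability and impact ratings and the exposure formula are invented for the sketch and are not drawn from the cited risk management methodologies:

```python
# Hypothetical probability (0-1) and impact (1-10) ratings for a few of the
# reengineering risks listed above; exposure = probability * impact.
risk_ratings = {
    "integration": (0.3, 8),
    "maintenance improvement": (0.2, 6),
    "systems management": (0.4, 9),
    "cost": (0.5, 7),
    "schedule": (0.5, 5),
    "human acceptance": (0.3, 7),
}

def rank_by_exposure(ratings):
    """Rank risks by a simple probability-times-impact exposure score."""
    exposures = {name: p * impact for name, (p, impact) in ratings.items()}
    return sorted(exposures.items(), key=lambda item: item[1], reverse=True)

for name, exposure in rank_by_exposure(risk_ratings):
    print(f"{name:25s} exposure = {exposure:.1f}")
```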
Clearly, these risks are not mutually exclusive, the risk attributes are not independent, and the listing is incomplete. For example, legal risks could be included. There is clearly a very close relationship between product reengineering and product reuse. The reengineering of legacy software and the reuse-based production of new software are closely related concepts. Often, the cost of developing software for one or a few applications is almost the same as the cost of developing domain reuse components and reengineering approaches to legacy software. Ahrens and Prywes (18) describe some of these relationships in an insightful work.
PROCESS REENGINEERING
Reengineering can also be instituted at the process and systems management levels. At the level of processes only, the effort would be almost totally internal. It would consist of modifications to existing life cycle processes to better accommodate new and emerging technologies or new customer requirements for a system. For example, an explicit risk management capability might be incorporated at several different phases of a given life cycle and accommodated by a revised management process. This could be implemented into the processes for RDT&E, acquisition, and systems planning and marketing. Basically, reengineering at the level of processes consists of determining or synthesizing an efficacious process for fielding a product based on knowledge of generic customer requirements and the objectives and critical capabilities of the systems engineering organization. Figure 4 illustrates some of the facets of process reengineering. Process reengineering may be instituted to obtain better products or a better organization. There are three ways for attempting process improvement:
new process development,
process redevelopment (process reengineering), or
continuous process improvement over time.
New process development is necessary because of a strategic-level change, such as when a previously outsourced development effort is insourced and there is no present process on which to base the new one. Benchmarking, discussed here, is one way of accomplishing this. Process redevelopment, or reengineering, should be implemented when the existing process is dysfunctional or when the organization wishes to keep abreast of changing technology or changing customer requirements. Continuous process improvement is less radical and can be carried out incrementally over time. Each of these involves leadership, strategy, and a team to accomplish the effort. In accordance with this discussion, and analogous to our definition of product reengineering, we offer the following definition. Process reengineering is the examination, study, capture, and modification of the internal mechanisms or functions of an existing process or systems engineering life cycle to reconstitute it in a new form with new functional and nonfunctional features, often to take advantage of newly emerged or desired organizational or technological capabilities, but without changing the inherent
Figure 4. Conceptual illustration of process reengineering.
purpose of the process itself.
Concurrent Engineering
Often, it is desired to produce and field a system relatively rapidly. The life cycle processes needed to achieve this could be accelerated if it were possible to accomplish phases of the relevant life cycles more or less concurrently. Concurrent engineering is a systems engineering approach to the integrated, coincident design and development of products, systems, and processes (19–22). The basic tasks in concurrent engineering are much the same as the basic tasks in systems engineering and management. The first step is that of determining customer requirements; these are then translated into a set of technical specifications. The next phase involves program planning to develop a product. Often, especially in concurrent engineering, this involves examining the current process and generally refining existing processes to deliver a quality product that meets both customer needs and cost and schedule requirements. In concurrent engineering, the very early and effective configuration of the systems life cycle process takes on special significance because the simultaneous development efforts need to be carefully coordinated and managed to forestall significant increases in cost and product delivery time or significant deterioration in product quality. The use of coordinated product design teams, improved design approaches, and stringent standards are among the aids that can enhance concurrent engineering efforts. Achieving a controlled environment in concurrent engineering and system integration requires the following:
1. Information integration and management. It must be possible to access information of all types easily and to share design information across the levels of concurrent design effectively and with control. Design information, dependencies, and alterations must be tracked effectively. The entire configuration of the concurrent life cycle process must be effectively monitored and managed.
2. Data and tool integration and management. It must be possible to integrate and manage tools and data so that there is interoperability of hardware and software across several layers of concurrency.
3. Environment and framework integration, or total systems engineering. It must be possible to ensure that the process is directed at evolution of a high-quality product and that this product is directed to meet the needs of the user in a trustworthy manner that is endorsed by the customer. This requires integrating the environment and framework, or the processes, for the systems engineering and management efforts.
There is a close relationship between concurrent engineering and systems integration; Andrews and Leventhal (23), Kronloff (24), and Schefstrom and van den Broek (25) provide several details concerning the method, tool, and environment integration needed to implement concurrent engineering and other systems engineering efforts. Compression of the life cycle phases that occurs in concurrent engineering poses more of a problem. The macroenhancement approaches to systems engineering, especially software systems engineering (26), are particularly useful in this regard. These include prototyping for system development, use of reusable systems and subsystems, and expert systems and automated program generation. Use of these can compress the overall time needed by parallel subsystem life cycles in a manner that is compatible with the engineering of a trustworthy product or service. Formally, very little is new in the subject of concurrent engineering. Development phases are simply accelerated through their concurrent implementation, at least on the surface. Concurrent engineering, however, places a much greater reliance on strategic planning and systems management and requires greater attention to processes to ensure that they are well deployed and to the resulting integration to ensure success.
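As a rough numerical sketch of the schedule compression that overlapping life cycle phases can provide, the phase durations and overlap fraction below are hypothetical, and the model ignores the coordination overhead that concurrent engineering must manage:

```python
# Hypothetical phase durations in months; each phase after the first is assumed
# to start before its predecessor finishes by `overlap` times its own length.
phases = [("definition", 6.0), ("development", 12.0), ("deployment", 4.0)]
overlap = 0.4  # fraction of each successor phase that overlaps its predecessor

def sequential_duration(phases):
    """Total duration when the phases run strictly one after another."""
    return sum(duration for _, duration in phases)

def concurrent_duration(phases, overlap):
    """Total duration with overlapped phases. The sketch assumes each early
    start still falls inside the predecessor phase."""
    total = phases[0][1]
    for _, duration in phases[1:]:
        total += (1.0 - overlap) * duration
    return total

print("sequential:", sequential_duration(phases), "months")   # 22.0
print("concurrent:", concurrent_duration(phases, overlap), "months")  # 15.6
```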
Carter and Baker (27) indicate that success in concurrent engineering depends very much on maintaining a proper balance between four important dimensions:
Organizational culture and leadership and the necessary roles for product development teams.
Communications infrastructure for empowered multidisciplinary teams.
Careful identification of all functional and nonfunctional customer requirements, including those product and process facets that affect customer satisfaction.
Integrated process and product development.
They identify approaches at the levels of task, project, program, and enterprise to enable realizing the proper environment for concurrent engineering across each of these four dimensions. Each of the four dimensions has a number of critical factors, and these may be approached at any or all of the levels suggested. A matrix is suggested to enable identifying the needed development areas to ensure definition, development, and deployment of an appropriate concurrent engineering process and process environment. The reference cited provides a wealth of pragmatic details for determining concurrent engineering process needs. It is also noted (27) that five major roadblocks often impede development of a concurrent engineering process environment:
1. The currently available tools are not adequate for the new environment.
2. There are a plethora of noninteroperable computers, networks, interfaces, operating systems, and software in the organization.
3. There is a need for appropriate data and information management across the organization.
4. Needed information is not communicated across horizontal levels in the organization.
5. When correct decisions are made, they are not made in a timely manner.
Approaches are suggested to remove each of these roadblocks to enable developing a concurrent engineering process. Presumably this needs to be implemented continuously, as appropriate for a given organization, rather than attempting a revolutionary change in organizational behavior. Several worthwhile suggestions for implementation are provided.
Integrated Product and Process Development
In many ways, integrated product development (IPD) is an extension of concurrent engineering. In a work that focuses on the importance of requirements management, Fiksel (28) states that concurrent engineering is more accurately called integrated product development. It is also closely related to the other reengineering approaches described here. The notion of integrated product development really cannot be carried out and orchestrated effectively without simultaneously considering integrated process de-
velopment. Thus, the concept is more commonly called integrated product and process development (IPPD). The following definition of integrated product and process development is appropriate. Integrated product and process development is a systems engineering and management philosophy and approach that uses functional and cross-functional work teams to produce an efficient, effective process to deploy a product or service that satisfies customer needs through concurrent application and integration of all necessary life cycle processes. Integrated product and process development involves systems management, leadership, systems engineering processes, the products of the process, concurrent engineering, and integration of all necessary functions and processes throughout the organization to create a cost-effective product or service that provides total quality and satisfies customer needs. Thus, IPPD is an organization’s product and process development strategy. It addresses the organizational need for continual enhancement of efficiency and effectiveness in all of its processes that lead to a product or service. There are many focal points for IPPD. Twelve are particularly important.
1. A customer satisfaction focus is needed as a key part of competitive strategy. 2. A focus on results and a product or service are needed to bring about total customer satisfaction. 3. A process focus is needed because high-quality competitive products that satisfy customers and result in organizational success come from efficient and effective processes. This necessarily requires process understanding. 4. A strategic planning and marketing focus is needed to ensure that product and process life cycles are fully integrated throughout all organizational functions, external suppliers, and customers. 5. A concurrent engineering focus is needed to ensure that all functions and structures associated with fulfilling customer requirements are applied throughout the life cycle of the product to ensure correct people, correct place, correct product, and correct time deployment. 6. An integrative engineering focus is needed to ensure that relevant processes and the resulting processes fit together seamlessly. 7. A teamwork and communications focus is needed to ensure that all functional and multifunctional teams function synergistically and for the good of the customer and the organization. 8. A people empowerment focus is needed so that all decisions are made by qualified people at the lowest possible level consistent with authority and responsibility. Empowerment is a responsibility that entails commitment and appropriate resource allocation to support this commitment. 9. A systems management reengineering focus is needed for both revolutionary change and evolutionary changes in processes and product.
10. An organizational culture and leadership focus is needed to accommodate changed perspectives relative to customers, total quality, results and products, processes, employees, and organizational structures. 11. A methods, tools, and techniques focus is needed because methods, tools and techniques are needed throughout all aspects of an IPPD effort, even though they alone do not create success. 12. A systematic measurements focus, primarily on proactive measurements but also on interactive and reactive measurements, is needed because the organization needs to know where to go and where it is now to make progress. All of this should bring about high quality, continual, evolutionary, and perhaps even revolutionary improvement for customer satisfaction. Each of these could be expanded into a series of questions or a checklist used to evaluate the potential effectiveness of a proposed integrated product development process and team. Although this discussion of IPPD makes it look like an approach particularly and perhaps even uniquely suitable for system acquisition, production, or procurement, it is equally applicable to the products of the RDT&E and marketing life cycles. IPPD is a people, organizational, and technologically focused effort that is tightly linked together through a number of life cycle processes through systems management. These are major ingredients for all systems engineering and management efforts, as suggested in the information ecology (29) web of Fig. 5. The major result of IPPD is the ability to make optimum decisions with available resources and to execute them efficiently and effectively to achieve three causally linked objectives: 1. To integrate people, organizations, and technology into a set of multifunctional and networked product development teams. 2. To increase the quality and timeliness of decisions through centrally controlled, decentralized, and networked operations. 3. To satisfy customers completely through quality products and services that fulfill their expectations and meet their needs. The bottom line is clearly customer satisfaction through quality, short product delivery time, reduced cost, improved performance, and increased capabilities. Equally supported by IPPD are organizational objectives for enhanced profit, well-being of management, and a decisive and clear focus on risk and risk management and amelioration. Figure 6 is a suggested sequence of steps and phases to establish an integrated product and process development endeavor. The approach is not entirely different from that suggested for successful product and process development by Bowen et al. (30). 1. Understand the core capabilities and core rigidities of the organization.
2. Develop a guiding vision in terms of the product or service, the project and process, and the organization that ensures and understands the relationships between organization, customers, and process and product. 3. Push the frontiers of the organization, process, and product or service to identify and achieve the ultimate performance capabilities for each. 4. Develop leadership and an appropriate structure to manage the resulting process and product or service engineering. 5. Develop commitment at the level of organizational management, the integrated product team (IPT), and the individual team members to ensure appropriate ownership of the IPPD effort. 6. Use prototypes to achieve rapid learning and early evolution and testing of the IPPD concept. 7. Ensure integration of people, organizations, and technologies to attain success of the IPPD concept. As with other efforts, this embodies the definition, development, and deployment triage that is the simplest representation of a generic systems engineering life cycle. IPPD is a relatively new concept used very often within the US Department of Defense (31, 32). In the latter document (32) ten key tenets of IPPD are identified. 1. Customer focus. The primary objective of IPPD is to satisfy the needs of the customer more efficiently and effectively. Customer needs are the major determining influence on the product or service definition and the associated product lines. 2. Concurrent development of products and processes. It is necessary to develop processes concurrently with the products or services that they support. 3. Early and continuous life cycle planning. Planning for both the product or service and process begins early and extends throughout the IPPD life cycle. 4. Maximize flexibility for optimization and use of contractor-unique approaches. Requests for proposals should provide flexibility for optimizing and using contractor-unique processes and commercial specifications, standards, and practices. 5. Encourage robust design and improved process capability. Advanced robust design and manufacturing techniques that promote total quality and continuous process improvement should be emphasized. 6. Event-driven scheduling. The scheduling framework should relate program events to their desired accomplishments and should reduce risk by ensuring product and process maturity before actual development is undertaken. 7. Multidisciplinary teamwork. Multidisciplinary teamwork is essential to the integrated and concurrent development of product and process. 8. Empowerment. Decisions should be taken at the lowest level commensurate with appropriate risk management, and resources should be allocated at levels
Figure 5. Information ecology web.
Figure 6. A simplified process to implement IPPD.
consistent with authority, responsibility, and ability. Teams should be given authority, responsibility, and resources. They should accept responsibility, manage risk appropriately, and be held accountable for the results. 9. Seamless management tools. A single management system should be established to relate requirements, planning, resource allocation, execution, and program tracking over the entire life cycle. 10. Proactive identification and management of risk. Critical cost, schedule, and technical specifications should be identified from user requirements. Systems management of risk, using appropriate metrics, should be established to provide continuing verification of achievements relative to appropriate product and process standards. The objectives in this are to reduce time to deliver operationally functional products and services, to reduce the
costs and risks of deploying systems, and to improve their quality. Redevelopment of processes only, without attention to reengineering at a level higher than processes, may represent an incomplete and not fully satisfactory way to improve organizational capabilities if they are otherwise deficient. Thus, the processes considered as candidates for reengineering should be high-level managerial ones as well as operational processes.
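As a hypothetical sketch of how an organization might shortlist both managerial and operational processes as reengineering candidates, the process names, criteria, and weights below are invented for illustration and are not part of the cited approaches:

```python
# Hypothetical candidate processes scored 1-5 on three criteria.
candidates = {
    "strategic planning (managerial)": {"dysfunction": 4, "strategic value": 5, "change cost": 3},
    "risk management (managerial)":    {"dysfunction": 3, "strategic value": 4, "change cost": 2},
    "order fulfillment (operational)": {"dysfunction": 5, "strategic value": 4, "change cost": 4},
    "field maintenance (operational)": {"dysfunction": 2, "strategic value": 3, "change cost": 2},
}
# High dysfunction and strategic value raise priority; high change cost lowers it.
weights = {"dysfunction": 0.4, "strategic value": 0.4, "change cost": -0.2}

def priority(scores):
    """Weighted sum of the criterion scores for one candidate process."""
    return sum(weights[criterion] * value for criterion, value in scores.items())

ranked = sorted(candidates.items(), key=lambda item: priority(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:35s} priority = {priority(scores):.2f}")
```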
REENGINEERING AT THE LEVEL OF SYSTEMS MANAGEMENT
Reengineering at the level of systems management is directed at potential change in all business or organizational processes, and thereby also in the various organizational life cycle processes. Many authors have discussed reengineering the corporation. The earliest use of the term business reengineering was by Hammer (33), and the concept is more fully documented in
a more recent work, Reengineering the Corporation (34). Hammer's definition of reengineering, "the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service and speed," is a definition of what we will call reengineering at the level of systems management. There are four major terms in this definition:
Fundamental refers to a large-scale and broad examination of virtually everything about an organization and how it operates. The purpose is to identify weaknesses that need diagnosis and correction.
Radical redesign suggests disregarding existing organizational processes and structures and inventing totally new ways of accomplishing work.
Dramatic improvements suggests that, in Hammer's view, reengineering is not about making marginal and incremental improvements in the status quo. It is about making quantum leaps in organizational performance.
Processes represent the collection of activities used to take input materials, including intellectual inputs, and transform them into outputs and services of value to the customer.
Hammer suggests that reengineering and revolution are almost synonymous. He identifies three types of firms that attempt reengineering: those in trouble, those that see trouble coming, and those that are ambitious and seek to avoid impending troubles. Clearly, it is better to be proactive and be in this last category, rather than to be reactive and seek to emerge from a crisis. He indicates that one major catalyst for reengineering is the creative use of information technology. Reengineering is not just automation, however. It is an ambitious and rule-breaking study of everything about the organization to effect designing, more effective and efficient organizational processes. We share this view of reengineering at the level of systems management. Our definition is similar: Systems management reengineering is the examination, study, capture, and modification of the internal mechanisms or capability of existing system management processes and practices in an organization to reconstitute them in a new form with new features, often to take advantage of newly emerged organizational competitive requirements but without changing the inherent purpose of the organization itself. Figure 7 represents this concept. Life cycle process reengineering occurs as a natural by-product of reengineering at the level of systems management. This may or may not result in reengineering existing products. Generally it does. New products and new competitive strategies are each major underlying objectives of reengineering at the level of systems management, or organizational reengineering as it is more commonly called. The work by Hammer and Champy (34) defines the forces of the three Cs,
Customers, who demand customized products and services that are of high quality and trustworthy.
Competition, which has intensified on a global scale in almost all areas.
Change, which now becomes continuous. These combine to require massive, discrete-level transformations in the way organizations do business. Radical and dramatic reengineering of fundamental organizational strategy and of all organizational processes is suggested as the only path to change for many organizations. The authors are much concerned with organizational processes that have several common characteristics. Our interpretation of these is as follows: 1. The steps and phases in the process are sequenced logically in terms of earlier phase results needed for later activities. The phases are not necessarily linear. They are sequenced concurrently whenever possible to obtaining results in minimum time. 2. The various business processes are integrated throughout the organization, and often a number of formerly distinct efforts are combined to produce savings in costs and increase effectiveness. 3. Multiple versions of many processes make mass customization possible. 4. Work is shifted across organizational boundaries to include potential outsourcing and is performed in the most appropriate setting. 5. Decision making efforts become part of the normal work environment, and work is compressed both horizontally and vertically. 6. Reactive checks, controls, and measurements are reduced in frequency and importance in favor of greater use of interactive and proactive approaches. 7. There is always a point of contact, or case manager, empowered to provide service to each individual customer, and a customer need never go beyond this point of contact. 8. Organizational operations are a hybrid of centralized and decentralized structures best suited to the particular task at hand. It is claimed that several benefits result from this. Work units change from functional departments to multifunctional, process-oriented teams. Now performers of simple tasks accomplish multidimensional work. People become empowered rather than controlled. The major needed job preparation changes, and it becomes education rather than training. The focus of measures and performance shifts to results rather than activities. Promotion or transfer to a new organizational assignment is based on ability for the new assignment and not performance in a former assignment. Values change from reactive and protective to proactive and productive. Managers become coaches as well as supervisors, executives become leaders and not just scorekeepers, and organizational structures shift away from the hierarchical to the flat. Information technology is represented as a major enabler of all of this.
Figure 7. Conceptual illustration of reengineering at the level of systems management.
In Reengineering the Corporation, Hammer and Champy (34) describe a revolution in the way that organizations in the US and other developed nations generally accomplish work. This first book defined reengineering and its process components. It also suggested how jobs differ in the reengineered organization. Hammer and Stanton (35) have written The Reengineering Revolution: A Handbook, Champy (36) has authored Reengineering Management: The Mandate for New Leadership, and Hammer (37) has authored Beyond Reengineering: How the Process Centered Organization Is Changing Our Work and Our Lives. Each of these works extends the original efforts of Hammer and his colleagues. All three works address the potential difficulties in implementing organizational change. Each acknowledges reengineering failures and presents strategies to overcome them. The experience of many suggest that a radical reengineering effort, or process innovation, is not always successful. The major difficulty is failure to cope with the impact of reengineering on people and their potential resistance to change. Other potential difficulties are inadequate team building and the failure of senior management to appropriately convey the need for change and to be fully aware of the human element. Hammer and Stanton’s book (35) is a handbook of techniques and practical advice, and Champy (36) focuses on management as the single critical influence of reengineering success. His focus is on reengineering processes through innovation and on reengineering at the level of systems management. The major thrusts of the Hammer and Stanton work (35) is that only senior-level managers have the breadth of perspective, knowledge, and authority required to oversee the effort from beginning to end and to overcome the resistance that occurs along the way. Senior managers must make decisions to reengineer and then create a supportive
environment that results in transforming organizational culture. The reengineering team is also a major ingredient in success or failure. This team accomplishes the following:
develops an understanding of the existing process and customer requirements to provide a definition of the reengineering requirements. identifies new process architectures and undertakes development of the new processes. provides for deployment of the process and the new way of doing work. The environment of reengineering is one of uncertainty, experimentation, and pressure, and, based on these characteristics, the essential characteristics for the success of a reengineering team are identified. These include a process orientation, creativity, enthusiasm, persistence, communication skills, diplomacy, holistic perspective, and teamwork. Dealing with the human element in an organization disoriented by the immense changes brought about by reengineering is important, and several strategies are suggested. Resistance to change is acknowledged as natural and inevitable. The imposition of a new process on people who have become attached to a familiar process creates natural resistance unless a five-step process for implanting new values is adopted. 1. Articulate and communicate the new values effectively. 2. Demonstrate commitment of the organizational leadership to the new values. 3. Hold to these values consistently.
4. Ensure that the desired values are designed into the process.
5. Measure and reward the values that the organization wants to install.
Thus, the advocacy here is centered on customers and on the end-to-end processes that create value for them. By adhering to these principles, the organization should operate with high quality, tremendous flexibility, low cost, and exceptional speed. Champy (36) also examines the successes and failures of contemporary process reengineering innovations. He suggests that the failure of management to change appropriately is the greatest threat to successful reengineering efforts and that managers must change the way they work if they hope to realize the full benefits of reengineering. In other words, reengineering of the lower-level work details is the focus of many contemporary efforts. However, reengineering of management itself is at least as significant, and this has not yet been explored sufficiently. Such exploration and subsequent action are the major objectives in reengineering at the level of systems management. The intent here, as in Reengineering Management, is to identify concepts and methods that organizational administrators, managers, and leaders may use to reengineer their own executive functions for enhanced efficiency and effectiveness. Champy begins with the impact of reengineering on managers and suggests that the greatest fear of executives is loss of control. The role of executives in a knowledge-based society is not to command or manipulate but to share information, educate, and empower. They must have faith in human beings and their ability, if led properly, to do a better job for the customer. This is called existential authority. To bring this about requires a change in purpose, culture, processes, and attitudes toward people. Champy suggests that managers must focus on the answers to four questions to enable these changes:
1. What is the purpose of the organization?
2. What kind of organizational culture is desired?
3. How does the organization go about its work?
4. What are the appropriate kinds of people for the organization?
He suggests that management processes provide support for management reengineering and defines new core management processes for the reengineered executive. As a consequence of reengineering at the level of systems management, hierarchies are flattened. Culture rather than structure is more of a determinant of the way the organization runs. A major need is for managers to organize high-performance, cross-functional teams around the needs of changing product lines or processes. Profit and principle must be congruent. Five core management processes are identified. Each of them potentially needs to be reengineered to harmonize with the core capabilities and mission of the organization. 1. Mobilizing is the process through which an organization, including its human element, is led to accept
the changes brought about by reengineering.
2. Enabling, or empowering, involves redesigning work so that humans can use their capabilities as much as possible; it must also foster a culture that motivates people to behave the way the organization needs them to behave.
3. Defining is the process of leadership through continual experimentation and empirical efforts. This includes the development of experiential learning from these efforts and learning to act on what is learned from them.
4. Measuring is focused on identifying important process results, or metrics, that accurately evaluate organizational performance.
5. Communicating involves continually making the case for changes that lead to organizational improvement and being concerned with the "what" and the "how" and also with the impacts of actions on employee lives. As suggested by many, managers are now coaches and must provide the tools needed to accomplish tasks, remove obstacles hindering team performance, and challenge imaginations by sharing information. This relates strongly to empowerment and to trust building, which is a goal of communications.
This “people-focused management” requires “deep generalists,” who respond to changing work demands, changing market opportunities, evolving and changing products and services, and changing demands of customers. Of course, it is necessary that these generalists also bring deep expertise in a specialty area and well-established skills to the organization.
PERSPECTIVES ON REENGINEERING This discussion of reengineering suggests that reengineering can be considered at three levels: systems management, life cycle processes, and product. These three levels, when associated with appropriate methods, tools, and metrics, constitute a relatively complete conceptual picture of systems engineering efforts, as shown in Fig. 8. The major purpose of reengineering at any level is to enable an organization to produce a better product at the same or lower cost that performs comparably to the initial product. Thus, reengineering improves the competitiveness of the organization in coping with changing external situations and environments. An organization may approach reengineering at any or all of these levels from any of three perspectives:
reactive because the organization realizes that it is in trouble and perhaps in a crisis, and reengineering is one way to bring about needed change; interactive because it wishes to stay abreast of current changes as they evolve; or proactive because it wishes to position itself now for changes that it believes will occur in the future and to emerge in the changed situation as a market leader.
Figure 8. Conceptual model of systems engineering morphology.
Reengineering could also be approached from an inactive perspective, although this amounts to not considering it at all and is likely to lead to a failure to adapt to changed conditions and requirements.

Reengineering at any level (product, process, or systems management) is related to reengineering at the other two levels. Reengineering can be viewed from the perspective of the organization fielding a product as well as from the perspective of the customer, individual or organizational, receiving the product. From either perspective, reengineering at the level of product only may not be fully meaningful if it is not also associated with, and generally driven by, reengineering at the levels of process and systems management. For an organization to reengineer a product when it is in need of reengineering at the systems management or process levels is almost a guarantee that the reengineered product will not be fully trustworthy and cost-efficient. An organization that contracts for product reengineering when it is in need of reengineering at the levels of systems management and/or process is asking for a technological fix and a symptomatic cure for difficulties that are institutional and value related. Such solutions are not really solutions at all.

There are therefore potential needs for integrated reengineering at the levels of product, process, and systems management. Product reengineering may consume significant resources; the combined resources needed for systems management and process reengineering can also be substantial. Resources expended on product reengineering only, with no investigation of needs at the systems management and process levels, may not be wise expenditures from the perspective of either the organization producing the product or the one consuming it.

In an insightful article, Venkatraman (38) identifies five levels for organizational transformation through information technology. We can expand on this slightly through adoption of the three levels for reengineering we have described here and obtain the representation shown in Fig. 9.

Figure 9. Representation of improvements at the level of products, processes, and systems management through reengineering.

This figure shows our representation of these five levels: two for organizational reengineering, two for product reengineering, and one for process reengineering. Organizational reengineering is generally revolutionary and radical, whereas product reengineering is usually evolutionary and incremental. Process reengineering may be at either of these extremes. Venkatraman notes technological and organizational enablers and inhibitors that affect desired transformations at both the evolutionary and revolutionary levels. Technological enablers include increasingly favorable trends in cost-effectiveness for various information technologies and possibilities of enhanced connectivity. Technological inhibitors include the lack of currently established, universally accepted standards and the rapid obsolescence of current technologies. Organizational enablers
include managerial awareness of the need for change in existing leadership. Organizational inhibitors include financial limitations and managerial resistance to change. Although both product reengineering and organizational reengineering ultimately lead to change in organizational processes, changes for the purpose of producing a product with greater cost-effectiveness, quality, and customer satisfaction generally differ from and are more limited than those for improving internal responsiveness to the satisfaction of present and future customer expectations. Organizational network and organizational scope are at the highest level here, because efforts at these levels are of much concern relative to information technology and associated knowledge management efforts today, including contemporary efforts involving systems integration and architecting (39). Effective management pays particular attention to technology and to the human elements in the organizations and the environments in which they are embedded. In a recent work (40), eight practices of exceptional companies are described:
balanced value fixation
commitment to a core strategy
culture-system linkage
massive two-way communication
partnering with stakeholders
functional collaboration
innovation and risk
never being satisfied
and guidelines are presented that enable the enduring human asset management practices that make efforts such as reengineering long-term successes. The human element is a major part of reengineering because the purpose of technology is to support human endeavors. Reengineering efforts continue to this day, although the name is sometimes changed to better reflect contemporary issues. One current phrase, which is in many ways a reengineering of the term reengineering itself, is enterprise transformation (41). This does not in any sense suggest that such terms are merely contemporary buzzwords; they reflect a needed critical awareness that the vast majority of change, if it is to be meaningful, must truly start from and consider issues at the highest, or enterprise, level.

BIBLIOGRAPHY

1. A. P. Sage, Systems Management for Information Technology and Software Engineering, New York: Wiley, 1995.
2. A. P. Sage, Systems reengineering, in A. P. Sage and W. B. Rouse (eds.), Handbook of Systems Engineering and Management, New York: Wiley, 1999 (2nd ed. in press, 2007).
3. M. G. Rekoff, Jr., On reverse engineering, IEEE Trans. Syst. Man Cybern., SMC-15: 244–252, 1985.
4. Software Engineering Glossary, IEEE Software Engineering Standards, New York: IEEE Press, 1991.
5. IEEE Standard for Software Maintenance, P1219/D14, New York: IEEE Standards Department, 1992.
6. E. Chikofsky and J. H. Cross, Reverse engineering and design recovery: A taxonomy, IEEE Software, 7 (1): 13–17, 1990.
7. J. H. Cross II, E. J. Chikofsky, and C. H. May, Jr., Reverse engineering, in M. C. Yovitz (ed.), Advances in Computers, Vol. 35, San Diego, CA: Academic Press, 1992, pp. 199–254.
8. R. S. Arnold (ed.), Software Reengineering, Los Altos, CA: IEEE Computer Society Press, 1993.
9. H. M. Sneed, Planning the reengineering of legacy systems, IEEE Software, 12 (1): 24–34, 1995.
10. A. P. Sage, Systems Engineering, New York: Wiley, 1992.
11. H. M. Sneed, Planning the reengineering of legacy systems, IEEE Software, 12 (1): 24–34, 1995.
12. W. M. Ulrich, Re-engineering: Defining an integrated migration framework, in R. S. Arnold (ed.), Software Reengineering, Los Altos, CA: IEEE Computer Society Press, 1993, pp. 108–118.
13. M. R. Olsem, Preparing to reengineer, IEEE Comput. Soc. Reverse Eng. Newsl., December 1993, pp. 1–3.
14. H. A. Muller, Understanding software systems using reverse engineering technologies: Research and practice, Proc. Int. Conf. Software Eng., Berlin, Germany, March 1996.
15. P. Samuelson, Reverse engineering someone else's software: Is it legal?, IEEE Software, 7 (1): 90–96, 1990.
16. V. Sibor, Interpreting reverse engineering law, IEEE Software, 7 (4): 4–10, 1990.
17. E. M. Hall, Managing Risk: Methods for Software Systems Development, Reading, MA: Addison-Wesley, 1998.
18. J. D. Ahrens and N. S. Prywes, Transition to a legacy and reuse based software life cycle, IEEE Comput., 28 (10): 27–36, 1995.
19. R. Wheeler, R. W. Burnett, and A. Rosenblatt, Concurrent engineering, IEEE Spectrum, 28 (7): 32–37, 1991.
20. A. Kusiak (ed.), Concurrent Engineering: Automation, Tools, and Techniques, New York: Wiley, 1993.
21. B. Prasad, Concurrent Engineering Fundamentals: Integrated Product and Process Organization, Englewood Cliffs, NJ: Prentice-Hall, 1996.
22. B. Prasad, Concurrent Engineering Fundamentals: Integrated Product Development, Englewood Cliffs, NJ: Prentice-Hall, 1996.
23. C. C. Andrews and N. S. Leventhal, FUSION-Integrating IE, CASE, and JAD: A Handbook for Reengineering the Systems Organization, Englewood Cliffs, NJ: Prentice-Hall, 1993.
24. K. Kronlof (ed.), Method Integration: Concepts and Case Studies, Chichester, UK: Wiley, 1993.
25. D. Schefstrom and G. van den Broek (eds.), Tool Integration: Environments and Frameworks, Chichester, UK: Wiley, 1993.
26. A. P. Sage and J. D. Palmer, Software Systems Engineering, New York: Wiley, 1990.
27. D. E. Carter and B. S. Baker, Concurrent Engineering: The Product Development Environment for the 1990s, Reading, MA: Addison-Wesley, 1992.
28. J. Fiksel, Computer-aided requirements management for environmental excellence, Proc. Natl. Council Syst. Eng. Annu. Meet., Alexandria, VA, July 1993, pp. 251–258.
29. T. H. Davenport, Information Ecology: Mastering the Information and Knowledge Environment, New York: Oxford Univ. Press, 1997.
30. H. K. Bowen et al. (eds.), The Perpetual Enterprise Machine: Seven Keys to Corporate Renewal Through Successful Product and Process Development, New York: Oxford Univ. Press, 1994.
31. Air Force Material Command, Guide to Integrated Product Development, May 25, 1993.
32. DoD Guide to Integrated Product and Process Development, Office of the Undersecretary of Defense for Acquisition and Technology, February 1996.
33. M. Hammer, Reengineering work: Don't automate, obliterate, Harvard Bus. Rev., 68 (4): 104–112, 1990.
34. M. Hammer and J. Champy, Reengineering the Corporation: A Manifesto for Business Revolution, New York: Harper Business, 2004.
35. M. Hammer and S. Stanton, The Reengineering Revolution, New York: Harper Business, 1995.
36. J. Champy, Reengineering Management: The Mandate for New Leadership, New York: Harper Collins, 1995.
37. M. Hammer, Beyond Reengineering: How the Process-Centered Organization is Changing Our Work and Our Lives, New York: Harper Business, 1996.
38. N. Venkatraman, IT-enabled business transformation: From automation to business scope redefinition, Sloan Manage. Rev., 35 (2): 73–88, 1994.
39. A. P. Sage and C. L. Lynch, Systems integration and architecting: An overview of principles, practices, and perspectives, Syst. Eng., 1 (3), 1998.
40. J. Fitz-Enz, The 8 Practices of Exceptional Companies: How Great Organizations Make the Most of Their Human Assets, New York: AMACOM, 2005.
41. W. B. Rouse (ed.), Enterprise Transformation: Understanding and Enabling Fundamental Change, Hoboken, NJ: Wiley, 2006.
ANDREW P. SAGE
George Mason University, Fairfax, VA