Advances in COMPUTERS VOLUME 9
Contributors to This Volume
PAUL W. ABRAHAMS   A. S. BUCHMAN   AVIEZRI S. FRAENKEL   L. J. KOCZELA   JOHN MCLEOD   W. J. POPPELBAUM   L. M. SPANDORFER
Advances in COMPUTERS
EDITED BY
FRANZ L. ALT, American Institute of Physics, New York, New York
AND
MORRIS RUBINOFF, University of Pennsylvania and Pennsylvania Research Associates, Inc., Philadelphia, Pennsylvania
VOLUME 9
ACADEMIC PRESS, New York and London, 1968
COPYRIGHT © 1968, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS, INC., 111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD., Berkeley Square House, London W.1
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 59-15761
Second Printing, 1972
PRINTED IN THE UNITED STATES OF AMERICA
Contributors to Volume 9
Numbers in parentheses indicate the pages on which the authors' contributions begin.

PAUL W. ABRAHAMS, Courant Institute of Mathematical Sciences, New York University, New York, New York (51)
A. S. BUCHMAN, Federal Systems Division, International Business Machines Corporation, Electronics Systems Center, Owego, New York (239)
AVIEZRI S. FRAENKEL, Department of Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel, and Department of Mathematics, Bar Ilan University, Ramat Gan, Israel (113)
L. J. KOCZELA, Autonetics Division, North American Rockwell Corporation, Anaheim, California (285)
JOHN MCLEOD, Simulation Councils, Inc., La Jolla, California (23)
W. J. POPPELBAUM, Department of Computer Science, University of Illinois, Urbana, Illinois (1)
L. M. SPANDORFER, Sperry Rand Corporation, UNIVAC Division, Philadelphia, Pennsylvania (179)
Preface
There is a seductive fascination in software. There is a pride of accomplishment in getting a program to run on a digital computer, a feeling of the mastery of man over machine. And there is a wealth of human pleasure in such mental exercises as the manipulation of symbols, the invention of new languages for new fields of problem solving, the derivation of new algorithms and techniques, the allocation of memory between core and disk - mental exercises that bear a strong resemblance to such popular pastimes as fitting together the pieces of a jigsaw puzzle, filling out a crossword puzzle, or solving a mathematical brainteaser of widely familiar variety. And what is more, digital computer programming is an individual skill, and one which is relatively easy to learn; according to Fred Gruenberger, "the student of computing can be brought to the boundaries of the art very quickly. . . . It is becoming commonplace for to-day's students to write new compilers . . . the beginning student of computing can be involved with real problems very early in his course." [Commun. ACM 8, No. 6, 348 (1965).] All these factors help to account for the currently widespread popularity of programming.

In contrast, the design of circuits and the production of computer hardware is a relatively mundane occupation, calling for years of education, fraught with perils and hazards, and generally demanding the joint efforts of a large team with a considerable diversity of skills. Similarly, the use of analog computers is considered equally mundane, even though analog machines are often faster and more versatile than digital computers in solving many important problems in engineering, management, biomedicine, and other fields. The more romantic appeal of software has led most of our universities and schools to overemphasize the computer and information sciences, to the relative neglect of computer engineering, and the almost complete disappearance of analog and hybrid devices, systems, and even course material from computer curricula. [See for example J. Eng. Educ. 58, 931-938 (1968).]

Future advances in computer systems are at least as dependent on the full range of analog, digital, and hybrid hardware and techniques as they are on digital software. Future advances in computer applications demand a broader based familiarity and more comprehensive skill with the full complement of tools at our disposal and not just an ability to write isolated digital computer software. In keeping with our past philosophy and practice, the current volume of "Advances in Computers" continues to recognize
this broader based need and again presents a highly diversified menu of articles on analog and digital hardware, software, and applications, each with its own copious bibliography.

There are two articles on computer hardware. The opening article by W. J. Poppelbaum describes a number of new devices based upon recently discovered technological effects. These include new computer organizations based upon arrays of computers and stochastic machines, new memory devices using optical holographs and ratchet storage cells, and new circuit techniques that stem from stochastic signal representation. L. M. Spandorfer discusses recent developments in producing integrated circuits whereby a plurality of circuits are mounted on an almost microscopically small silicon chip. The many implications for low cost, low power computers and memories of miniature size are investigated in considerable detail, and the various competing integrated circuit approaches are evaluated for relative reliability and applicability to computers.

There are two articles on software. In his article on symbol manipulation languages, Paul W. Abrahams defines symbol manipulation as a branch of computing concerned with the manipulation of unpredictably structured data. He then traces the development of such languages from IPL-V through LISP, L6, PL/I, SLIP, SNOBOL, COMIT, and EOL, with comments on their relative capabilities, advantages, and disadvantages. In his article on legal document retrieval, Aviezri S. Fraenkel examines the pros and cons of a number of retrieval systems, both with and without indexing, and evaluates them in light of his general conclusion: that present-day computers are potentially very efficient for problems which can be formulated well, but they are rather clumsy in heuristic problem solving. Legal literature searching still falls largely in the latter category, but the computer can provide certain types of assistance.

There are two articles on computer systems, both oriented to aerospace computation. L. J. Koczela presents the results of a research study of advanced multiprocessor organizational concepts for future space mission applications, with particular emphasis on a "distributed processor" organization that provides a very high tolerance of failures. A. S. Buchman reviews the special-purpose computers that have been used in space vehicles to date and looks forward to the future systems that must be designed to meet ever-growing space needs for performance and high availability.

And last but not least, there is an article on hybrid computers and their role in simulation. Because of his own background and experience, John McLeod writes his article in the setting of physiological simulation, but his comments on simulation attributes, complexity, adaptability, noise, hardware, and software are equally relevant to all areas of hybrid computer application.

December, 1968
MORRIS RUBINOFF
Contents

CONTRIBUTORS . . . v
PREFACE . . . vii
CONTENTS OF PREVIOUS VOLUMES . . . xii
What Next in Computer Technology?
W. J. Poppelbaum

1. Plan for Projections . . . 1
2. Limits on Size and Speeds of Devices, Circuits, and Systems . . . 2
3. New Devices . . . 6
4. New Circuits . . . 8
5. New Memories . . . 12
6. New Systems . . . 17
References . . . 20
Advances in Simulation
John McLeod

1. Introduction . . . 23
2. A Look Back . . . 24
3. Progress . . . 26
4. The Best Tool . . . 30
5. Physiological Simulation . . . 33
6. A Look Ahead . . . 45
References . . . 49
Symbol Manipulation Languages
Paul W. Abrahams

1. What Is Symbol Manipulation? . . . 51
2. LISP 2 . . . 57
3. LISP 1.5 . . . 69
4. L6 . . . 74
5. PL/I String and List Processing . . . 78
6. SLIP . . . 84
7. SNOBOL . . . 92
8. Other Symbol Manipulation Languages . . . 101
9. Concluding Remarks . . . 109
References . . . 110
Legal Information Retrieval
Aviezri S. Fraenkel

1. Problems and Concepts . . . 114
2. Retrieval with Indexing . . . 121
3. Retrieval without Indexing . . . 128
4. Projects . . . 150
Appendix I . . . 158
Appendix II . . . 161
Appendix III . . . 163
References . . . 172
Large Scale Integration - an Appraisal
L. M. Spandorfer

1. Introduction . . . 179
2. Device Fabrication . . . 180
3. Packaging . . . 184
4. Economic Considerations . . . 190
5. Interconnection Strategies . . . 194
6. Bipolar Circuits . . . 205
7. MOS Circuits . . . 213
8. LSI Memories . . . 218
9. Further System Implications . . . 231
References . . . 234
Aerospace Computers
A. S. Buchman

1. Introduction . . . 239
2. Application Requirements . . . 241
3. Technologies . . . 262
4. Current State-of-the-Art . . . 269
5. Aerospace Computers of the Future . . . 274
References . . . 283
The Distributed Processor Organization
L. J. Koczela

1. Introduction . . . 286
2. Parallelism . . . 289
3. Development of the Distributed Processor Organization . . . 295
4. Architecture . . . 301
5. Failure Detection and Reconfiguration . . . 326
6. Cell and Group Switch Design . . . 338
7. Communication Buses . . . 346
8. Software Analysis . . . 349
References . . . 353

AUTHOR INDEX . . . 355
SUBJECT INDEX . . . 360
Contents of Previous Volumes

Volume 1
General-Purpose Programming for Business Applications, CALVIN C. GOTLIEB
Numerical Weather Prediction, NORMAN A. PHILLIPS
The Present Status of Automatic Translation of Languages, YEHOSHUA BAR-HILLEL
Programming Computers to Play Games, ARTHUR L. SAMUEL
Machine Recognition of Spoken Words, RICHARD FATEHCHAND
Binary Arithmetic, GEORGE W. REITWIESNER

Volume 2
A Survey of Numerical Methods for Parabolic Differential Equations, JIM DOUGLAS, JR.
Advances in Orthonormalizing Computation, PHILIP J. DAVIS AND PHILIP RABINOWITZ
Microelectronics Using Electron-Beam-Activated Machining Techniques, KENNETH R. SHOULDERS
Recent Developments in Linear Programming, SAUL I. GASS
The Theory of Automata, a Survey, ROBERT MCNAUGHTON

Volume 3
The Computation of Satellite Orbit Trajectories, SAMUEL D. CONTE
Multiprogramming, E. F. CODD
Recent Developments in Nonlinear Programming, PHILIP WOLFE
Alternating Direction Implicit Methods, GARRETT BIRKHOFF, RICHARD S. VARGA, AND DAVID YOUNG
Combined Analog-Digital Techniques in Simulation, HAROLD K. SKRAMSTAD
Information Technology and the Law, REED C. LAWLOR

Volume 4
The Formulation of Data Processing Problems for Computers, WILLIAM C. MCGEE
All-Magnetic Circuit Techniques, DAVID R. BENNION AND HEWITT D. CRANE
Computer Education, HOWARD E. TOMPKINS
Digital Fluid Logic Elements, H. H. GLAETTLI
Multiple Computer Systems, WILLIAM A. CURTIN

Volume 5
The Role of Computers in Election Night Broadcasting, JACK MOSHMAN
Some Results of Research on Automatic Programming in Eastern Europe, WLADYSLAW TURSKI
A Discussion of Artificial Intelligence and Self-Organization, GORDON PASK
Automatic Optical Design, ORESTES N. STAVROUDIS
Computing Problems and Methods in X-Ray Crystallography, CHARLES L. COULTER
Digital Computers in Nuclear Reactor Design, ELIZABETH CUTHILL
An Introduction to Procedure-Oriented Languages, HARRY D. HUSKEY

Volume 6
Information Retrieval, CLAUDE E. WALSTON
Speculations Concerning the First Ultraintelligent Machine, IRVING JOHN GOOD
Digital Training Devices, CHARLES R. WICKMAN
Number Systems and Arithmetic, HARVEY L. GARNER
Considerations on Man versus Machine for Space Probing, P. L. BARGELLINI
Data Collection and Reduction for Nuclear Particle Trace Detectors, HERBERT GELERNTER

Volume 7
Highly Parallel Information Processing Systems, JOHN C. MURTHA
Programming Language Processors, RUTH M. DAVIS
The Man-Machine Combination for Computer-Assisted Copy Editing, WAYNE A. DANIELSON
Computer-Aided Typesetting, WILLIAM R. BOZMAN
Programming Languages for Computational Linguistics, ARNOLD C. SATTERTHWAIT
Computer Driven Displays and Their Use in Man/Machine Interaction, ANDRIES VAN DAM

Volume 8
Time-shared Computer Systems, THOMAS S. PYKE, JR.
Formula Manipulation by Computer, JEAN E. SAMMET
Standards for Computers and Information Processing, T. B. STEEL, JR.
Syntactic Analysis of Natural Language, NAOMI SAGER
Programming Languages and Computers: A Unified Metatheory, R. NARASIMHAN
Incremental Computation, LIONELLO A. LOMBARDI
What Next in Computer Technology?
W. J. POPPELBAUM
Department of Computer Science, University of Illinois, Urbana, Illinois
1. Plan for Projections . . . 1
2. Limits on Size and Speeds for Devices, Circuits, and Systems . . . 2
3. New Devices . . . 6
4. New Circuits . . . 8
5. New Memories . . . 12
6. New Systems . . . 17
References . . . 20
1. Plan for Projections
Before setting out on our tour of extrapolated hardware technology, it is perhaps useful to lay down and discuss the principles of such an operation. All too often the thin line between reasonable guessing and wild science fiction is transgressed because of ignorance of certain fundamental principles of physics which set a severe limitation to the attainable performance figures. Section 2 will therefore discuss the physical limitations on size and speed of processing elements and systems. Section 3 will examine devices of novel configuration which have already seen prototype realization. All of them use phenomena which are well known but have not been utilized. We shall not describe devices based on recently discovered effects such as the Gunn negative differential mobility effect [3, 4]; such devices are obviously one order removed from the linear extrapolation which is the basis of our forecast. In Section 4 we will examine circuit techniques that have emerged in the area of hybrid analog-digital processing and stochastic signal representation. Again these techniques have proved themselves on a laboratory scale but are only on the point of being used in practical applications. Section 5 examines the outlook in the memory field. Here it is not a question of discussing whether or not semiconductor memories will become preponderant, but when this type of memory will become more attractive than the magnetic kind. But above and beyond semiconductor
memories we shall consider such newcomers as high-density optical stores operating in the holographic mode and ratchet storage cells for analog variables. The final section will be devoted to some non-von Neumann machine organizations, ranging from array computers to stochastic machines and intelligent display devices. The relative conservatism of the devices, circuits, and systems under discussion should be attributed not to an unwillingness to consider far-out ideas, but rather taken as proof that one can remain with one's feet on the ground and yet discuss techniques which were virtually unknown a few years ago.

2. Limits on Size and Speeds of Devices, Circuits, and Systems
We shall first discuss the ultimate limitations of speed and size set by the atomic nature of our storage media. To this end we are going to use rather approximate physics. Figure 1 shows a cube containing n³ atoms used as a model of a flip-flop. The two states might, for instance, correspond to all the spins being straight up or straight down. Let us call a the atomic spacing, i.e., the lattice constant. The thermal energy inside this little cube at temperature T is approximately n³kT, where k is Boltzmann's constant. This thermal energy could be considered a sort of noise energy. When we want to store information inside the cube, we must inject into it or take out of it energies which are at least of the order of this noise energy. Let us suppose that in order to inject or subtract this energy, ΔE = n³kT, we have available a time Δt. Quantum mechanics tells us that the product ΔE · Δt cannot be made smaller than h/2π, where h is Planck's constant. Assuming that in the limiting case we have equality, and that Δt = na/c, where c is the speed of light (because the information will have to travel at least to the opposite face of the cube and the speed does not exceed that of light), we can obtain an estimate of n, and this turns out to be approximately 10² atoms. This in turn gives us for Δt about 10⁻¹⁶ sec and for the number of bits stored, B, about 10²¹ per cubic foot. Finally, if we want to keep the dissipation inside this cubic foot to approximately 100 W, the duty cycle for any given bit is 10⁻⁵ accesses per second, i.e., approximately one access every 30 hr. Having thus determined the ultimate limitations in size and speed, it is useful to consider what exactly is gained by microminiaturization. Figure 2 shows time as a function of size. There are three types of times which we have to consider when we look at a system consisting of many devices connected together. First, there are times resulting from delays in the devices. One can characterize them by giving the
density of carriers (holes or electrons) ρ₀ and the distance w that the carriers have to travel across some critical zone, e.g., the base width of a transistor. The average drift speed of the carriers is proportional to ρ₀/w, and consequently the drift time τd is proportional to w²/ρ₀. The second time delay is that necessary to travel from one device to another device at a distance w at approximately the speed of light. The delay caused by this is τt and is given by w/c.
FIG. 1. The ultimate limitations (atomic model of a flip-flop: n layers of atoms; rise time Δt ≈ 10⁻¹⁶ sec; bits stored B ≈ 10²¹ per cu ft; duty cycle ≈ 10⁻⁵ per sec at P ≈ 100 W). The influence of high energy cosmic particles is neglected.
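The arithmetic behind these figures is easy to retrace. The short sketch below is our own illustration, not part of the original text; the lattice constant a ≈ 3 Å and room temperature T = 300 K are assumptions, and n = 100 atoms per edge is the value estimated above. It reproduces the switching time, packing density, and 100-W duty cycle summarized in Fig. 1.

```python
k = 1.381e-23            # Boltzmann's constant, J/K
c = 3.0e8                # speed of light, m/s

a = 3.0e-10              # assumed lattice constant, m (~3 angstroms)
T = 300.0                # assumed temperature, K
n = 100                  # atoms per cube edge, as estimated in the text

cell = n * a                       # edge length of one storage cube, m
dt = cell / c                      # time for a signal to cross the cube, s
noise_energy = n**3 * k * T        # Delta E injected or removed per access, J

cubic_foot = 0.3048**3             # m^3
bits = cubic_foot / cell**3        # storage cells per cubic foot

P = 100.0                          # allowed dissipation, W
duty = P / (bits * noise_energy)   # accesses per second per bit

print(f"rise time   ~ {dt:.0e} s")             # ~1e-16 sec
print(f"bits stored ~ {bits:.0e} per cu ft")   # ~1e21
print(f"duty cycle  ~ {duty:.0e} per s, i.e. one access every "
      f"{1 / duty / 3600:.0f} hr")             # same order as the ~30 hr quoted above
```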
The third time we have to consider is the resistance-capacitance (RC) rise time inside the semiconductor material. Reasonable approximations for both the capacitance and the resistance of a slab of area S and thickness w show that τRC is equal to ρε, where ρ is the resistivity and ε is the dielectric constant. Note now that τd decreases very rapidly as the size w goes down,
while τt only decreases linearly and τRC does not decrease at all. It is therefore entirely unwarranted to assume that microminiaturization will ultimately solve all speed problems: It will be necessary to supplement it with considerable redesign of the circuits themselves.
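The scaling claim can be seen at a glance in the toy comparison below (our own sketch; the proportionality constants are arbitrary and only the ratios matter): shrinking w tenfold cuts the device delay a hundredfold, the transmission delay tenfold, and the RC time not at all.

```python
def delays(w, rho0=1.0, rho=1.0, eps=1.0, c=3.0e8):
    """Relative delays for feature size w (arbitrary units):
    tau_d ~ w**2 / rho0 (device), tau_t = w / c (transmission), tau_rc = rho * eps (RC)."""
    return {"tau_d": w**2 / rho0, "tau_t": w / c, "tau_rc": rho * eps}

before, after = delays(1.0), delays(0.1)
for name in ("tau_d", "tau_t", "tau_rc"):
    print(f"{name} shrinks by a factor of {before[name] / after[name]:g}")
# tau_d shrinks by a factor of 100, tau_t by 10, tau_rc by 1 (not at all)
```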
FIG. 2. Time as a function of size (device times, transmission times, and RC times as functions of w).
At this point, it might be good to remind ourselves that we must supplement the discussion of physical limitations with the criterion of practicability. Figure 3 shows a succession of criteria that a typical device has to meet in order to appear on the market. Among these criteria are some additional physical limitations in the form of due respect to such principles as the conservation of energy and momentum and the finite speed of signal propagation, but beyond these evident limitations we have the group which could be called the technology group. Here we ask such questions as: Is it reliable? Does it have sufficient speed? Is the device reproducible? This group of questions is followed by the questions about blending with other technologies
FIG. 3. Additional criteria for devices (physical limitations: conservation of energy, conservation of momentum, quantum conditions; technological criteria: reproducibility, reliability, speed; compatibility criteria: integrability, compatibility, cost).
available at this time: large-scale integration, compatibility of signal levels, and, finally, such mundane questions as cost. It is interesting to note that the spectacular demise of the tunnel diode is due to its failure in two directions. First, it could not be produced by integrated circuit techniques, and second, its nonlinear behavior could not be matched to the transmission lines which were necessary if its high speed were to be useful.

3. New Devices
Until quite recently active semiconductor devices were based on the injection of minority carriers through a junction (in either Boltzmann or tunneling mode) or on the control of carrier density or channel width by appropriate electric fields. Of late, a new device has appeared which seems to have some very attractive properties, especially in the area of high current switching and memory action; this is the Ovonic Threshold Switch, also called QUANTROL [5]. This device uses simultaneous injection of holes and electrons, conserving therefore space-charge neutrality for very high current densities. What is more, this double injection device (see Fig. 4) can be built out of amorphous semiconductor material, i.e., does not necessitate monocrystalline configurations. This, however, is a minor advantage compared to that of
FIG. 4. Amorphous semiconductor devices. Junction diode (p+n, ohmic): (1) single injection of minority carriers; (2) recombination-limited current. As-Te-I glass intrinsic switch: (1) double injection of both carrier types; (2) high impedance, recombination-limited current; (3) low impedance, currents overriding recombination; phase change induced by field (modified X-ray diffraction).
being easily modified to have memory action in the absence of applied potentials. This memory is due to a phase change which can be induced by high enough fields. This phase change corresponds to a modification of the electronic cloud around certain atoms and can be made visible by X-ray diffraction methods. Visibly, the latter device is exceedingly attractive because the absence of volatility would make possible such things as a semiconductor analog of magnetized spots. Here, however, static readout is easy. Finally, it should be mentioned that these devices do not operate in an avalanche mode, as witnessed by the fact that they can be cycled in fractions of a microsecond.
FIG. 5. Ardenne tube (conducting and transparent film; linearly polarized light decomposed into components 1 and 2 along the principal directions, traveling at speeds v1 and v2; crossed analyzer).
A device which has suddenly become of great importance, especially for those interested in electrooptics, is the so-called Ardenne tube shown in Fig. 5. This is the answer to our search for "instant negatives." In some sense, therefore, we are talking about a highly specialized memory device, and it would have been possible to include the Ardenne tube in Section 5. The device aspects are, however, preponderant, as is easily seen from the figure. The heart of the Ardenne tube [1, 12] is a KDP crystal charged up by an electron beam, the intensity of which is controlled by a video signal. The charge distribution sets up local electric fields in a crystal of KDP which produce a difference between
the phase velocities v₁ and v₂ along the principal directions. When linearly polarized light hits the crystal, it will be decomposed into components 1 and 2, chosen to be equally inclined with respect to the incident light. When the light emerges, its state of polarization has changed, being typically elliptical, and a filtering action can be obtained by an analyzer passing only vibrations perpendicular to those of the incident beam. The net result is that an eye to the right of the analyzer sees light roughly in proportion to the charge density deposited at each point of the crystal. The immediate usefulness of the Ardenne tube (also called Pockels' effect chamber) is evident when we consider that it is now possible to produce a transparency in the time that it takes to sweep out one video frame and that, furthermore, the surface can be prepared for new information by the simple expedient of erasing all charges with a flood gun. Presently the Ardenne tube is used in large screen television projection and in the control of directive lasers. Future applications will include the formation of interference patterns and holograms from charge patterns deposited on the KDP.

A last example of a novel device presently being considered is the creep vidicon, shown in Fig. 6. This is a modification of the standard vidicon in which the photosensitive plate is made continuous rather than being a discontinuous mosaic. The purpose of this device is to detect electrically the "inside" of a complicated closed curve. The principle is to choose the semiconductor plate of such a resistivity that the dark resistance of those regions having a boundary projected on them is nearly infinite while the resistance of the illuminated regions is very low. This, then, separates conducting islands from each other in such a way that a charge deposited at an arbitrary point of such an island spreads uniformly and produces a uniform potential for all inside points. A subsequent video scan of the whole plate allows one to measure this potential and to discriminate via the beam current "inside" and "outside." That such a creep vidicon is useful to the highest degree is apparent when one considers the fact that even fast digital computers run into considerable trouble when the "inside/outside question" has to be decided in a hurry. On-line operation of computers is virtually ruled out. The device can solve easily command and control problems like the one of coloring the inside of a region indicated by its boundary on a tricolor tube.
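By way of contrast, the purely digital route to the inside/outside decision is a sequential flood fill over the picture raster, sketched below (our own illustration, not from the original text); every interior point must be visited one after another, which is precisely the time burden the creep vidicon avoids by letting the deposited charge spread in parallel.

```python
from collections import deque

def inside_points(boundary, width, height, seed):
    """Label every pixel reachable from `seed` without crossing `boundary`.

    `boundary` is a set of (x, y) pixels forming a closed curve; this is the
    sequential digital analog of charge spreading over one conducting island.
    """
    inside = set()
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        if (x, y) in boundary or (x, y) in inside:
            continue
        if not (0 <= x < width and 0 <= y < height):
            continue
        inside.add((x, y))
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return inside

# A square boundary drawn on a 12 x 12 raster; the seed (5, 5) lies inside it.
box = {(x, 3) for x in range(3, 9)} | {(x, 8) for x in range(3, 9)} \
    | {(3, y) for y in range(3, 9)} | {(8, y) for y in range(3, 9)}
print(len(inside_points(box, 12, 12, (5, 5))))  # 16 interior pixels
```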
4. New Circuits

A circuitry trend that should be followed with great attention is the more intimate mixing of digital and analog signals in hybrid processors. Up to now such processors have generally consisted of a digital and
FIG. 6. Creep vidicon. Light-sensitive semiconductor plate; conducting (illuminated) islands are isolated by insulating (dark) boundaries; a charge deposited inside by the electron gun spreads and produces an even potential V over the whole island; scanning recognizes V and answers the inside/outside question.
analog computer coupled together by a converter interface: This is not what is meant by a hybrid machine in our sense. We feel that a true hybrid machine should consist of a set of circuits in which the information signal is analog in nature while the steering signals are digital. Of course, the analog processing must be made very fast in order to compete speedwise with high-speed digital processing. To process analog signals with rise and fall times of the order of 100 nsec one must give in on the precision requirements. It is, however, not too difficult to build circuits in which the output differs by less than 0.1% from the theoretical output. The way in which high speed is bought is to forgo very high gain amplifiers and strong feedback and to limit oneself to circuits which are linear enough without using phase shifting loops.

FIG. 7. Variable sensitivity comparator.

Figure 7 shows a typical hybrid circuit in which a constant current
difference amplifier operates a digital output via some collector logic. It is easy to see that the output is high only when the analog voltages are of nearly the same magnitude, the exact width of the overlap zone being controlled by the sensitivity input. Such a circuit is evidently useful whenever an electrical table lookup is required. Figure 8 shows a small hybrid machine called PARAMATRIX [8] in which incoming graphical information is digitized, normalized in size and position, and finally improved by filling in gaps and averaging out cloudy regions. In the operation of this system a clock scans through the output matrix (in the center of the photograph), remaining approximately 1 μsec on each one of the 1024 points. The inverse transform of the
coordinates in the matrix is formed, and a high-speed scanner is positioned to answer within this time slot of 1 μsec the question: Is the inverse transform on the input slide or not? A more ambitious machine, ARTRIX [7], has been built in which graphical information is processed in such a way that, while the data are on their way from one graphical memory to another (each memory consists of a memotron-vidicon pair), we can draw lines and circles, label figures, and erase portions thereof.
FIG.8. PARAMATRIX.
The availability of a compatible set of hybrid circuits makes the design of special-purpose computers using direct analog inputs attractive for those cases in which cost can be diminished and speed improved by so doing. These cases include systems in which both input and output are analog in nature and in which the processing loop is very short. It also includes preprocessors for visual or audio information in which the on-line operation of very fast analog circuitry can save enormous amounts of time for the central computer. It is not too hard to foresee
that integrated circuits of the hybrid type will be available in the near future with 0.1% precision and a bandwidth of over 50 Mc.

Another area in which low precision demands can be taken advantage of is that of stochastic computers [2, 9, 10]. In these, pulses of standardized shape are generated with an average frequency proportional to the variable to be transmitted. Figure 9 shows a method for producing such random pulse sequences with the additional condition that a pulse, should it occur, must occur within a given time slot. The principle is to compare the output of a noise diode with the voltage v representing the variable, to reshape the output of the threshold difference amplifier, and to clock the output of an appropriate shaping circuit by differentiating and reshaping the transient in a flip-flop, which is set to the "0" state at a high rate by a clock. Figure 10 shows why the use of synchronous random pulse sequences is attractive in computation. The OR circuit forms visibly an output sequence in which the average frequency is proportional to the sum of the incoming average frequencies. Some difficulties arise when two input pulses occur in the same time slot, but this difficulty can be eliminated most easily by having "highly diluted" sequences. The formation of a product can be obtained equally simply by running the representative sequences into an AND circuit. Note that no restandardization of pulses is necessary because of the appearance of the pulses - should they appear - in fixed time slots. In Section 6 we shall discuss the implications for systems design of having available very low cost arithmetic units. At this point it should be noted that there is a vast field in which such processing leads to entirely adequate precisions. Typically, we must wait 100 pulses for 10% precision, 10,000 pulses for 1% precision, and 1,000,000 pulses for 0.1% precision. With pulse repetition rates of the order of megacycles it is easily seen that we can equal the performance of the analog system discussed above within a small fraction of a second. On-line graphical processors would be a typical application for such stochastic circuitry, as would be array computers of considerable complexity in which each computing element is reduced to one integrated circuit performing limited precision arithmetic under the control of a central sequencer.
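The OR/AND rules and the slow statistical convergence quoted above are easy to check numerically. The sketch below is our own illustration, not part of the original text; it treats each variable simply as the probability of a pulse in a time slot, which is the synchronous random pulse sequence idea in software. The OR of two diluted trains has an average frequency close to the sum, the AND close to the product, and roughly a million slots are needed for 0.1% precision.

```python
import random

def pulse_train(value, slots, rng):
    """Synchronous random pulse sequence: in each time slot a pulse occurs
    with probability equal to the (normalized, 0..1) variable."""
    return [1 if rng.random() < value else 0 for _ in range(slots)]

def average_frequency(train):
    return sum(train) / len(train)

rng = random.Random(1)
slots = 1_000_000                  # ~0.1% precision needs on the order of 10^6 pulses
a, b = 0.03, 0.04                  # "highly diluted" sequences, so coincidences are rare

ta, tb = pulse_train(a, slots, rng), pulse_train(b, slots, rng)

or_out = [x | y for x, y in zip(ta, tb)]    # OR gate: frequency ~ a + b
and_out = [x & y for x, y in zip(ta, tb)]   # AND gate: frequency ~ a * b
# (the OR frequency is exactly a + b - a*b; the overlap term is negligible here)

print(f"sum:     expected {a + b:.5f}, measured {average_frequency(or_out):.5f}")
print(f"product: expected {a * b:.5f}, measured {average_frequency(and_out):.5f}")
```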
5. New Memories

One of the neglected areas of memory design is that of long-term analog storage. Such storage is usually both difficult and expensive as long as direct storage is contemplated. Indirect methods are usually based on the conversion of voltage to time but run into drift problems, which limit storage times to a few seconds. Of late it has been realized
that indefinite storage can be obtained if we agree to quantize the voltage to be stored. One way of doing this is shown in Fig. 11. In this PHASTOR circuit [6], voltages are stored as the phase differences between the synchronized monostable generator and the subharmonic of a central clock. As can be seen from the figure, we have a simple multivibrator circuit in which the feedback loop contains a synchronizing input from the clock which makes sure that at the end of each oscillation
FIG. 10. Addition and multiplication using random sequences (OR output: average frequency proportional to the sum F1 + F2; AND output: average frequency proportional to the product F1 · F2).
the timing is in step with it. It is obvious that if this clock has a period which is very short compared to the natural period of the monostable (practically 100 times smaller if 1% accuracy is required), the monostable is only able to lock into 100 different phase jumps with respect to the central multivibrator used to generate the subharmonic 100. It should be noted that circuits like the above - so-called ratchet circuits - contradict the old theorem that in order to store n different numbers one needs log₂n flip-flops, i.e., 2 log₂n transistors. It should also be noted that the gating in and out of a PHASTOR storage cell can be done by purely digital means and that integrated circuit technology will make the price of such storage cells competitive with those of simple binary cells.

In the area of binary storage the forecast is curiously enough very simple if we extrapolate to the distant future. There is no doubt
whatsoever that ultimately semiconductor storage cells in Large-Scale Integration (LSI) arrays will be cheap enough to replace magnetic memories of every kind. The great advantage of semiconductor storage is, of course, the very high output levels that can be obtained and its general compatibility with semiconductor logic. Whether these semiconductor memories will use bipolar or Metal-Oxide Semiconductor (MOS) techniques is not entirely clear, but now that micropower bipolar
FIG.11. PHASTOR storage cell.
circuits are available it does not seem excluded that the latter will triumph. The outlook for the immediate future, however, is much harder to assess because of late the plated wire memories (in which a magnetized mantle surrounds a substrate used as a sense wire) have reduced cost and increased speed to such an extent that magnetic memories will have a new lease on life, at least as far as relatively big units are concerned. To show how extremely elegant the semiconductor memories can be when read-only designs are considered, Fig. 12 shows a design by Chung of Texas Instruments in which emitter followers are used at the cross-points of a matrix, with horizontal base driving lines for word selection and vertical digit lines connected to the emitters. By
connecting or disconnecting the bases in a given row, a digit pattern can be stored permanently, and readout simply corresponds to a sample pulse injected into the word line which reads, via the digit lines, into the computer. Obviously, this system eliminates complicated sensing devices and is ideally suited to microminiaturization: Access times of the order of 20 nsec are easily attained.

Sooner or later every computing system has to supplement random access (now magnetic, later semiconductor) memory by bulk storage of
FIG. 12. Read-only semiconductor memory (designed by Chung of Texas Instruments). Word lines n, n + 1, n + 2 drive the bases; the digit lines are connected to the emitters.
files with a low duty cycle and relatively long read and write times. Memories of this type have recently been made in optical form, and one of the most interesting is perhaps the holographic storage developed at Carson Laboratories and shown in Fig. 13. Here a matrix of dots is stored in the form of a hologram inside a crystal using F-center techniques. Such crystals differ from the now abandoned photochromic substances in that the electronic configuration of atoms is modified by the incident light encountered. The crystal stores the interference pattern of a laser shining through the matrix of dots (some of which may not be transparent) and a reference beam emanating from the same laser. Reconstitution of the information is obtained by the classical holographic process, i.e., the crystal is examined in laser light occupying the same relative position as the reference beam. One of the advantages of this wave front storage is that many patterns can be superimposed
by turning the crystal through a small angle between each additional storage operation. Rather exciting storage densities can thus be obtained.
FIG. 13. Holographic storage (Carson Laboratories). Information beam, reference mirror, and superimposed holograms 1 and 2; the crystal is rotated between exposures.
6. New Systems
Most computing systems of the last two decades have had a fundamental structure proposed by von Neumann [13]; i.e., they consist of a certain number of registers connected to a memory on one hand and to an arithmetic unit on the other, with appropriate shuffling of numbers dictated by a control unit. It is a tribute to von Neumann's genius that this arrangement should have become completely standard. The time has come, however, for computer designers to reassess the situation and to think in terms of arrangements in which either the memory and processing functions are more intimately connected or in which there is a very strong parallelism, even to the extent of having hundreds of arithmetic units. Finally, it has become mandatory to think again in terms of organizations for specialized computers, in particular those connected with input-output problems. Figure 14 shows, rather symbolically, the layout of an array computer. The idea here is to have a great number of processing elements with considerable autonomy and local storage under the direction of both a local (micro-) and an over-all (macro-) control. The advantages of such an array are evidently the possibility of parallel access to all elements for preprocessing purposes, and the facility of connecting the elements together for processing which involves exchanging of information between all of the registers and all of the arithmetic units.
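The division of labor between the over-all macrocontrol and the locally executed microsteps can be suggested in a few lines. The sketch below is our own illustration (the 4 × 4 grid, the nearest-neighbor exchange, and the smoothing step are arbitrary choices, not details of ILLIAC IV or of any machine described here): a single broadcast step is executed by every processing element on its own local storage and on values exchanged with its nearest neighbors.

```python
# A toy "array computer": one macrocontrol broadcasts a step, and every
# processing element (PE) applies it to its local memory in lockstep.
N = 4
local = [[float(i * N + j) for j in range(N)] for i in range(N)]  # per-PE memory

def neighbors(i, j):
    """Fixed interconnection to nearest neighbors (edge PEs simply have fewer)."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            yield local[i + di][j + dj]

def broadcast(step):
    """Macrocontrol: every PE executes the same step, in parallel in principle."""
    global local
    local = [[step(local[i][j], list(neighbors(i, j))) for j in range(N)]
             for i in range(N)]

# Example macroprogram: two smoothing passes (each PE averages itself with its
# neighbors), a typical preprocessing use of such an array.
smooth = lambda own, nbrs: (own + sum(nbrs)) / (1 + len(nbrs))
for _ in range(2):
    broadcast(smooth)

print(local[0][0], local[3][3])
```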
The well-known example of such a highly parallel array computer is ILLIAC IV [11], in which 256 processing elements are used. Here each processing element consists of over 10,000 gates, i.e., has the capability of ILLIAC I (but, of course, a much higher speed). Another way of realizing the layout of Fig. 14 is to replace each processing element by a general computing element made on one integrated circuit wafer using stochastic techniques. The fantastic reduction in cost (since each element would contain less than 30
FIG. 14. Array computer (fixed interconnection to nearest neighbors). All Processing Element (PE) inputs and outputs can be used as inputs and outputs to the array. Each PE has local (microprogram) and over-all (macroprogram) control facilities. Some local memory is provided.
junctions) is, of course, bought by a considerable decrease in precision or, to be more exact, by having to wait for a reasonable time until the array has processed a high enough number of pulses to give the desired precision. As mentioned before, there are many applications in which such an array would be very useful. Another attractive application of stochastic computer elements is shown in Fig. 15, which represents a graphical processor with a highly parallel structure [9]. On the left is an n × n input matrix with the wire in position (i, j) carrying a signal in the form of a random pulse sequence called x_ij. One could imagine that these signals are generated from photodiodes which receive an incoming picture and have noise
amplitudes which vary with the incident light. Using simple AND and OR circuits, let us now form output signals y_kl, where (k, l) are the coordinates in an n × n output matrix, such that

y_kl = Σ_ij b_klij x_ij .

After addition of a constant term a_kl we can consider the above function as the linear portion of the expansion of the most general function of all such x_ij's. In forming this function and displaying the output as a picture, we obviously produce an image mapping from input to output described by n⁴ coefficients b_klij and encompassing therefore translations, rotations, magnifications, conformal mappings, convolutions, and Fourier transforms.

FIG. 15. TRANSFORMATRIX: (a) input matrix (used in parallel), (b) coefficient matrix (used frame by frame), and (c) output matrix (produced sequentially).

In order to define which one of the above transformations we desire, it is necessary to specify all coefficients b_klij, and this will have to be done by a control unit. The core of the "input-output plus processing unit" is, however, a complex of n⁴ AND circuits, a number which can be reduced to 2n² if sequential operation is contemplated for each one of the output points. It is clear that such an on-line general graphical transformer would be of great usefulness.
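As a purely digital illustration of the same mapping (our own sketch, which ignores the stochastic pulse representation and treats every signal simply as a number), the code below applies an n⁴-coefficient tensor b_klij to an n × n input picture; choosing the coefficients as a permutation of the input coordinates produces, for example, a translation of the picture, one of the mappings listed above.

```python
import numpy as np

n = 8
x = np.zeros((n, n))
x[2, 3] = 1.0                      # a single bright point in the input picture

# Coefficient tensor b[k, l, i, j]: here a shift of the whole picture by
# (+1, +2) with wrap-around, one of the linear mappings mentioned in the text.
b = np.zeros((n, n, n, n))
for i in range(n):
    for j in range(n):
        b[(i + 1) % n, (j + 2) % n, i, j] = 1.0

# y_kl = sum over i, j of b_klij * x_ij  (the linear portion of the mapping)
y = np.einsum("klij,ij->kl", b, x)

print(np.argwhere(y == 1.0))       # [[3 5]]: the point moved from (2, 3) to (3, 5)
```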
In the area of specialized organizations we can consider the one shown in Fig. 16: the so-called POTENTIOMATRIX. This is a modern analog of the electrolytic trough. A matrix of discrete resistors imitates the behavior of a continuous conducting sheet, and potential islands are defined by forcing certain groups of nodes into fixed potential states via electronic switches. All nodes are connected to a bus bar via sensing elements consisting of a small light which comes on whenever node and bus bar have the same potential. Ultimately, of course, such an array would be made in continuous fashion using electroluminescent panels with built-in comparator amplifiers. When the bus is stepped from the potential of the first island to the potential of the second one, the nodes
lying on equipotential lines will be displayed in succession. It might be noted that such an "intelligent display" is capable of generating curves of considerable complication from a small number of input variables: For conical sections we only have to remind ourselves that they are equipotential lines in a field in which the potential islands are a directrix and a focal point.

REFERENCES

1. Calucci, E., Solid state light valve study. Inform. Display, p. 18 (March/April 1965).
2. Gaines, B. H., Stochastic computer thrives on noise. Electronics 40, 72-79 (1967).
3. Gunn, J. B., Instabilities of current in III-V semiconductors. IBM J. Res. Develop. 8, 141 (1964).
4. Kroemer, H., Theory of Gunn effect. Proc. IEEE 52, 1736 (1964).
5. Ovshinsky, S. R., Ovonic switching devices. Proc. Intern. Colloq. Amorphous and Liquid Semicond., Bucharest, Rumania, 1967.
6. Poppelbaum, W. J., and Aspinall, D., The Phastor, a simple analog storage element. Computer Technol. Conf., Manchester, July 1967.
7. Poppelbaum, W. J., Hybrid graphical processors. Computer Technol. Conf., Manchester, July 1967.
8. Poppelbaum, W. J., Faiman, M., and Carr, E., Paramatrix puts digital computer in analog picture and vice versa. Electronics 40, 99-108 (1967).
9. Poppelbaum, W. J., Afuso, C., and Esch, J. W., Stochastic computing elements and systems. Proc. Fall Joint Computer Conf., 1967.
10. Ribeiro, S. T., Random-pulse machines. IEEE Trans. Electron. Computers 16 (1967).
11. Slotnick, D., ILLIAC IV. IEEE Trans. Electron. Computers 8 (1968).
12. Von Ardenne, M., Tabellen der Elektronenphysik, Ionenphysik und Übermikroskopie, Vol. 1, p. 202. Deut. Verlag Wiss., Berlin, 1956.
13. von Neumann, J., Collected Works. Macmillan, New York, 1963.
Advances in Simulation
JOHN McLEOD
Simulation Councils, Inc., La Jolla, California
1. Introduction . . . 23
   1.1 Simulation - a Definition . . . 24
2. A Look Back . . . 24
   2.1 1943 and On . . . 24
   2.2 Hybrid Simulation - a Definition . . . 25
3. Progress . . . 26
   3.1 Analog Hardware . . . 26
   3.2 Analog/Digital Communications . . . 26
   3.3 Digital Software - CSSL . . . 27
   3.4 The Price . . . 28
   3.5 Simulation for Model Development . . . 29
4. The Best Tool . . . 30
   4.1 Analog Advantages . . . 30
   4.2 Digital Advantages . . . 31
   4.3 Hybrid Advantages . . . 31
   4.4 Typical Applications . . . 31
   4.5 Benchmark Problems . . . 32
5. Physiological Simulation . . . 33
   5.1 System Characteristics . . . 34
   5.2 Attributes of Simulation . . . 34
   5.3 Complexity and Simplification . . . 34
   5.4 Adaptability and Its Control . . . 35
   5.5 Measurement Facilitated . . . 36
   5.6 Noise and Its Control . . . 38
   5.7 A Hybrid Example . . . 39
6. A Look Ahead . . . 45
   6.1 New Fields . . . 45
   6.2 Analog Hardware . . . 46
   6.3 Analog Software . . . 46
   6.4 Digital Computers . . . 47
   6.5 Toward Greater Hybridization . . . 47
   6.6 The Large Systems . . . 48
   6.7 The Future of Simulation . . . 49
References . . . 49
1. Introduction
Although progress in simulation is not directly dependent on progress in computers, the two are certainly closely related. Analog, digital, and hybrid electronic computers are the tools of the simulation trade - the best ones we have found to date, by a good margin.
Computers, in the broad sense, can do many other things. Analog computers can draw pictures, and digital computers can keep books. ("Accurate" to an unlimited number of significant digits, if one is really that concerned.) But important as they are, these related uses of computers will only be given a nod of recognition by this chronicler as he turns his attention, and the reader's, to simulation.

1.1 Simulation - a Definition
As used in this article, simulation is "the development and use of models to synthesize, analyze, and study the dynamic behavior of actual or hypothesized real-life systems." Note that computers are not mentioned. But as already stated, computers are the best tools we have yet found for simulation; therefore, this discussion will be limited to computer simulation exclusively.

2. A Look Back
2.1 1943 and On
Computer, or at least electronic circuit, simulation began as early as 1943 with a study of the dynamics of aircraft and guided bombs [15]. After World War II, analog computers - or electronic differential analyzers or simulators, as they were variously called - began to emerge from the shadow of security classification. At first they were used primarily for the simulation of guided missiles as well as aircraft, but with the advent of the space age they were ready to simulate things - people, devices, environments, situations - which were literally "out of this world." The writer met his first "simulator" in the winter of 1949-50, when a large and, according to the bill of lading, very expensive box arrived at the Naval Air Missile Test Center at Point Mugu, California, identified as a Lark Simulator. Advances in simulation from that eventful date until the end of 1961 followed the usual slow-starting exponential growth pattern and are covered, to the best of this writer's ability, in a previous article [10]. The birth and childhood of simulation will not be reiterated here. Instead, the author would direct attention to the adolescence of the past few years and the current coming of age of simulation. It was during this period that hybrid simulation began to come into its own, an event that may be largely credited for simulation's recent maturation.

Hybrid simulation has suffered from overambitious parents. Analog
and digital simulation are genetically unrelated; therefore, having common aims in life, it should have been perfectly natural and proper for them to marry and proliferate. But their parents - and friends - tried to force a marriage before they were mature enough. That was about 1956. Yet in spite of dire predictions analog simulation has survived, digital simulation is beginning to live up to expectations, and the grandchildren, hardy mongrels or hybrids that they are, are exhibiting many of the characteristics of gifted children. To be more specific, in the era 1955-62, attempts were made to combine the advantages of analog and digital computers to produce a better tool for simulation, but the timing was wrong and the philosophy more attractive than it was sound. The analog-to-digital, digital-to-analog hardware necessary for combined (married) simulation (as distinct from hybrid simulation, the offspring) was simply not good enough. The conversion-equipment vendors were pushing the state-of-the-art too hard. The problem was philosophically skewed by wishful thinking. Anyone should have realized that disadvantages would combine as readily as advantages. To be sure, combined simulation did - to the extent allowed by the interface equipment - permit the speed of the analog to be combined with the precision of the digital computers. But the low accuracy of the analog computer also combined with the relatively slow speed of the digital. It was only through clever programming to alleviate these problems that combined simulation survived.

Now we have hybrid simulation, which is in want of a generally accepted definition. There are those who maintain that if all the variables in a simulation are represented by continuous signals, the simulation is analog, no matter how much digital equipment may be used to program, set up, control, and monitor the analog. At the other extreme are those who consider any simulation in which both discrete and continuous signals are involved in any way to be hybrid. So again the author offers a definition which is valid within the context of this article.
Definition
“A hybrid simulation is one in which the variables of the simuland, the real-life system being simulated, are represented by both continuous and discrete signals.” Having defined “ simulation ” and “ hybrid,” we can now turn our attention to “ advances ”: those improvements in hardware and software which, meshing like the teeth of gears, deIiver the power required to advance the state-of-the-art of computer simulation.
26
JOHN McFEOD
3. Progress 3.1 Analog Hardware
As previously indicated, most analog computers have married and changed their name, but some “ pure ” analogs are still being produced, mostly as pedagogical or special-purpose computers. But advances in analog hardware as used in combined and hybrid systems are profoundly influencing advances in simulation [ S ] . The most important of these advances stem from the complete shift to improved solid-state electronics resulting in improved accuracy, increased speed, and greater reliability. Because relatively low accuracy and reputedly poor reliability of analog components have been used as arguments against their use, both as analog computers and as components in hybrid systems, the importance of marked improvements in these will be recognized. But how about speed? Speed has always been the analog’s strong point. With equivalent electronics, an analog computer operating in parallel will always be orders of magnitude faster than a digital machine constrained to operate in series. And speed is important because it makes it possible as well as economically expedient to simulate some kinds of systems and solve some classes of problems in faster-than-real-time. Sometimes running in faster-than-real-time may be desirable for purely economic reasons; for instance, in stochastic processes, where many thousands of runs may be required to produce statistically significant results, or in multivariable optimization problems, which also require many runs. A t other times, faster-than-real-time operation may be a requirement imposed by the nature of the system simulated; for instance, in certain adaptive control schemes which require the model to be run at many times the speed of the system to be controlled, or for predictive displays which show the consequences of current action a t some time in the future, or with some techniques for the solution of partial differential equations when the results are required for a simulation operating in real time. 3.2 Analog/Digital Communications
However. speed is not the only reason for using analog hardware with digital equipment to create hybrid computing systems. No matter how man may operate a t the microscopic neuron-synapse level, he communicates with his real-world environment in a continuous fashion. As a
ADVANCES IN SIMULATION
27
consequence, if he is to communicate with a digital computer, there must somewhere be a continuous-discrete-or analog-digital-interface. Perhaps partially for this reason, most of our real-life machines are continuous in their mode of operation. Therefore, because one must always pay, either in accuracy or complexity (usually in both) to change the form of information flow in a system (from continuous to discrete and/or vice versa), it is often convenient to avoid the issue by using only.an analog computer so that input, processing, and output can all be continuous. Again, if the man-machine communication only involves monitoring and controlling a digital computer, the simulation, may be all digital. On the other hand, if in the simuland, in the real-life system being simulated, the signal flow is through the man, the simulation is hybrid, even though all of the hardware is digital. In such cases, although it is possible to make the analog-digital interface physically coincident with the manmachine interface, this places the burden of conversion on the man. It is often more expedient to mechanize the analog-digital and digital-analog conversion. This moves the analog-digital interface to within the computing system, and results in a hybrid computing system as well as a hybrid simulation. What has been said of man-in-the-loop simulations also applies to real-world hardware-in-the-loop simulations if the hardware operates in a continuous rather than a discrete fashion. Thus, if it is desired to introduce an actual component-a transducer or servo, for instance-of the simuland into an otherwise digital simulation, there would still have to be an analog-digital interface, and the simulation would be hybrid. 3.3 Digital Software-CSSL
The development of digital languages for the simulation of both continuous and discrete systems was engendered and stimulated primarily by two factors: the ubiquity of the digital computer and the ebullience of its proponents. Many who had a need to simulate did not have access to an analog computer. Others who did have access to one did not appreciate its potential. Indeed, to the great majority the word computer was synonymous with digital computer. If they knew that such a thing as an analog computer existed, they probably did not understand it, in which case it was something to be avoided-no sense in showing one’s ignorance. Simulation languages have had an interesting history [3, 8,161. The first ones were developed to “make a digital computer feel like an analog.” [3]. Then there were many developed for special purposes (a frequent one being to obtain an advanced degree).
28
JOHN McLEOD
Recently, an extensive effort has been made to consolidate past gains and give direction to future effort. In March 1965 a group of Simulation Council members organized the SCi Simulation Software Committee. Among its members were most of the developers of important digital languages for the simulation of continuous systems. They reviewed all available previous work and prepared specifications for a Continuous System Simulation Language (CSSL) [I71 which embodied all of the best features of previous languages that were considered compatible with the objectives of the Committee and appropriate in the light of the current state-of-the-art. Like its predecessors, CSSL is intended to allow an engineer or scientist to set up a digital simulation without dependence on an intermediary in the person of a programmer. The man with the problem need have no knowledge of digital computer programming per se; a knowledge of CSSL should be all that is necessary. Implicit in the design of all such languages is the requirement that they be easy to learn. Undoubtedly, CSSL in its present form will not be a final answer. Like FORTRAN, COBOL, and other higher order languages, it will have to be used and abused, reviewed and revised. But it is hoped that the specifications as published, although developed primarily to facilitate the simulation of continuous systems, will bring some order to the chaotic proliferation of simulation languages and give direction to future efforts. Perhaps by the time CSSL-IV is developed the language will have been modified to accommodate the simulation of discrete systems, and the simulation community will for the first time have a standard language. 3.4 The Price
Progress has been expensive, but high-performance aircraft, complex missiles costing in the millions, and space systems involving human lives have elevated simulation from a convenience to a necessity. The bill for the development of hardware and the time and effort required to improve the techniques had to be paid. The United States government, directly or indirectly, paid, and by the mid-sixties it was possible to put together quite sophisticated simulations of complete space systems [7].

The spillover into industry was slower than might have been expected. This was partially because the pressure in industry was not as great. In most cases there were other ways of doing the job. Furthermore, development costs had not been amortized; hardware was, to a profit-oriented organization, well-nigh prohibitively expensive. Even the giants of industry, who had the need and could pay the price, were extremely cautious, as some of us discovered.
Inspired by the success of the Simulation Councils organized in 1952, some of us with backgrounds in aerospace and automatic control tried in 1954 to organize the Industrial Analysis and Control Council. After two meetings [1, 2], it folded. Most prominent among the rocks and shoals that stove it in was the plaint, "I don't know enough about my process to simulate it." It was undoubtedly true that in most cases of interest these people did not understand their process well enough to sit down and develop a mathematical model which could be solved on a digital computer. What they did not seem to realize was that the analog computer is a powerful tool for use in the development of models.

3.5 Simulation for Model Development
The empirical approach to model development is a simple iterative procedure: attempt to build a computer model based on what is known and what is hypothesized concerning the simuland. Exercise the model by driving it with simulated inputs corresponding to those of the real-life system. Compare the outputs. If they differ, guess again. Or, in more sophisticated terms, adjust your hypothesis. The forced and directed thinking which this requires will usually improve the hypothesis with each model-simulation-model iteration. And the insight gained is in some cases as rewarding as the final simulation itself.

An important advantage of simulation, and one which helps make the foregoing procedure feasible, is that the model can usually be much simpler than the real system. The simulation need resemble the simuland only to the extent necessary for the study at hand. Indeed, if a complex system were to be simulated in every detail it would, in some cases, be as difficult to study the model as the actual system.

Even were this not so, there would still be powerful incentives to simulate. Many tests which would destroy human life or expensive hardware can be run with impunity on the simulated system. The ability to return to initial conditions if a simulated patient "dies" or a plant "blows up" is clearly impossible in real life. And of course there are important systems which are not amenable to arbitrarily controlled experimentation, the national economy and the weather, for instance. Yet these unwieldy and complex systems are currently being simulated in some detail, and with a satisfactory degree of success, by using the empirical procedures described.
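The iterative procedure lends itself to a very small worked example. The sketch below is a modern illustration in Python, not the analog or FORTRAN practice of the period; the first-order model, the "observed" data, and the bisection rule for adjusting the hypothesized time constant are all invented for the purpose.

import math

def simulate(tau, u=1.0, dt=0.01, t_end=5.0):
    """Forward-Euler integration of a first-order lag: dx/dt = (u - x)/tau."""
    x, xs = 0.0, []
    for _ in range(int(t_end / dt)):
        x += dt * (u - x) / tau
        xs.append(x)
    return xs

def rms_error(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) / len(a))

# Stand-in for the simuland's measured response (its true time constant is unknown to the modeler).
observed = simulate(tau=2.0)

# Guess, compare outputs, adjust the hypothesis, and repeat.
t_check = 1.0
target = observed[int(t_check / 0.01) - 1]
lo, hi = 0.1, 10.0
for _ in range(40):
    guess = 0.5 * (lo + hi)
    model_value = simulate(guess)[int(t_check / 0.01) - 1]
    if model_value > target:        # model responds too quickly: hypothesized tau too small
        lo = guess
    else:                           # model responds too slowly: hypothesized tau too large
        hi = guess

best = 0.5 * (lo + hi)
print(f"refined time constant: {best:.3f}")
print(f"remaining output mismatch (rms): {rms_error(simulate(best), observed):.5f}")

Each pass through the loop is one model-simulation-model iteration: drive the model, compare its output with the simuland's, and revise the hypothesis.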
In connection with the method recommended for development of a model of an incompletely understood system, the author is repeatedly asked if similar outputs resulting from similar inputs prove that the model is a valid analog of the system simulated. Rigorously, the answer is no. It is possible to have two black boxes (in this case the simuland and the simulation) which will have the same transfer function but different internal dynamics. And it is not difficult to conceive of two systems reacting differently internally and yet producing the same kinds of outputs in response to a step or ramp or sinusoidal or any other given input. But it has been the author's experience that the chances of the simulation outputs being the same as those of the simuland in response to step and ramp and sinusoidal and arbitrary inputs are very remote. In fact, if the model responds like the system modeled for all inputs of interest, then the model is adequate for the particular study at hand even if the internal dynamics are sufficiently different to make the simulation invalid for other studies.
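The point about black boxes can be made concrete. In the hypothetical sketch below (modern Python notation, not from the original text), two different internal realizations of the same second-order transfer function, H(s) = 1/((s + 1)(s + 2)), are driven by the same step input: their internal states differ throughout the run, yet their outputs agree to within integration error.

def simulate(deriv, out, n_states, dt=0.001, t_end=6.0):
    """Forward-Euler step response of a linear system given its state derivative and output."""
    x = [0.0] * n_states
    ys = []
    for _ in range(int(t_end / dt)):
        dx = deriv(x, 1.0)                    # unit-step input, u = 1
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        ys.append(out(x))
    return ys

# Realization A: controllable canonical form of H(s) = 1/((s + 1)(s + 2)).
a_deriv = lambda x, u: [x[1], -2.0 * x[0] - 3.0 * x[1] + u]
a_out = lambda x: x[0]

# Realization B: modal (partial-fraction) form of the very same transfer function.
b_deriv = lambda x, u: [-x[0] + u, -2.0 * x[1] + u]
b_out = lambda x: x[0] - x[1]

ya = simulate(a_deriv, a_out, 2)
yb = simulate(b_deriv, b_out, 2)

# The outputs agree (to within integration error) although the internal state trajectories differ.
print("largest output difference:", max(abs(p - q) for p, q in zip(ya, yb)))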
4. The Best Tool

All too often the choice of the computer for a particular simulation must be made strictly on the basis of availability. This cannot be argued; certainly it is better to use a digital computer to develop a simulation which might be done better on an analog than not to simulate at all. Besides, both kinds of computers are so versatile that either can often be used for jobs which should be done on the other. But given a choice among analog, digital, or hybrid equipment, which should one choose? That depends on the job. It should be possible to choose the best kind of computer by considering the advantages peculiar to each kind.

4.1 Analog Advantages
In general, analog computers have the following advantages: They are faster. The analogy between the computer simulation and the real-life simuland can be straightforward. Man-computer communication comes naturally. Inputs and outputs do not require quantizing and digitizing. The values of parameters and variables can be easily changed during operation, and the results of the change immediately observed. Being easy to program directly from block diagrams, they are a natural for simulating systems which can be diagrammed in block form, even if the mathematical description of the system is not known in detail. No "software" is required.
4.2 Digital Advantages
Digital computers have quite different advantages: They can be as precise as one desires and can be made as accurate as time and input data will allow. The basic operation is counting, which to most people comes naturally. They handle logical operations exceedingly well. They have prodigious memories. Once set up and debugged, a problem or simulation can be removed, stored indefinitely, then replaced exactly as it was. With floating-point machines, scaling is no problem. They are versatile, being able to handle mathematical, scientific, and business problems with equal facility.

4.3 Hybrid Advantages
In general, all of the advantages of hybrid computers (or hybrid facilities) are derived from the combination of the analog and digital elements (or computers) of which they are composed, but to these is added one additional prime advantage: versatility. Hybrid computers allow the user the flexibility required for optimum simulation of complex simulands.

4.4 Typical Applications
In the light of the foregoing, it is evident that analog, digital, and hybrid computers are all suitable for simulation, that each offers certain advantages to a potential user, that the choice is highly problem-dependent, and that no rigorous procedure for choosing can be devised. However, a consideration of some typical simulations for which each kind of computer would seem to be the best choice may be helpful.

4.4.1 Analog
For the analog computer the typical simulations would be as follows:

(1) simulations for classroom demonstrations, where the effect of changing parameters and variables can be dramatically illustrated;
(2) simulation for system predesign, where the effects of various trade-offs can be set up and demonstrated "live" for project review boards and the "customer";
(3) simulations involving, primarily, the solution of large sets of simultaneous differential equations wherein high speed is more important than great precision;
(4) simulations requiring empirical development because there is insufficient a priori information to support the detailed design of a mathematical model;
(5) simulation of stochastic systems where a large number of iterations (probably controlled by binary logic) are required to obtain statistically significant solutions; and
(6) where intimate man-computer rapport is an overriding consideration.
4.4.2 Digital
For the digital computer the simulations would be as follows:

(1) the simulation of discrete systems such as the economy and traffic flow;
(2) the simulation of continuous systems if:
(a) there is an overriding requirement for high accuracy;
(b) there is no requirement for real-time simulation, or if the problem is such that the speed-accuracy trade-off will allow real-time operation (the sketch following this list illustrates the trade-off);
(c) the mathematics involves, primarily, algebraic equations and logical decisions; and
(d) suitable software is available.
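The speed-accuracy trade-off in item (2)(b) can be seen in even the smallest digital simulation. The sketch below (modern Python notation; the oscillator and the step sizes are arbitrary choices for illustration) integrates x'' = -x with forward Euler at three step sizes: each tenfold reduction in step size costs ten times as many steps but shrinks the error, and it is this trade that decides whether a given machine can keep up with real time.

import math

def oscillate(dt, t_end=10.0):
    """Forward-Euler simulation of x'' = -x, a unit-frequency oscillator started at x = 1."""
    x, v = 1.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x    # simultaneous update of position and velocity
    return x, steps

exact = math.cos(10.0)                   # the analytic value of x at t = 10
for dt in (0.1, 0.01, 0.001):
    x_end, steps = oscillate(dt)
    print(f"dt = {dt:<6}  steps = {steps:<6}  error = {abs(x_end - exact):.5f}")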
4.4.3 Hybrid

For the hybrid computer the simulations would be as follows:

(1) simulations where the information flow in the simuland is both continuous and discrete;
(2) simulations in which the over-all system requires high accuracy, yet there are numerous high-frequency inner loops;
(3) simulation of complex systems involving ordinary and/or partial differential equations, algebraic equations, iterations, generation of functions of one or more variables, time delays, and/or logical decisions; and
(4) in short, the misfits and the tough ones.

Consideration of the foregoing is a help in determining the "best" tool, but if the nature of the workload is known, there is still another way.
4.5 Benchmark Problems
The slow progress of simulation in the process industries notwithstanding, by the early sixties Monsanto Chemical Company, one of the first companies to turn to simulation to aid in the study of the dynamics
of industrial processes, was sold on it. But when it came to expanding their simulation facility they needed some kind of yardstick with which to measure computer performance; they wished to determine not only which kinds of equipment under consideration could solve their kinds of problems but also which could do it most efficiently. Thus, they developed a descriptive model of a typical chemical process, which became known as the Monsanto Benchmark Problem. The idea was to offer it to competing vendors and see who could handle it most effectively. Surprisingly (or perhaps understandably) the vendors did not fall over themselves to offer solutions, but eventually some were forthcoming [5, 14].

The Monsanto benchmark experiment was successful enough to warrant consideration of taking a similar approach to problems in other fields. Thus, at a Simulation Council-sponsored Advanced Simulation Seminar in Breckenridge, Colorado, August 1966, it was suggested that the benchmark concept could be usefully adapted to the simulation of physiological systems, a matter of great interest to many of those present, particularly the author.

5. Physiological Simulation
While engaged in the simulation of intercontinental ballistic missile systems, the author had become interested in the development of an improved extracorporeal perfusion device, a heart-lung machine. The author had seen one, and its complexity motivated him to develop a much simpler pump-oxygenator for blood [9]. But when it was tested by connecting it to a living organism to take over the functions of the heart and lungs, the simple device became an element in an amazingly complex organism. Things happened that the author did not understand and that the physicians and surgeons could not explain.

To gain insight the author turned to simulation, even though he had little knowledge of how a cardiovascular system should be simulated. However, instead of alternating between the simuland (in this case the living animal) and the computer simulation, he went from the literature to the simulation to a physiologist interested in the same problem (usually for a clarification of the physiology which might explain shortcomings of the model) and back to the simulation. This procedure resulted in a simulation of the circulatory system [11] which was adequate for the study for which it was designed, even though the author did not know enough physiology to develop a satisfactory model without extensive empirical, trial and error (hypothesis) testing.

After this experience the author was "hooked" on simulation for the study of physiological systems. So were a number of others [a]. Of this
comparatively small number, however, it was interesting to note how many had doctorates in medicine or physiology as well as in engineering or physics. In short, a significant number of them combined in one person the training required to undertake successfully and appreciate the value of physiological simulation. But compared to the number of researchers, and clinicians too, who could benefit from simulation, the number with training in both the life and the physical sciences is woefully inadequate.

It is fortunate that the team approach can be made to work, even though it is not always easy. Those with a background in the life sciences and those with an education in the physical sciences actually think differently. Unfortunately the sincerest efforts of the twain to work for a common goal are also hampered by language differences, which have a psychologically alienating effect out of all proportion to the actual difficulties of communication. It sounds trivial, but how many engineers know that when a physician refers to the "lumen" he means the i.d.? And if the question is turned around, the physicians would probably fare even worse. Fortunately, it seems that mutual respect can more than compensate for differences in points of view and vocabulary and make teamwork on physiological simulation a fascinating and rewarding experience.

5.1 System Characteristics
Among the properties that characterize physiological systems and make them difficult to study are (1) complexity, (2) adaptive behavior, (3) measurement problems, and (4) noise.

5.2 Attributes of Simulation
Among the properties of computer simulation which make it well suited to cope with the foregoing difficulties are the ability to (1) simplify, (2) control the environment, (3) facilitate measurement, and (4) eliminate or control noise.

5.3 Complexity and Simplification
The complexity of physiological systems begins at the molecular level with the basic building blocks, the amino acids, and extends through the levels of cells, individual organs, and organic systems to the complete
living plant or animal. And at each level, and between levels, there are feedback loops, regulatory mechanisms, and cross-coupling. To attempt to develop a simulation complete in every detail at even one of these levels, to say nothing of all of them, would require more knowledge, equipment, and know-how than is available, by orders of magnitude.

Fortunately it is neither necessary nor desirable to simulate in complete detail because simulations should be "question-oriented." The model, therefore, should be comprised only of those elements and relationships which bear on the questions of interest. Unnecessary detail makes the model more difficult to develop and to understand, and understanding (or insight) is the ultimate objective of simulation. To gain understanding there is no better way than by attempting to simulate that which would be understood.

In the light of the foregoing we can see what is at once one of the weaknesses and strengths of simulation. The weakness is the amount of talent, of human judgment, that is required. Simulation without human talent can no more answer questions than a scalpel can perform a surgical operation without the knowledge and skill of the surgeon to guide it. Basing action on results of an improper simulation, or an improper interpretation of results of a valid simulation, can lead to trouble.

The strength of simulation is that it can be used as a means of evaluating the importance of elements of the simuland and their functions with respect to the questions to be answered. In case of doubt a test experiment can be designed and run, the function of interest added or deleted, and the experiment run again. Armed with the results, the investigator can judge the relative importance of the function manipulated.
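A minimal sketch of such a test experiment, in modern Python notation and with entirely invented coefficients, is given below: the same toy model is run with and without one of its feedback terms, and the change in the output of interest is taken as a measure of that term's importance for the question being asked.

def run_model(feedback_gain, dt=0.01, t_end=20.0):
    """A toy regulated variable: dx/dt = inflow - loss*x - feedback_gain*(x - setpoint)."""
    x, inflow, loss, setpoint = 0.0, 1.0, 0.2, 4.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (inflow - loss * x - feedback_gain * (x - setpoint))
        peak = max(peak, x)
    return peak                     # the question of interest: how high does x rise?

with_loop = run_model(feedback_gain=0.5)     # model with the feedback term included
without_loop = run_model(feedback_gain=0.0)  # the same model with the term deleted
print(f"peak with the feedback loop:     {with_loop:.2f}")
print(f"peak without the feedback loop:  {without_loop:.2f}")
print(f"change attributable to the loop: {abs(with_loop - without_loop):.2f}")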
5.4 Adaptability and Its Control

One of the most remarkable characteristics of living organisms is their adaptability: without it the species would have perished from this earth. But it often presents serious difficulties for investigators of physiological phenomena. The time required for an organism to react to the (usually traumatic) investigative techniques is usually very short, so that the investigator will often find at some time during an experiment that he is no longer working with the system with which he started.

In a simulation those conditions which all too often affect system parameters can be held constant, or changed only in accordance with the requirements of the investigation. Those reactions of a specific individual to its environment which make it function differently under different circumstances, or differently at different times under apparently identical circumstances, have the same effect as
though they were random, or "noise," in the system. They make it very difficult to distinguish between changes in measured variables resulting from deliberate experimental changes and changes resulting from uncontrollable factors.

The primary mechanisms which enable an organism to adapt and make it difficult to perform controlled experiments are feedback and redundancy. Feedback systems contribute to homeostasis, the tendency of an organism to resist change in spite of external stimuli. Redundancy complicates experimental procedures by increasing the number of paths from the point of application of a stimulus to the point where the reaction is measured. As contrasted to a living organism, these feedback and parallel paths in a simulated system are under the complete control of the investigator. If they interfere with his observations but are not germane to the study at hand, he can eliminate them. If, on the other hand, they make a necessary contribution to the validity of the simulation, he can easily measure their effect.

5.5 Measurement Facilitated

It has become a truism that it is impossible to measure anything without disturbing that which is measured. There is no field of endeavor in which this fact leads to more difficulties than in the study of physiological systems. If a cell is punctured by a microelectrode, to what extent is its normal function altered? If an experimental animal is anesthetized and cannulas inserted, how representative are the measurements of normal physiological functions? It is difficult to say for sure.

Consider a simple but important example. In the study of cardiac function, a parameter of great interest to investigators is cardiac work per ventricular contraction. But to calculate this it is necessary to determine the amount of blood ejected per contraction and the pressure differential across the ventricle. The pressure differential across the left ventricle can be measured with satisfactory accuracy by passing a cannula up a vein in the arm through the superior vena cava, the right atrium, the right ventricle, the pulmonary valve, and the pulmonary artery into the lung and wedging it into a bronchial notch. If it is properly placed and tightly wedged so that the blood flow past the tip is occluded, the pressure measured at the catheter tip is assumed to be the back pressure from the left atrium and therefore the pressure of the blood flowing into the left ventricle. Then if another catheter is passed through an incision in the neck and down the carotid artery into the aorta, it can be used to measure the pressure of the blood flowing out of the ventricle, which is the pressure that the heart is working against. The instantaneous difference between these two pressures multiplied
by the instantaneous flow rate of the blood ejected from the heart and integrated over one heartbeat will give a reasonably accurate measure of the work performed by that left ventricle during that particular contraction under the existing circumstances. But how can the instantaneous flow out of the ventricle be measured? There are several good blood flow meters, but to measure the flow from a ventricle all of them require thoracotomy, or open-chest surgery, to allow the flow transducer to be placed in or around the aorta. (Or the pulmonary artery, if it is flow from the right ventricle that is of interest.) How much error is introduced by the reaction of the patient (or experimental animal, for that matter) to such a traumatic preparation? It is indeed difficult to say.

Given a valid simulation, mensuration presents no problem. All parameters and variables are available in an analog computer, where they can easily be measured. In a digital computer they are probably computed as part of the simulation; if not, it can easily be done. Obviously, the difficulty is that the stress-provoking procedures described (or others which may be less stressful but more indirect and/or more difficult) will have to be used to gather data required to develop the simulation. Thus, if the simulation faithfully reflects the simuland, the result will be a computer model of a stressed animal. But because the simulation can be more readily manipulated than the animal, the stress can be decreased to "normal." Or it can be increased beyond the limits that the animal would be able to tolerate, thus allowing a full spectrum of experiments to be run with one "preparation" and without sacrificing the animal.

The technique of decreasing the stresses until a normal animal is simulated raises the interesting question of how one knows when he gets there. That is obviously a question that the experimenter must answer, but the author is not about to attempt to do so here. However, the implications of the technique are exciting: If a "normal" physiological system can be simulated and the effect of stresses on measurable variables duplicated, can the same model be used to identify unrecognized stresses which might be causing abnormal values of measured variables?

Closely related to the problem of making measurements without inducing unwanted stress in the subject is that of instrumenting astronauts. The medical team on the ground would like to have continuous measurements of everything. But some measurements are impractical (the cardiac work per ventricular contraction cited, for example) and others are inconvenient or uncomfortable for the astronauts. Furthermore, since the rate at which information can be telemetered from a spacecraft to earth decreases with the distance, the number of physical variables of astronauts in deep-space probes that can be monitored will
be severely limited. But it is believed that the current state of the simulation art offers a solution.

It is believed that it is now possible, and that in the near future it will prove practical, to develop a computer model of a man which will embody the elements, systems, and variables of primary interest to the physiological monitoring team. This model could be tailored to simulate a selected astronaut and "tuned" before the flight to react to stimuli and stress in the same way that he does. Then during flight the model could be driven by easy-to-measure variables (e.g., respiration, heart rate, and galvanic skin resistance) telemetered from the astronaut it was designed to simulate. Since the value of all parameters and variables in the model would be readily available for measurement and observation, a good idea of the astronaut's over-all well-being could be obtained and the effect of stresses evaluated. In case the foregoing sounds too "far out," it should be remembered that what is proposed is actually only a sophisticated method of on-line data reduction! It differs from systems in actual use only in the degree of complexity of the system simulated.
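The cardiac work per contraction mentioned above is a good example of a quantity that is awkward to measure but trivial to extract from a model: it is the integral, over one beat, of the pressure difference across the ventricle times the outflow. The sketch below (modern Python notation, with crude invented waveforms standing in for measured or simulated pressures and flow) shows the computation.

import math

dt = 0.001                     # s
beat = 0.8                     # s, one cardiac cycle (75 beats per minute)
mmhg_to_pa = 133.322           # pressure unit conversion, mm Hg to pascals
ml_to_m3 = 1.0e-6              # volume-flow unit conversion

work = 0.0
for i in range(int(beat / dt)):
    t = i * dt
    if t < 0.3:                                               # crude systolic ejection interval
        p_aorta = 80.0 + 40.0 * math.sin(math.pi * t / 0.3)   # mm Hg, outflow pressure
        flow = 300.0 * math.sin(math.pi * t / 0.3)            # ml/s, ejected flow
    else:
        p_aorta, flow = 80.0, 0.0                             # diastole: no ejection
    p_atrium = 8.0                                            # mm Hg, filling pressure
    work += (p_aorta - p_atrium) * mmhg_to_pa * flow * ml_to_m3 * dt   # joules

print(f"stroke work for this beat: {work:.2f} J")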
5.6 Noise and Its Control
Noise in the sense used here refers to any random or unpredictable variation. Thus included are those changes in physiological systems which cannot be predicted for whatever reason, whether it is because of an incomplete understanding of the system or because the change is truly random. In any case, noise is irksome to the "wet lab" experimenter. Physiological systems, particularly the more complex ones, differ from subject to subject, and within the same subject from time to time. This makes it difficult to "repeat the experiment," a requisite for the validation of all good research.

Though in the past some reservations concerning analog simulations were justified, modern analog equipment and techniques, and certainly digital ones, make it possible to repeat the simulated experiment without the intrusion of unwanted noise. Note the word unwanted: in the simulation of stochastic processes, and in certain other kinds of experiments, noise is necessary. But even then the amount and kind of noise should be under the control of the investigator. The ability to eliminate or control noise not only allows the experimenter to validate his own work by repeating the experiment but also allows him to compare his results with those of other workers in other locations, a difficult thing to do precisely when different experimental animals (or people) are involved. But through simulation it is possible to develop a "standard dog" (or person?).
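In a digital simulation the amount and kind of noise are simply parameters under the investigator's control. A minimal sketch of the point, in modern Python notation with invented values: a seeded noise source makes a stochastic run exactly repeatable, and the same run can be repeated with the noise reduced or removed.

import random

def noisy_measurement(seed, noise_amplitude, n=5):
    """Simulated observations of a 'true' value corrupted by controlled noise."""
    rng = random.Random(seed)          # seeding makes the run exactly repeatable
    true_value = 10.0
    return [true_value + noise_amplitude * rng.gauss(0.0, 1.0) for _ in range(n)]

print(noisy_measurement(seed=42, noise_amplitude=0.5))   # a "wet lab" style run
print(noisy_measurement(seed=42, noise_amplitude=0.5))   # identical: the experiment repeats
print(noisy_measurement(seed=42, noise_amplitude=0.0))   # the same run with the noise removed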
5.7 A Hybrid Example
Because we have discussed some of the more or less unique difficulties attendant on the simulation of physiological systems, and because we have said that hybrid simulation is best for the tough ones, we will use a hybrid simulation of a physiological system as an example of what has been done. The simulation chosen is not meant to be indicative of what could have been done at the time (and could be done better today), much less that which will be possible and practical tomorrow. It was chosen because the author had followed through on the suggestion that the benchmark concept might be applicable to problems in other fields and had developed PHYSBE, a physiological simulation benchmark experiment [12]. Experience in implementing the PHYSBE concept [13] furnished the background necessary for the following discussion.

The example chosen is a special-purpose implementation of the PHYSBE concept. It is particularly pertinent to the preceding discussion, and in addition:

(1) It illustrates how a basic, simplified, reference simulation can be modified and expanded in a particular area to give the investigator a "blow-up," or more detailed representation, of the field of interest.
(2) It demonstrates how the rest of the reference simulation can be used to "close the loop" around the field under investigation and thus give the feedback which is so often all-important in the study of physiological systems.
(3) It emphasizes the fact that to be useful no simulation need be complete in any more detail than that which is required for the study at hand.
(4) It shows how typical simulation "chores" may be allocated to the analog and digital equipment, and the signal flow between them.
(5) It underscores an advantage of all simulation: noise, the unpredictable variations that intrude into clinical studies, the inevitable variations from subject to subject, and in the same subject from time to time, can be controlled.
This modification and refinement of the published version of PHYSBE (by Baker Mitchell and his colleagues in the Department of Mathematics at the University of Texas, Texas Medical Center, Houston) was implemented for the following reasons:

(1) to allow study of effects of hydraulic characteristics and timing of DeBakey-type heart pumps on the over-all circulation;
(2) to provide physiologically accurate pressure curves for obtaining
and testing models of baroreceptor characteristics and their influence on heart rate; and
(3) to analyze and simulate tilt-table responses as recorded from astronauts, and to extrapolate these responses to 0-to-6 g space environments.

The last two are expected to suggest objective criteria for interpreting the effect of tilts and to point the way for future experiments.

In the example case a hybrid rather than an all-analog or all-digital simulation was dictated by the following facts:

(1) All-digital simulations using MIMIC and FORTRAN had been tried, but although both yielded satisfactory solutions to the equations, turnaround times of about a half-day precluded effective investigation of the phenomena of interest.
(2) The aorta, being a relatively stiff elastic tube, introduced high-frequency effects which made digital running time quite long.
(3) The analog computer was better for the kind of empirical investigation that was called for, but there was not enough analog equipment to allow an all-analog mechanization of the system in the detail that was necessary.
The EAI 680 analog computer was used to simulate an "expanded view" of those parts of the circulatory system selected for parametric studies. The additions and refinements which contributed to the expansion were:

(1) left atrial pumping (which required the addition of block 23, Fig. 1),
(2) mass of the blood (the basic simulation considers the blood to be weightless because inertia effects are small),
(3) gravity effects,
(4) nonlinear elasticity of the capillaries in the blood compartments,
(5) the division of the aorta into three segments (blocks 41, 42, and 43 of Fig. 3), using the discrete-space, continuous-time method of solving partial differential equations (illustrated in the sketch following this list),
(6) viscoelastic aortic wall effects, and
(7) a carotid baroreceptor/heart-rate control loop.
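The discrete-space, continuous-time method mentioned in item (5) replaces the spatial derivative of the partial differential equation by differences between a few fixed segments, leaving one ordinary differential equation per segment to be integrated continuously in time. The sketch below is a modern Python illustration with an arbitrary three-segment compliance-resistance line standing in for the aorta; it is not the actual PHYSBE aorta model.

# Three aortic segments, each lumped as a compliance C drained through a resistance R.
C, R, dt = 1.0, 0.5, 0.001
V = [0.0, 0.0, 0.0]                          # one volume state per spatial segment

def inlet_pressure(t):
    return 1.0 if (t % 1.0) < 0.3 else 0.0   # a crude pulsatile driving pressure

for step in range(5000):                     # continuous-time integration, discrete space
    t = step * dt
    P = [v / C for v in V]                   # segment pressures, P = V/C
    f_in = (inlet_pressure(t) - P[0]) / R    # flow into the first segment
    f01 = (P[0] - P[1]) / R                  # inter-segment flows
    f12 = (P[1] - P[2]) / R
    f_out = P[2] / R                         # outflow through a terminating resistance
    V[0] += dt * (f_in - f01)
    V[1] += dt * (f01 - f12)
    V[2] += dt * (f12 - f_out)

print("segment pressures at t = 5 s:", [round(v / C, 3) for v in V])

Each added segment improves the spatial resolution of the pressure wave at the cost of one more integrator, which is exactly the analog-equipment versus detail trade-off described in item (3) above.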
The simulation of all compartments which did not require parametric "twiddling," plus the generation of the "pumping" functions of both ventricles and the left atrium, was assigned to the SDS 930 digital computer. Both FORTRAN and machine language were used to program the 930: FORTRAN for ease of programming and machine language for the analog-digital and digital-analog conversions.
Figure 1 is a block diagram of this simulation, Fig. 2 is a one-page sample of the five-page digital computer listing, and Fig. 3 is the analog computer diagram. It is interesting to note that this particular simulation demonstrated the well-recognized limitations of both the analog and digital computers: they ran out of analog equipment and digital time.
FIG. 1. Block diagram of hybrid mechanization of PHYSBE as modified to include the requirements listed in the text.
5.7.1 Signal Flow
The signal flow, insofar as the information concerning the value of problem variables is concerned, can be seen in Fig. 1. But that diagram does not show nearly all of the communication between the analog and digital computers required for hybrid simulation. As an illustration, consider first that part of the circulatory system in the example simulated on the analog computer, as indicated in Figs. 1 and 3. Before starting the simulation (but after all connections have been made and all pots set), the mode control on the analog is switched to "Initial Conditions." This causes the outputs of all the analog integrators to assume the values set on their respective initial-condition pots. In the example problem the voltages (labeled V23, V3, etc.) represent the volumes of blood in the body compartments that the integrators simulate.

FIG. 2. A sample of the listing of the FORTRAN and machine language program for the SDS 930 portion of the hybrid PHYSBE simulation.

These volumes, or voltages, are then divided by
(actually multiplied by the reciprocal of) the compliance of the respective compartments by electronic multipliers and pots to arrive at the related pressures by mechanizing the relationship

P = V/C.

The resulting pressures are represented by the voltages (P230, P3, etc.) at the outputs of amplifiers connected to the outputs of the multipliers or pots.

FIG. 3. Computer diagram of the analog portion of the hybrid PHYSBE simulation.
The pressure drops across the simulated body compartments are determined by the summing amplifiers (by adding with the sign of one of the variables reversed), and the voltages (F231, F230, etc.) representing the flows into and out of the compartments are determined by
dividing by (multiplying by the reciprocal of) the resistances to flows between the compartments by means of the associated pots. In other words, the relationship (Pn - Pn+1)/R is mechanized. Note that, appropriately, the resulting voltages are fed to the integrators representing both the upstream and the downstream compartments; that is, the fact that the flow out of the upstream compartment must be equal to the flow into the downstream compartment is assured by letting the same voltage represent both flows.

Similar relationships are programmed into the digital computer for the body compartments simulated digitally. But as the physiological system being simulated is a closed circuit, there must be an analog-to-digital and a digital-to-analog interface for signals representing the value of variables on both sides of each interface.

Typically the interface equipment will include, besides the Analog-to-Digital Converters (ADCs) and the Digital-to-Analog Converters (DACs), sense lines, interrupt lines, and control lines. A patchbay and/or some digital logic is also often included to allow flexibility in the routing of signals.

Sense lines allow the digital computer to interrogate the analog computer to determine some preselected condition of the analog computer or of the simulation. They carry only binary information; that is, they are either low (binary zero) or high (binary one). Thus they can be sampled by the digital computer to determine the mode (pot set, initial conditions, operate, etc.) of the analog computer or the condition (open, closed) of function switches, relays, comparators, etc.

Interrupt lines carry commands to the digital computer which cause it to respond in various ways depending on how the computer has been programmed. Typically, interrupt lines are used in hybrid simulation to cause a digital computer working in a "background-foreground" or a time-shared mode to stop work on the current problem; store that program, information, and status; call up the simulation program and data; and execute the appropriate commands in support of the hybrid operation.

The primary difference between sense lines and interrupt lines is that the digital computer must be programmed to interrogate sense lines, whereas a command on an interrupt line is executed as soon as the digital computer completes the command on which it is working at the time of the interrupt. Therefore, insofar as the digital computer is concerned, sense lines may be considered synchronous in that they are only interrogated at specific times in the digital program, as contrasted to interrupt lines which can interject their commands at any time.

Control lines carry commands in the opposite direction, from the digital to the analog computer, and are typically used when a digital
computer is in command and is controlling the mode and/or other aspects of the analog operation. In the example problem only the sense lines were used.

Details of both analog and digital programming are beyond the scope of this article. Suffice it to say that the digital computer was programmed to go into a fast iterative loop which included a branch point. The branch point required the digital computer to test a mode sense line from the analog computer at each iteration. If the sense line was low (binary zero) it indicated that the analog computer was in some mode other than "operate" and the digital computer continued to "idle" in the iterative loop. But when the analog mode control was put into operate (causing all analog integrators to integrate) the sense line would go high (binary one), and after the next interrogation the digital computer would branch into the preprogrammed subroutine to generate the functions 1/C23 and 1/C3 required to cause the left atrium and the left ventricle simulated on the analog computer to "pump."

The instantaneous digital values of 1/C23 and 1/C3 were converted to an analog voltage by the DACs at the interface and used to drive the multipliers to produce the voltages representing the left atrial and the left ventricular pressures. The voltages representing these pressures in turn caused the voltages representing the flows and volumes in the analog computer to change as functions of time in a way analogous to their counterparts in the physiological system.

The analog voltages F231, representing the flow from the lungs into the left atrium, and P43, representing the pressure in the descending aorta, were converted at the interface into binary inputs to drive the digital part of the simulation. The digital computer thus driven generated the pressures, flows, and volumes associated with the body compartments it simulated. Of these P220, the pressure in the pulmonary vein, and F430 (equal to the sum of the flows into the arms, head, trunk, and legs computed digitally) were converted at the interface to analog voltages to close the simulated circulation loop.
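The relationships mechanized above, pressure from volume and compliance, flow from pressure drop and resistance, and the sense-line test that gates the digital "pumping" function, can be mimicked in a few lines. The sketch below is a modern Python illustration, not the original SDS 930 FORTRAN; the two-compartment loop, the compliance values, and the time-varying 1/C waveform are all invented for the example.

import math

dt, R = 0.001, 1.0                        # integration step and flow resistance (invented units)
V23, V3 = 50.0, 150.0                     # volumes of two compartments (pumping chamber and load)
C3 = 10.0                                 # fixed compliance of the second compartment

def one_over_c23(t, period=0.8):
    """Invented time-varying 1/C for the pumping chamber (systole, then diastole)."""
    phase = (t % period) / period
    if phase < 0.4:                                           # crude systole
        return 0.5 + 2.0 * math.sin(math.pi * phase / 0.4) ** 2
    return 0.5                                                # diastole: chamber stays compliant

analog_in_operate = True                  # stands in for the mode sense line from the analog

t = 0.0
for _ in range(5000):
    if not analog_in_operate:             # the digital program "idles" until the sense line is high
        continue
    p23 = V23 * one_over_c23(t)           # P = V * (1/C), the relationship mechanized above
    p3 = V3 / C3
    f = (p23 - p3) / R                    # F = (P upstream - P downstream) / R
    V23 += dt * (-f)                      # the same flow leaves one compartment...
    V3 += dt * (+f)                       # ...and enters the next, so total volume is conserved
    t += dt

print(f"V23 = {V23:.1f}, V3 = {V3:.1f}, total = {V23 + V3:.1f}")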
of project: 1967; Active; 5
1961
Terminated in 1965
-
3

Institution: Computers-in-Law Inst., The National Law Center, George Washington Univ., Washington, D.C., and AUTOCOMP, Inc.
Name or aim of project: Handling and retrieving tax law (about 20,000 pages of material)
Year research was started: Summer 1966
Present status of project: Active
Current number of workers: 11 programmers, 3 lawyers
Full time equivalents: 14
Current annual budget: $250,000
Type of computer used: IBM 360/40 at George Washington Univ., Recognition Equipment scanners, Photon 901 and 713 photocomposition equipment
Current monthly number of computing hours: About 40
Sponsoring agencies or institutions: Major book publisher, George Washington Univ., AUTOCOMP, Inc.
Project leader(s): John C. Lyons; George Cozzens

Sponsoring agencies or institutions: The Weizmann Inst. of Science, The Hebrew Univ., Bar Ilan Univ.
Project leader(s): Aviezri S. Fraenkel

Sponsoring agencies or institutions: IBM, Council on Library Resources
Project leader(s): William B. Eldridge
Institution: Law Research Service, Inc., Western Union Building, 60 Hudson St., N.Y.
Name or aim of project: Remote case law retrieval on commercial basis
Year research was started: 1964?
Present status of project: Active
Current number of workers: ?
Full time equivalents: ?
Current annual budget: ?
Type of computer used: UNIVAC 418 and two Honeywell 1200 computers
Current monthly number of computing hours: ?
Project leader(s): Elias C. Hoppenfeld

Institution: Antitrust Division, Department of Justice, Washington, D.C.
Name or aim of project: Retrieval system for Supreme Court decisions, briefs, legal memoranda, opinions of interest to the Antitrust Division
Year research was started: 1962
Present status of project: Inactive since 1964
Current number of workers: 1 programmer, 4 lawyers (formerly)
Full time equivalents: 5 (formerly)
Current annual budget: ?
Type of computer used: IBM 407 (card machine)
Current monthly number of computing hours: 25 (formerly)
Project leader(s): John C. Lyons; Michael A. Duggan

Institution: Internal Revenue Service, Washington, D.C.
Name or aim of project: RIRA (Reports and Information Retrieval Activity)
Year research was started: 1962
Present status of project: Active
Current number of workers: 2?
Full time equivalents: 1?
Current annual budget: ?
Type of computer used: IBM 7074 (at Detroit)
Current monthly number of computing hours: 8
Project leader(s): Charles Casazza (founder and former leader: David T. Link)
Appendix III (Continued)

Institution: Central Intelligence Agency
Name or aim of project: Retrieval system for about 100,000 documents
Year research was started: ?
Present status of project: ?
Current number of workers: ?
Full time equivalents: ?
Current annual budget: ?
Type of computer used: ?
Current monthly number of computing hours: ?
Project leader(s): ?

Institution: Federal Trade Commission, Washington, D.C.
Name or aim of project: Retrieval system for Federal Trade Commission decisions, Circuit Court and Supreme Court decisions
Year research was started: 1963?
Present status of project: Dormant or terminated?
Current number of workers: ?
Full time equivalents: ?
Current annual budget: ?
Type of computer used: Card and magnetic tape equipment
Current monthly number of computing hours: ?
Project leader(s): Paul R. Beath

Institution: Federal Communication Commission, Washington, D.C.
Name or aim of project: Retrieval system for Federal Communications Commission decisions
Year research was started: 1965
Present status of project: Active
Current number of workers: 2 programmers, 4 lawyers
Full time equivalents: ?
Current annual budget: ?
Type of computer used: UNIVAC 3 at Federal Communication Commission
Current monthly number of computing hours: About 8
Project leader(s): John C. Lyons; B. Slosberg
Institution: Federal Aviation Agency, Washington, D.C.
Name or aim of project: Aviation law indexing and retrieval system
Year research was started: 1964
Present status of project: Active
Current number of workers: 2 programmers (half-time), 3 lawyers
Full time equivalents: 4
Current annual budget: ?
Type of computer used: IBM 360/30 at the Federal Aviation Agency
Current monthly number of computing hours: 8
Project leader(s): John C. Lyons

Institution: Department of Labor, Washington, D.C.
Name or aim of project: Retrieval system on unemployment, compensation and security
Year research was started: 1966
Present status of project: Active
Current number of workers: 1 programmer (half-time), 3 lawyers (half-time)
Full time equivalents: 2
Current annual budget: ?
Type of computer used: IBM 1401 at Department of Labor
Current monthly number of computing hours: Not yet in production
Project leader(s): John C. Lyons; James Minor

Institution: CREDOC, Brussels, Belgium
Name or aim of project: Set up retrieval system for all Belgian law, using manual indexing
Year research was started: 1966
Present status of project: Active
Current number of workers: ?
Full time equivalents: ?
Current annual budget: ?
Type of computer used: BULL Gamma 115 and IBM 360 system of L'Information Rationelle?
Current monthly number of computing hours: ?
Sponsoring agencies or institutions: Brussels bar, Belgian universities
Project leader(s): Baron Edouard Houtart
Appendix III (Continued)

Institution: Paris bar, 38, rue Scheffer, Paris
Name or aim of project: Experiment of manually indexed system for companies legislation retrieval
Year research was started: 1967
Present status of project: Active?
Current number of workers: 35?
Full time equivalents: 7?
Current annual budget: ?
Type of computer used: BULL Gamma 115 and IBM 360 system of L'Information Rationelle?
Current monthly number of computing hours: ?
Sponsoring agencies or institutions: Paris bar
Project leader(s): Claude Lussan

Institution: Dept. of Political Science, Univ. of Washington, Seattle, Wash.
Name or aim of project: U.N. Treaty Series Project
Year research was started: 1963
Present status of project: Active
Current number of workers: Varies with student enrollment
Full time equivalents: Varies with student enrollment
Current annual budget: Varies with student enrollment
Type of computer used: IBM 7094 and Burroughs B-5500
Current monthly number of computing hours: Varies greatly with uneven demand
Sponsoring agencies or institutions: Univ. of Washington
Project leader(s): Peter H. Rohn

Institution: Oklahoma State Univ., Stillwater, Okla.
Name or aim of project: "Point of law" retrieval system for statutory and case law
Year research was started: 1957
Present status of project: Terminated
Project leader(s): Robert T. Morgan (deceased)
Institution: Western Reserve Univ., Cleveland, Ohio
Name or aim of project: Electronic searching of legal literature
Year research was started: After 1955
Present status of project: Terminated before 1961
Current number of workers: -
Sponsoring agencies or institutions: Council on Library Resources
Project leader(s): Jessica S. Melton; Robert C. Bensing

Institution: Michie Co., Bureau of National Affairs, Matthew Bender and Co., Jonker Business Machines, American Assoc. of Law Libraries
Name or aim of project: Project Lawsearch; nonelectronic manually operated retrieval system for motor carrier case law
Year research was started: 1962?
Present status of project: Terminated before 1967
Current number of workers: -
Project leader(s): William H. B. Thomas
Large Scale Integration - An Appraisal

L. M. SPANDORFER
Sperry Rand Corporation, Univac Division
Philadelphia, Pennsylvania
1. Introduction
2. Device Fabrication
3. Packaging
4. Economic Considerations
5. Interconnection Strategies
   5.1 Fixed Interconnection Patterns
   5.2 Discretionary Wiring
   5.3 Special Studies
6. Bipolar Circuits
   6.1 Saturating Circuits
   6.2 Nonsaturating Circuits
7. MOS Circuits
8. LSI Memories
   8.1 Bit Organization
   8.2 Word Organization
   8.3 Content-Addressable Memories
   8.4 Read-only Memories
   8.5 Reliability
9. Further System Implications
References
1. Introduction
The last generation of digital computer circuits consisted of discrete components such as transistors and resistors interconnected on printed circuit cards. Present third generation integrated circuit technology is based on the fabrication of one or more entire circuits on an almost microscopically small silicon chip. The action today, denoted Large-Scale Integration (LSI), is based on fabricating a plurality of circuits on a single silicon chip roughly equal to or greater than the logic power contained on many third generation cards. More than 100,000 logic circuits can theoretically be batch fabricated on a 2-in. slice of silicon, leading to predictions which soar from vanishingly small processor costs to the era of a computer-on-a-chip. Large-scale integration has been flowering since 1964-1965. No single invention initiated its growth; always theoretically possible since the
development of silicon planar technology, LSI was delayed because of a preoccupation with critical learning problems at the small chip level. No single proposal or publication seems to have sparked the fire and set forth the long-term technical framework. Instead, the LSI concept evolved out of inescapable observations on hypothetical yields of large chips on wafers, coupled with the possibility of improved masking

provide the semiconductor industry with a slender revenue in the order of several hundred million dollars. The increase in the level of engineering design does not really confront the system manufacturer with an altogether new situation, since he is already accustomed to providing automated routines for partitioning, assignment, layout, and simulation currently used to support PC card manufacture. On the other hand, assumption of the design role by the independent semiconductor supplier might result in substantial new elements of cost. Regardless of which group undertakes the LSI chip design, a major increase in logic simulation activity will be required at the chip level to insure correctness of design; this is particularly critical because of the change or turnaround problem in LSI. Perhaps the major new elements of design cost are those related to the rather formidable problem of specifying chip test and the generally more involved interface between part supplier and user. Testing third generation cards is a comparatively simple matter since all piece parts are individually and independently tested before assembly. The practical determination of a suitable test for complex sequential networks has thus far proved to be difficult and costly in terms of machine time. The change problem is particularly perplexing, and it is lacking in a sufficient number of good proposals for its solution. If a change is required in the early development phase of an LSI machine program, redesign costs will be incurred; whether or not there will be a schedule slippage depends on the program critical path. However, during the machine test phase, current practice is to install a temporary fix within minutes, possibly hours. If the need to make a fix occurs during test, both slippage and redesign costs will accrue; a string of repeated fix cycles would be intolerable. A good discussion of the design-redesign cycle has been given by Smith and Notz [76]. The part number problem can be illustrated with reference to the processor in a large-scale third generation system. The logic of a machine in this class might typically require on the order of 18,000 OR-inverter
(NOR) gates or equivalent. Assuming a maximum 36-gate chip, the average chip might carry about 30 gates, resulting in a total requirement of 600 unique part numbers. As for volume, the expected market potential for the large-scale machine over the next 4-6-year period appears to be numbered in the many hundreds or low thousands. The volume could increase considerably depending upon the growth of new innovations such as file-oriented systems, and various new market areas such as the utility field which are best suited to the large machine. Assuming our hypothetical processor makes a sizable penetration into this market, we see that about 1000 parts of each of 600 part numbers might be required. On the other hand, an equivalent third generation machine might have 10 semiconductor part numbers and an average of 600 parts per part number. In terms of the traditional high-volume, low unit-cost practices of semiconductor manufacturing, the transition is quite unfavorable. It is too early to assess the degree and manner in which the semiconductor and system industries will adjust to this and other economic aspects of LSI. The example of the hypothetical large-scale machine is particularly significant since this is one of the few areas where LSI offers hope for improving performance. Volume requirements for classes of systems and subsystems other than the processor are more encouraging. Smaller processors have a much larger volume base and more than an order of magnitude fewer part numbers. Terminal equipment is undergoing a very rapid growth; comparatively few part numbers are required. Low [45] has cited the example of a typical airline reservation system: whereas the duplex processors require about 23,000 circuits, the terminals use over 290,000 circuits. Assuming 100 circuit chips, about four part numbers are required for the latter. Part numbers for I/O controllers are as high as for a small processor but the part volume is growing steadily. Memory part numbers are low; the volume could become very large depending upon the outcome of its formidable competition with magnetics. One technique which should contribute in some measure to the solution of the design, change, and part number problems lies in the use of general-purpose or master chips [60] discussed in the next section; these chips are identical in all stages of fabrication except for the final metallization step in which the required logic function is defined. Hobbs [34], Flynn [27], and Rice [68] have pointed out that silicon costs comprise only a very small fraction of the cost-of-ownership of digital computers. Rice indicates that the cost of the IBM 360/50 installation at Iowa State University including all auxiliary equipment, manpower, and overhead is $109,600 per month. Programming costs account for $40,000 or 36.5%, installation operation costs $36,000 or 33%, and
$33,600 or 30.5% is applied to machine rental. It is further estimated that about one-third of the rental is for hardware and software, one-third is for sales and service, and one-third is for administrative overhead and profit. The fraction of the rental attributable to hardware is about 5.8%, and Rice estimates the silicon portion of the hardware to be 2%. The present writer does not concur with this implicit estimate of the rental attributable to software and believes it should be almost an order of magnitude lower. In any event, it appears that the logic power of systems could be considerably enhanced without adversely affecting the cost to the user. This point will be briefly discussed in a subsequent section.
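The arithmetic behind the part number and cost-of-ownership figures quoted above is easy to re-derive. The short Python sketch below does so; all inputs are the numbers cited in the text, and the small rounding differences against the quoted percentages are present in the original figures as well.

# Back-of-the-envelope check of the part number and cost figures quoted above.
def part_numbers(total_gates, avg_gates_per_chip):
    """Unique chip types needed if each type carries avg_gates_per_chip gates."""
    return total_gates // avg_gates_per_chip

print("LSI part numbers:", part_numbers(18_000, 30))   # about 600 unique chip types

monthly = {"programming": 40_000, "installation operation": 36_000, "machine rental": 33_600}
total = 109_600
for item, cost in monthly.items():
    print(f"{item}: {100 * cost / total:.1f}% of ${total:,} per month")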
5. Interconnection Strategies

Yield considerations give rise to a spectrum of wafer utilization strategies with Fixed Interconnection Pattern (FIP) and Discretionary Wiring (DW) currently given primary and limited consideration, respectively. Briefly, the FIP strategy involves using the same interconnection masking pattern on all the chips in a wafer, regardless of fault locations. The DW strategy does not make use of a chip structure. Instead, the entire wafer is first analyzed for faults; a specialized masking pattern is then produced which bypasses the faulty gates and implements the required function.

5.1 Fixed Interconnection Patterns
The fixed interconnection pattern is based on two premises: (1) reasonable yields can soon be expected for high device density chips of approximately 60-100 mils square, with larger sizes economically feasible within five years; and (2) device densities considerably greater than in current use should soon be feasible because of improvements in tolerances and general processing, thus potentially insuring near-term acceptance of the strategy with only a moderate increase in chip size. The fixed interconnection pattern is a conceptually straightforward extension of current practice: masks are prepared in advance, the wafer is processed and diced, and the chips are operationally tested. The strategy requires that very small geometry devices must be used to attain a high level of circuit complexity. A serious problem arises in testing FIP chips, namely, that test data cannot be obtained from the chip prior to final metallization. This is because of the inherent high packing efficiency of the FIP strategy which leads to the elimination of pad space for gates within the interior of the
chip. Thus the test procedure must be deferred until the I/O pads are metallized at the completion of wafer processing, and a pad-limited functional test must be provided; as noted earlier, fault detection with a limited number of terminals is vastly more complex than the PC card test philosophy used in third generation systems. Fault detection reduces to the problem of finding a manageable sequence of input test patterns that will guarantee to some level of confidence that the chip is nondefective. A fault table can be constructed by providing a column for each conceivable fault, and a number of rows equal to the product of the number of input patterns times the number of internal states. A one is entered in the table if a particular fault can be detected by a specified input and internal pattern; a zero otherwise. In principle, a minimal test sequence can be determined by selecting a minimal set of rows which cover all ones. Performing this in practice with chips containing, say, 10 or more storage elements and 30 or more input pins is clearly difficult. Much of the published work on testing is on problem formulation rather than on practical results; several pertinent studies are indicated in the references [2, 75, 48]. Testing is further complicated since (1) failures are not limited to gates, and the width of the table must be increased accordingly; (2) dynamic testing with varying load, temperature, etc., is desired; and (3) the chip manufacturer and user must jointly determine a confidence level for the test.
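To make the row-selection step concrete, the following Python sketch builds a tiny fault table and chooses test rows with a greedy covering heuristic. It is only an illustration of the covering formulation described above: the table contents are invented, and a greedy choice does not guarantee a truly minimal test sequence.

# Illustrative fault-table covering.  Rows are candidate test patterns (an input
# pattern applied in a given internal state); columns are hypothetical faults.
# table[r][f] == 1 means test r detects fault f.
table = {
    "t0": {"f1": 1, "f2": 0, "f3": 1, "f4": 0},
    "t1": {"f1": 0, "f2": 1, "f3": 0, "f4": 0},
    "t2": {"f1": 1, "f2": 1, "f3": 0, "f4": 0},
    "t3": {"f1": 0, "f2": 0, "f3": 0, "f4": 1},
}

def greedy_test_set(table):
    """Pick tests until every fault is covered (greedy, not provably minimal)."""
    uncovered = {f for row in table.values() for f in row}
    chosen = []
    while uncovered:
        best = max(table, key=lambda r: sum(table[r][f] for f in uncovered))
        if sum(table[best][f] for f in uncovered) == 0:
            break                       # remaining faults are undetectable
        chosen.append(best)
        uncovered -= {f for f in uncovered if table[best][f]}
    return chosen, uncovered

tests, undetected = greedy_test_set(table)
print("selected tests:", tests, "undetected faults:", undetected)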
The fixed interconnection pattern can be subdivided into master and custom layout strategies. Master embodies the notion of a general-purpose chip which is completely prediffused and then inventoried, if desired, without interconnections. A specific function required by a user is implemented merely by fabricating and applying the final interconnect masks. As in most general-purpose schemes, there is a disadvantage; in this case it is moderately inefficient use of silicon area, implying nonoptimization for high volume applications. Custom layouts are tailored precisely for the function at hand, thereby achieving optimum area utilization at the expense of generating an efficient and specialized complete mask set for each function. Custom is best suited for high volume applications which can support high engineering layout and mask design costs. An analysis of layout efficiencies by Notz et al. indicates that custom achieves about twice the area utilization of master at the 50-100 circuits per chip level, with the ratio increasing at larger integration levels [59]. An example of a master chip is the 80-gate MOS chip produced by Fairchild Semiconductor and shown in Fig. 8. Figures 8a and 8b show an 80 x 80 mil chip prior to and after metallization, respectively; each of the five major vertical columns contains the equivalent of 16 three-input gates. The chip uses two-layer metal. The first layer defines the gates and provides limited gate-to-gate interconnections; the second level completes the interconnections. Thus, three specialized masks are used: one for insulation cuts and two for the first and second layers of metal. The interconnection widths and via holes are approximately 0.4-0.5 mil; relatively spacious x and y wiring corridors of about 20 wires per cell row and column are used, presumably to satisfy a wide range of user applications.
FIG. 8. Example of master MOS chip (a) prior to metallization; (b) after metallization (courtesy of Fairchild Semiconductor, A Division of Fairchild Camera and Instrument Corp.).
For more restricted usage, practice indicates that corridors of about 6-8 wires are able to handle a large fraction of the wiring requirements at the 80-gate level. An early custom chip designed by the Motorola Semiconductor Products Division is shown in Fig. 9. The 120 x 120 mil chip contains
16 ultra-high-speed content-addressable memory bits based on current mode circuits with 2 x 0.2 mil emitter geometry. The chip is made up of 2 x 2 arrays of 4 bit cells; each 2 x 2 is 60 mils square and contains 131 devices; the custom layout problems are solved at the bit level and replicated. Four metal layers are used: the first two intraconnect within the 2 x 2’s, and the upper two interconnect between 2 x 2’s.
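For readers unfamiliar with the term, a content-addressable (associative) memory is interrogated by value rather than by address: every stored word is compared against a search word in parallel, and the matching locations are flagged. The Python sketch below illustrates only that match function for a hypothetical 16-word store; it says nothing about the circuits or layout of the Motorola chip.

# Functional illustration of a 16-word content-addressable memory search.
# The stored words and search keys are made up for the example.
words = [0b1010, 0b0110, 0b1010, 0b0001] * 4        # 16 stored 4-bit words

def cam_match(words, key, mask=0b1111):
    """Return the indices of all words equal to key on the masked bit positions."""
    return [i for i, w in enumerate(words) if (w ^ key) & mask == 0]

print(cam_match(words, 0b1010))                     # every location holding 1010
print(cam_match(words, 0b0010, mask=0b0011))        # match on the two low bits only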
(b) FIG. 8b.
5.2 Discretionary Wiring

Since fault patterns vary from wafer to wafer, DW has the drawback that functionally identical arrays will in general require different interconnect mask patterns. The key problem is the critical dependence on data processing for mask layout, albeit off-line, and a host of other production control routines per copy. Further, use of a unique mask implies
a relatively slow production process for a given capital investment. On the other hand, the inherent suitability of conventional device geometries to DW can be construed as an advantage, albeit somewhat ephemeral; the implication is that DW has the potential for implementing large arrays at an early date. Although advanced techniques for inspecting gates may eventually be developed, the only currently acceptable method involves mechanical probing of each elementary cell, i.e., gate and storage element.
FIG. 9. Custom chip containing 16 ultra-high-speed content-addressable memory (courtesy of Motorola Semiconductor Products, Inc.).
As noted, probing requires area-consuming pads of the order of 2-3 mils, resulting in a relatively low circuit density in the array. The area required by the pads sets a lower limit on cell size; a Diode Transistor Logic (DTL) gate currently in high volume production, for example, requires an area (~200 square mils) which is not much larger than the area of the pads and leads which service it. Possibly the most serious drawback to DW is that considerable wafer processing is required after the gate probing step, and odds on the creation of at least one post-probe fault appear to be high; no data have yet been presented to refute this self-limiting point. Discretionary wiring techniques have been described by workers at
Texas Instruments and International Business Machines [28, 41]. Texas Instruments has reported studies on a research vehicle involving a several thousand gate general-purpose military computer, the largest known attempt at logic LSI thus far, at least in terms of gate count. Typically, several hundred gates or equivalent are assigned to a 1-1/4 in. wafer containing about 1000 gates. The several hundred gate arrays make use of two metal and two insulation masks. High-speed mask-making techniques are required to produce a set of four masks per array and keep up with foreseeable high production rates. One technique under development uses 2:1 or 1:1 contact printing generated by a high-resolution Cathode Ray Tube (CRT) with a fiber optic faceplate, and another uses projection printing from a conventional high resolution faceplate; an additional reduction step is needed if 2:1 printing is used to simplify the requirements on the CRT. The coarse line widths of the order of one mil available with CRT 1:1 printing are reasonably well matched to upper level metal requirements in DW. An alternate approach to the CRT makes use of a light beam impinging directly on the wafer which in turn is carried on an x-y table positioned by a stepping motor. This method provides very high precision and repeatability and is free from the various distortions found in CRT systems; however, it is several orders of magnitude slower than a CRT (but considerably less expensive). Studies indicate the CRT mask-making portion of the over-all system has the potential of supplying a complete mask set every 40 sec; this is generally longer than the time that might be required by a UNIVAC 1108 (registered trademark of the Sperry Rand Corporation), say, to process the placement and wiring algorithms. Clearly, a detailed system analysis covering all bookkeeping and production line aspects must eventually be made before a meaningful DW throughput rate, and hence economic feasibility, can be established. Discretionary wiring may have potential advantages over FIP which seem to have not yet been exploited. One concerns the difficulty of testing sequential circuits already noted. The comparative grossness of the dimensions used in DW suggests the possibility that a considerable simplification in fault detection could be achieved if, at some sacrifice in wiring density, special interior test pads could be carried up to the top layer for subsequent mechanical probing. A second potential advantage is that for many years in the future DW may be capable of providing an optimal approach to the severe problems of nanosecond LSI. Depending on yield statistics and the difficulty posed by the self-limiting effect cited earlier, it may be possible to obtain many hundreds of good gates on a wafer; this is substantially in excess of the most optimistic projections for FIP, and provides the basis for the large-scale partitioning
needed at the 1-nsec level. In addition, the comparatively high part number and low volume considerations for a large-scale system with nanosecond technology are not necessarily incompatible with the economics of DW. An example of a DW layout used by Texas Instruments for exploiting a complete wafer is shown in Figs. 10 and 11. The wafer contains a memory subsystem consisting of 3840 bits and 240 word drivers and a second level decoder [19, 87]. The organization is flexible and provides a maximum of, say, 60 words of 64 bits each on a hypothetically perfect wafer. Figure 10 shows the organization of the memory on the wafer, and Fig. 11 contains a map of the faulty cells on a particular wafer.

FIG. 10. Organization of the memory on the wafer (address inputs, second level decode gates and word drivers, and four 16-bit columns).

The word direction is divided into four columns or word groups of 16 bits each. Studies have indicated that at least 13 of the 16 bits in a group should be nondefective, thus enabling a system word length of 52 bits. Bit cell size is about 145 square mils, which is sufficiently large to allow access to all pads; consequently, only a single discretionary metal layer is required and is used to form the bit lines. If a given bit cell is defective, it is not connected to the bit line. Bit registration is maintained by jogging the otherwise straight bit line to a right or a left neighboring bit cell. A typical array map showing good and bad bit cells and a discretionary mask drawing with bit line jogging are shown in Fig. 11;
the bit lines are horizontally directed. If 13 good bits cannot be found within a 16-bit group, the group is not used and group jogging is employed instead.

FIG. 11. (a) Illustrative map of wafer yield (active memory slice map: cell yield = 87.4%, word yield = 69.5%), and (b) discretionary wiring mask (courtesy of Texas Instruments, Inc.).
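The 13-out-of-16 acceptance rule quoted above can be turned into a quick yield estimate. The sketch below computes, for an assumed independent per-cell yield, the probability that a 16-bit group delivers at least 13 good bits and that all four groups of a word do so; the independence assumption and the use of the 87.4% figure as the per-cell probability are illustrative only and are not the analysis actually performed by Texas Instruments.

# Probability that a 16-bit group has at least 13 good cells, assuming each cell
# is good independently with probability p (simple illustrative model only).
from math import comb

def group_ok(p, n=16, need=13):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

p = 0.874                                # cell yield quoted for the mapped wafer
g = group_ok(p)
print(f"P(group usable)        = {g:.3f}")
print(f"P(all 4 groups usable) = {g**4:.3f}")   # a 52-bit word needs all four groups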
Alternate approaches to wiring a memorylike array without the need of topologically jogging the bit line have been examined [77]. Although it has been shown that techniques such as binary erasure error correcting codes and logic deskewing circuits are applicable and permit the use
of topologically straight bit lines, the trade-off is generally made at the expense of reduced throughput capability and added circuit complexity.
5.3 Special Studies
Workers in various aspects of switching theory have occasionally turned their attention to certain issues of LSI such as the attributes of simple wiring patterns, wire layer minimization, and universal logic blocks. One activity has involved a study of cellular arrays by Minnick and co-workers [50, 51, 52] and Canaday [7]. An early and primitive form of cellular structure is the two-dimensional array shown in Fig. 12.
FIG. 12. Cellular logic array and cutpoint notation (the accompanying table lists the cutpoint cell functions, indexed by the four-bit code abcd).
The study of cellular arrays presupposes that simple wiring patterns on an LSI chip will be less costly than conventional irregular patterns; simple wiring patterns are then postulated (the one shown in Fig. 12 is a particularly simple and early prototype), and the logic power of the resultant structures is studied. Each cell can be set to any one of 16 functions of the two input variables x and y (six cell functions are
actually sufficient to implement all combinational switching functions [50] or an R-S asynchronous set-reset flip-flop). This structure provides an arbitrary n-variable combinational switching function with an array n + 1 cells high and no more than 2n-2 cells wide. A number of elegant synthesis techniques have been devised for specifying the function of each cell and reducing the size of the array, and interconnection patterns somewhat more extensive than the simple rectangular nearest neighbor pattern of Fig. 12 have been introduced [51]. An extensive survey of cellular logic work has recently appeared [as]. Each cell in the early studies was set to the desired function during the regular masking procedure by appropriate metallization of its so-called cutpoints. Since the chip function thus formed incurs the possible disadvantage of being rigidly and permanently fixed, flexible programmable arrays were subsequently proposed in which the precise chip function need not be fully committed in hard wiring during manufacturing but instead could be specified later by an appropriate set of control signals. An early suggestion for achieving programmability used photo-conductive cutpoint switches, removable photomasks, and a light source. A more recent proposal [77] employs a special 2D memorylike control array superimposed on the logic array; the role of the control array is to store the cutpoint data and thus provide a first-step solution to the change or "yellow-wire" problem. Since speed requirements on the control array are negligible, it could conceivably be implemented with low power MOS devices; the logic array could use higher performance bipolar circuits, if needed. The noncritical loading or input rate imposed on the control array results in a minimal pin requirement for entering control data onto the chip. Although intermixing MOS and bipolar circuits poses problems which have only been partially solved, recent progress by Price [67] and Yu et al. [92] in combining the technologies is encouraging. An important approach to circumventing dependence on a volatile control array lies in the use of a device such as the variable threshold field effect memory element to be described later in the memory section. In principle, such devices constitute a near-ideal control array element with respect to minimum area and power requirements. Ignoring the inefficiencies of cellular logic for the moment, the manufacturing advantages which might accrue in its use are not insignificant. The highly regular and simple interconnection patterns called for in cellular arrays are presumably simple to fabricate. The need to solve wire-routing algorithms and perform complex layouts on silicon is eliminated. Chip specialization by cutpoints is done at precisely specified interconnect points rather than by irregular interconnect lines. A maximum of two wiring layers is required.
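As a rough functional illustration of the cutpoint idea, the Python sketch below models a small rectangular array in which every cell is personalized to one of the 16 two-input functions and, in the spirit of the simple pattern of Fig. 12, takes one input from the cell above and one from the cell to its left, passing its output both downward and to the right. The connection convention and the example personalization are assumptions made for the sketch; they are not taken from the cellular logic papers cited above.

# Toy cutpoint cellular array.  Each cell holds an index 0-15 selecting one of
# the 16 Boolean functions of its two inputs x (from above) and y (from left).
FUNCS = [lambda x, y, k=k: bool((k >> ((x << 1) | y)) & 1) for k in range(16)]

def run_array(personalization, top_inputs, left_inputs):
    """personalization[r][c] selects the function of cell (r, c)."""
    rows, cols = len(personalization), len(personalization[0])
    out = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            x = top_inputs[c] if r == 0 else out[r - 1][c]      # input from above
            y = left_inputs[r] if c == 0 else out[r][c - 1]     # input from the left
            out[r][c] = FUNCS[personalization[r][c]](x, y)
    return out[rows - 1][cols - 1]       # take the lower-right cell as the output

AND, OR = 8, 14                          # truth-table indices for AND and OR
print(run_array([[AND, OR]], [True, False], [True]))   # (True AND True) OR False -> True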
The trade-offs for these features, however, are in general severe; they are (1) excess silicon area, possibly with an excessively rectangular or nonsquare form factor, and (2) an increase in the number of cells through which a signal must propagate in comparison with conventional logic not possessing stringent interconnect pattern limitations. Both trade-offs arise because of limited cell fan-in, fan-out, and pattern restrictions. The first trade-off might eventually diminish slightly in importance but will always remain an important competitive issue; the second trade-off, involving propagation delay, and possibly reliability, appears unlikely to diminish in importance, although many applications exist where speed is not critical. Current practice has sidestepped the cellular logic precepts and is instead forging ahead with irregular interconnects and multilayer technology which provide minimal area and delay parameters. Whereas current directions will undoubtedly succeed by some measure because of the heavy investment commitment, it should be noted that one of the important deterrents to manufacturing success today, namely, lack of sufficient automation, would be considerably lessened with cellular logic. Work has also been carried out on various aspects of the multilayer problems arising in large-scale integrated circuits. One study showed that given a logic function it is always possible to synthesize it in a way such that no logic line crossovers are required [79]; power lines are not considered to be logic lines and are treated separately. The result tends to involve a worst case layout restriction in the sense that logic lines in effect are not permitted to cross over or under gates but instead are more or less constrained to lie in the intergate corridors. Single layer logic can be achieved with no additional input pins; each input and output variable need only make one appearance on the chip periphery. Similar to cellular logic, however, single layer logic generally requires a heavy sacrifice of silicon area and logic propagation levels. Other studies have concentrated on the nonplanarity of a given implementation. For example, an algorithm for detecting whether a given graph is planar has been presented and programmed by Fisher and Wing. The algorithm is expressed in terms of a graph incidence matrix; if the graph is nonplanar, the algorithm identifies a submatrix which is planar. The program can be iterated, thereby generating a sequence of submatrices which include all vertices and edges of the graph. The special problems of associating a graph vertex with a chip or a circuit are described in Weindling and Golomb [90], where the difficulty in choosing subgraphs resulting in a minimal number of planes is discussed. As illustrated in the section on packaging, there is little compelling reason to believe that the relative dimensions of the circuits and wiring on a chip will lead to potentially fewer wiring layers for LSI in comparison with earlier technology.
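The planarity question can at least be screened cheaply. The following sketch applies the classical necessary condition that a simple graph on v >= 3 vertices can be planar only if it has no more than 3v - 6 edges; this is a coarse filter only, and it is in no way the Fisher and Wing procedure, which actually extracts planar subgraphs from the incidence matrix.

# Coarse planarity screen: necessary (not sufficient) edge-count test.
def may_be_planar(num_vertices, edges):
    e = len(set(map(frozenset, edges)))          # ignore duplicates and direction
    if num_vertices < 3:
        return True
    return e <= 3 * num_vertices - 6

k5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]       # complete graph K5
print(may_be_planar(5, k5))                                    # False: 10 > 3*5 - 6
print(may_be_planar(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))      # True (a square)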
Interest in standard or universal logic chips has existed from the beginning. The problem, however, is that universality may not be particularly efficient from a silicon area or I/O pin terminal viewpoint at a level much above the simple NOR gate. Functional groupings such as decoders, adders, and register cells are useful and can be readily built, but are far from universal. A number of studies on universal logic modules have been carried out; an early paper by Earle [20] is particularly worthy of mention. A more recent study has been described by Elspas et al. [21]; using a model similar to one introduced earlier by Dunham [18, 17], it is desired to attain universal module behavior by being able to implement any function of n variables in a module containing m input terminals, m > n. Function selection is performed by connecting the n variables from the outside to the appropriate subset of the m terminals; some of the unconnected m terminals can be biased to fixed voltages representing constants 0 and 1. In the study by Elspas et al., the complexity of the module is measured by the number of input terminals; the objectives were to develop techniques for the design of the module logic and to minimize the number of required terminals. Several constructive techniques for synthesizing n-variable universal logic modules were reported, along with upper and lower bounds on m as a function of n; a strategy for searching for universal functions with a minimal or near minimal m was also described. The best result of the search was reported to be the discovery of an eight-input function universal in four variables. Minimization of input terminals was the sole cost criterion, and issues such as propagation delay or logic levels encountered in passing through the universal logic module were not examined.
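For very small m and n, the property of being universal in n variables can be checked by exhaustion. The sketch below does exactly that under the model described above, assuming that each of the m terminals must be tied either to one of the n variables or to a constant 0 or 1; the two example modules are chosen purely for illustration and have no connection with the eight-input function found by Elspas et al.

# Brute-force check of "universal in n variables": can every n-variable Boolean
# function be realized from module F by tying each of its m terminals to one of
# the n variables or to a constant?  (Exponential; only for tiny m and n.)
from itertools import product

def is_universal(F, m, n):
    sources = list(range(n)) + ["0", "1"]        # what a terminal may be tied to
    reachable = set()
    for assign in product(sources, repeat=m):
        table = []
        for xs in product((0, 1), repeat=n):
            ins = [xs[s] if isinstance(s, int) else int(s) for s in assign]
            table.append(F(ins))
        reachable.add(tuple(table))
    return len(reachable) == 2 ** (2 ** n)       # all n-variable functions reached?

xor2 = lambda ins: ins[0] ^ ins[1]
print(is_universal(xor2, m=2, n=1))      # True: x^0=x, x^1=x', 0^0=0, 1^0=1
nand3 = lambda ins: int(not (ins[0] and ins[1] and ins[2]))
print(is_universal(nand3, m=3, n=2))     # False: XOR of two variables is unreachable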
6. Bipolar Circuits

Many basic considerations are involved in comparing integrated versions of various digital circuits used in second and third generation computers. Among these are (1) speed versus power, (2) required silicon area, (3) current gain, (4) resistor tolerances, (5) fan-in, fan-out capability, (6) noise immunity, and (7) wired OR operation, i.e., ability to tie outputs directly together to perform a second level of logic. These considerations make generalization within one structure difficult, much less comparison over several circuits. The problem is compounded since each circuit can generally be shown superior to all other circuits in at least one or more important attributes. Relatively little solid documentation exists in which circuits are carefully compared under the same conditions; several comparative studies have been made [44, 47, 78]. The ensuing discussion will touch on broad aspects of the various
circuit types. Typical propagation delays and dissipations cited are for roughly similar fabrication tolerances and operating conditions; trends rather than precise engineering data are presented. Preoccupation with Current Mode Logic (CML) is because of (1) its emergence as the leader in the speed race, (2) its recently achieved third generation computer production status with moderate geometries (0.3-0.5 mil) after years of laboratory hiatus, and (3) its potential as an ultra-high-speed (0.3-2.0 nsec) LSI gate.

6.1 Saturating Circuits
Some of the important second and third generation saturating computer circuits are shown in Fig. 13. The transistors in the gates shown are not constrained from entering the saturation region and as a result incur the well-known storage-time component of propagation delay.
FIG. 13. Bipolar saturating logic circuits (RTL in negative and positive logic, DTL, wired OR symbol, low level TTL, and high level TTL).
Roughly speaking, a transistor enters saturation when it has ample base-emitter drive but comparatively little collector current. In this condition, the collector voltage falls below the base voltage causing forward conduction of the base collector junction, and excess time is required to
remove the resultant stored charges in the base when the transistor is eventually turned off. Saturating circuits appear to be limited to a turnoff delay of 3-5 nsec. Practical approaches to minimizing storage include the use of antisaturation feedback diodes and substrate-controlled saturation transistors [73] which limit base overdrive when the collector base junction starts to conduct forward current; gold doping is frequently used to reduce stored carrier lifetime. Resistor Transistor Logic (RTL) provides a simple structure and consequently was one of the earliest integrated circuits in production. It represents a slight modification of the Direct Coupled Transistor Logic (DCTL), the first systematic logic structure devised to capitalize on the iterative switching capability of transistors. (As a historical footnote, it should be noted that one of the editors (MR) was a co-creator of DCTL.) Typical speeds obtained with monolithic RTL circuits are in the 15-30 nsec range, with power dissipation on the order of 20 mW. The RTL gate performs the NAND function for negative logic, i.e., where binary 1 is interpreted as a low voltage signal; the NOR function is obtained for positive logic where a high voltage level is taken to mean binary 0. Since RTL normally operates with gate collectors tied together, the two level AND-OR-NOT or wired OR function cannot be obtained by tying collectors of several gates. Discrete transistor RTL circuits using beam lead technology to reduce parasitic capacitances have been reported which attain propagation delays below 5 nsec at 23 mW for a fan-out of unity [43]. Diode Transistor Logic is another holdover from second generation discrete component circuitry; it provides typically 10-20 nsec propagation delay at 20-30 mW and can be designed to give good noise margins. It performs the NAND function for positive logic. Higher power DTL circuits are in production with propagation delays down to about 5 nsec [22]. Tying together outputs of several gates permits an AND-OR-INVERT operation. The symbol for the wired OR outputs is shown in Fig. 13; tying outputs can be used to reduce part numbers and reduce input wiring as well as to provide two levels of logic. Its use in a logic chain, aside from packaging considerations, is a trade-off with the availability of input terminals on the succeeding logic stage. Transistor-Transistor Logic (TTL) is derived from DTL; the multiple emitter device is used as a transistor, replacing the input and voltage-translation diode functions. It also acts as a NAND gate for positive logic. Transistor-transistor logic appears to embody one of the first really new circuit ideas of the integrated circuit era. With discrete components, designers tended to use relatively inexpensive resistors and diodes wherever possible and minimize transistor count within a gate; early
DCTL circuits with one transistor per fan-in were something of an exception, as were the npn and pnp current mode circuits used in earlier computers [6]. Integrated circuits upset the earlier economic balance on the use of active and passive elements, and designers began to exploit transistor properties on a large scale. The TTL input circuit is one such example; another is the use of an emitter follower to replace a voltage shift diode in the DTL input network, thereby providing greater circuit gain bandwidth. Low level TTL uses less silicon area than DTL on a component count basis. However, since TTL was developed as a natural competitor to DTL, a series emitter follower is sometimes used which provides a balanced output which in turn can drive the two transistors in the output or cascode stage. The cascode is used to insure a desirable low output impedance to drive capacitive loads for both positive and negative output transitions. These added components distinguish low level from high level TTL, and cause the area requirements to be commensurate with DTL. Cascoded output-pair transistors have also been used in DTL configurations. In general, high level TTL provides a speed power improvement over DTL, with 10-15 nsec obtainable at typically 13 mW dissipation. Transistor-transistor logic circuits with 30-40 nsec delay and nominally 1 mW dissipation are also in production. Despite the performance advantage of TTL and the nonsaturating circuits to be described below, it is worth noting that RTL and DTL, through speed/power and other trade-offs, are in use as sub 5-nsec gates in large-scale systems. Both TTL and DTL are currently being designed into LSI arrays. A TTL-compatible memory chip is described in a later section. Nevala [38] has described the elements of a DTL master chip currently in production at Fairchild Semiconductor. The 80 x 110 mil chip contains 32 four input gates, and two level metal interconnects are provided. The function-defining interconnections and the number of I/O pads can be specified by the user. Fan-in can be increased or AND-OR-INVERT logic provided by borrowing gate diodes from neighboring circuits. Space is available for up to about 40-50 I/O pads. Relative speed/power performance of the various logic families is dependent on the geometry of the transistors and other devices used. In particular, gain bandwidth is inversely proportional to the horizontal emitter width and vertical base width. Small geometry devices not only provide high circuit performance but also imply more gates in a given chip area and therefore potentially lower cost per gate down to the point where photomasking resolution and alignment limits with visible light are reached. As an example of the performance improvement possible with more sophisticated device designs, small-geometry TTL circuits currently in production have propagation delays of 5-6 nsec,
and even faster saturating ultra-fine 0.1 mil geometry shallow-diffused transistors described by Spiegel [80] provide an average unloaded delay of 2-3 nsec.

6.2 Nonsaturating Circuits
One approach to nonsaturating logic is through the use of emitter followers. Complementary Transistor Logic (CTL) [7a] shown in Fig. 14 uses input pnp emitter followers to perform the AND function; the output is connected to the base of an npn emitter follower which provides the OR. The npn output can be tied to similar gates to obtain the wired OR sum-of-products.
FIG. 14. Complementary transistor logic (CTL) circuit.
Since dc compatibility between successive emitter follower stages can be achieved without forward current in the base collector junctions, saturation is avoided. The emitter followers are operated both nonsaturated and class A, thus providing a fast AND-OR. A loss in dc voltage level is incurred going through the emitter followers, and an inverter must eventually be included in the chain for voltage gain as well as for logic completeness. Although problems have arisen in obtaining high performance pnp and npn devices on the same chip, the performance of the circuit and the ability to use maximum-duration rise and fall times for a given propagation delay are quite attractive features. Another method of constraining a transistor from entering saturation is through control of the emitter current and thereby control of the collector base voltage. Current mode logic shown in Fig. 15 contains this property as well as an unusual number of other important self-compatible features [57]. For example, the current switch transistors Q1, Q2 stay out of saturation because their base emitter voltages and emitter currents are closely controlled. Close control comes about because (1) the base is driven by a low output impedance emitter follower of a preceding stage, and (2) the emitter current (and voltage) is
controlled with a sufficiently large resistor R and large voltage E. While Q1, say, is turning on, the impedance at the common emitter node becomes low, thus providing a relatively low impedance loop for the buildup of input current. When Q3 finally turns off, or unclamps, the node impedance reverts to the larger value, preventing overdrive of Q1 into saturation. The nonsaturating emitter follower further serves as a low impedance output source for positive- or negative-going output transitions, and it also conveniently provides the desired voltage shift for interstage dc compatibility.
FIG. 15. Current mode logic circuit (negative logic) and current switch circuit.
The complementary current switching action between Q1, Q2, and Q3 provides the basis for the NAND and AND functions at the output. Current mode logic is reasonably independent of resistor tolerances provided ratios are properly maintained, an important factor in the use of diffused resistors. In addition, the low output impedance of the emitter follower makes it relatively difficult to couple capacitive noise into a CML input. The circuit is usually operated with a 0.8-V swing; additional margin against noise and saturation can be
obtained if needed by placing level shift diodes in, say, the emitter follower base. One of the critical problems in CML design is the instability associated with emitter followers under various input and output impedance conditions. The optimum stabilization method for the faster monolithic circuits is not clear at present; emitter stabilization capacitors have been used in 1.5 nsec discrete hybrid circuits [70]. Current mode logic circuits with 0.1 mil masking tolerances, micron diffusions, and 40-50 mW dissipation provide unloaded stage delays of 1.5-2.0 nsec and are currently in early production phases; reasonable chip yield is believed to exist at the 3-6 gate level. Availability of such chips provides a basis for a new round of high performance processors with unloaded circuit speeds two to three times greater than in existing third generation systems. State-of-the-art in packaging technology should be able to provide a comparable improvement in the delay attributable to interconnections, resulting in a net stage delay around 3-5 nsec. As noted earlier, an increase in circuit density up to 20-40 circuits per square inch appears to be required, with dissipation in the range of 1-1.5 W per square inch. Dhaka [16] has described an advanced transistor design with 0.075 mil emitter base widths, shallow diffusions which enable collector junction depths of 3300 Å, and cutoff frequencies over 6 GHz at a collector current of 10 mA. Transistors of this class have operated in current switch circuits (CML without emitter followers) as low as 220 picoseconds [91]; simulation studies indicate that the circuit speed is limited by ft rather than by parasitics. An experimental transistor with 0.05 mil emitters has been described by Luce [46] which achieves an ft greater than 3 GHz at an unusually low collector current in the 0.5-1.0 mA range. Current mode logic circuits incorporating these devices have shown a delay of about 0.5 nsec at a power dissipation of 10-15 mW. It is not clear at this time that horizontal line widths less than one-tenth mil will provide sufficient yield without a breakthrough in processing technology. Production yield at even 0.7 nsec is an open question today. Variations on the basic CML structure have been devised to reduce power and area requirements to a level potentially commensurate with LSI. One approach to power reduction is through the use of the current switch without the emitter follower [56, 10]; operation with small voltage swings (~0.4 V) is desirable to keep the transistor out of deep saturation and to provide dc compatibility. Removal of complementary emitter followers typically reduces the circuit dissipation by more than 50%. Feedback arrangements which permit a reduction in dissipation have been described [33]. Five-milliwatt dissipation has been achieved with
a low supply voltage (2.4 V), low signal swings (~200 mV), and reduced speed (10-13 nsec). Nonfeedback reverse-CML circuits providing about 7 nsec at 15 mW and 10 mW for 800 mV and 600 mV swings, respectively, have been reported by the same group. (It is not clear that CML provides the best speed-power ratio at these relatively large values of delay.) Reverse CML involves interchanging input and output terminals such that the emitter follower becomes the input stage, and the current switching transistor becomes the output. Its primary features, in comparison with conventional CML, are a smaller input capacity since the Miller effect is reduced, and a reduced component count. In addition to providing information on the design of low power, small area CML, the project reports describe the special design problems related to the interface circuits used to connect chip to chip. The reports further describe the optimization of area and dissipation on large two level metal CML master chips. Thirty-six gate chips using 288 transistors and requiring 175 x 175 mils have been fabricated and are in an early operational status; 72 gate chips with component packing five times more dense than the 36-gate chip are planned. One of the more advanced developments in 1 nsec LSI has been reported by Chen et al. [9]. A three level metal master chip is used which contains up to 12 nsec range circuits in 60 x 60 mils; production-feasible parameters (0.2 mil emitter, 1 GHz ft) and relatively high power circuits (50-80 mW) are used. Seeds [72] suggested the possibility of production circuits at 100 and 50 square mils in 1970 and 1974, respectively. Murphy and Glinski recently described an exploratory 10 square mil circuit (10^5 per square inch) which is believed to be the highest density yet reported. The computed value of delay is 4 nsec at a power of 0.8 mW. The circuit uses a form of TTL with clever circuit and device innovations to keep the output transistor out of saturation and the inverse alpha of the multiple emitter input gating transistor below a maximum value. Whereas this important development stresses innovations in device technology as an approach to LSI, much (but not all) of the work on basic LSI logic circuits appears to be directed at the exploitation of a relatively large number of transistors using more-or-less conventional device techniques. Murphy and co-workers (see ref. [36]) also stressed device innovations in an interesting 25 square mil memory cell described in a subsequent section. Although important signal transmission issues such as cross talk and noise tolerance are apparently still in the open question category, it is worth noting that studies have been reported by Guckel and Brennan [29] that, in the worst case, indicated signal delay on silicon can be as high as 40-50 times the free space value. Experimental verification of this result is awaited.
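Because the circuit variants quoted in this section operate at very different points, a speed-power product puts them on a common footing. The short sketch below computes that figure of merit from the delay and dissipation pairs cited above; the pairing of midpoint values is the only input, and no claim is made that the underlying measurements were taken under comparable conditions.

# Speed-power products (picojoules per gate) for operating points quoted above.
points = {
    "CML, 0.1 mil tolerances":      (1.75, 45.0),   # ~1.5-2.0 nsec at 40-50 mW
    "CML, 0.05 mil emitters":       (0.5, 12.5),    # ~0.5 nsec at 10-15 mW
    "low-power feedback CML":       (11.5, 5.0),    # 10-13 nsec at 5 mW
    "reverse CML, 800 mV swing":    (7.0, 15.0),
    "10 square mil TTL circuit":    (4.0, 0.8),
}
for name, (delay_ns, power_mw) in points.items():
    print(f"{name:28s} {delay_ns * power_mw:6.1f} pJ")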
7. MOS Circuits
In contrast with bipolar, MOS has a number of remarkable properties. Assuming equivalent masking tolerances, MOS transistor area requirements are about 3-5 times smaller than for a bipolar transistor, since neither isolation nor nested emitter base diffusions are required. The high input or gate resistance (10^14-10^16 ohms) permits voltage controlled operation without the need for input dc current. The high input and high source-to-drain OFF resistances permit the parasitic input node capacitance (0.3-0.5 picofarad in close-spaced layouts of the type possible in shift registers) to have important use as a storage element. The two-terminal resistance of the source-to-drain path provides a convenient method of implementing the pull-up resistor function, resulting in an area requirement about 100 times smaller than a diffused bipolar resistor of equal value. Metal-oxide semiconductor sheet resistivity ranges from 25 to 40 kilohms per square, and pull-up resistors of hundreds of thousands of ohms can be readily implemented. Although the pull-up resistor may have a sizable temperature coefficient, it is capable of tracking with the logic transistor. Finally, the transistor can function as a bilateral switch, permitting current flow in either direction. All of these features are advantageously employed in the circuit structures briefly described in this section and in the later section on memory. Transit-time delays are negligible in MOS (1000 MHz operation is possible), and Resistance-Capacitance time constants constitute the primary speed limitation. The transistor is unable to supply a copious amount of current and thus has a particularly low transconductance or Gm (approximately ten to several hundred micromhos for low level logic devices) which, if coupled with a high threshold voltage, provides a rather large propagation delay in contrast to bipolar transistors. The transconductance is proportional to the ratio of channel width to length; the latter is set by masking tolerances. Present state-of-the-art in clocked p-channel transistor performance (to be discussed below) appears to be in the 5-10 nsec range, with the actual delay being extremely sensitive to interstage capacitance. To the writer's knowledge, however, experiments on chain delay in close-spaced clocked or unclocked p-channel circuits have not been reported, leaving some uncertainty on attainable propagation speeds. The actual or potential improvement in p-channel speed is ostensibly due to the use of lower switching voltages, use of a thicker oxide which reduces parasitic interconnect capacitance, and smaller output junction capacitances by means of smaller device geometries. However, Farina has pointed out that the geometries used in 1965 vintage shift registers which provided 100 kc rates are the same as those used in the more recent 10 Mc designs; he attributes the improvement,
which includes a seven-fold reduction in circuit area and a twofold reduction in power dissipation, to advances in circuit techniques [23]. (Presumably the 10-Mc designs make use of the thick oxide technology.) Chain delays have been reported for complementary transistor pairs; Klein [39] has measured 2-4 nsec stage delays for close-spaced pairs with unity fan-out. Unlike the single polarity transistor, complementary pairs can provide high speed in a ripple or dc logic mode, i.e., without clocking. The technology reported by Vadasz et al. [85] depicted in Fig. 8 uses 0.3-0.4 mil channel lengths and is production feasible; it appears to be capable of providing a nonclocked propagation delay in the 30-50 nsec range on the chip. Threshold voltage VT is the input voltage at which the transistor begins conduction; VT has been typically of the order of 4-6 V. Improvements in process control offer promise of providing thresholds of around 1.5 V with a stability of ±10% [39]. In addition to improving the switching speed, a low threshold is functionally important because it permits simple interface with bipolar circuits and permits reductions in power dissipation. Unlike bipolar junction devices where the switching threshold (~0.75 V in silicon) is fixed by the energy band gap, the MOS threshold can be varied over a wide range by changing factors such as oxide thickness and surface and bulk charges. This apparent flexibility, however, is the very reason that the threshold value has proved difficult to control [53]. Metal-oxide semiconductor gates are generally structured in "relay contact" format, similar to early DCTL; both series and parallel stacks are used. The basic MOS gate is shown in Fig. 16; it provides a NOR for negative-going inputs.
FIG. 16. Basic MOS gate configuration.
Transistor Q1 serves as the inverter or logic transistor, and Q2 as the pull-up resistor. In order to maintain a suitable “low” or binary 1 output level near ground, the ON resistance of Q2 is designed to be about 10 times that of Q1. As a consequence, when the output voltage is driven to ground through Q1, the ensuing time constant is about one-tenth as large as when the output goes to the supply voltage through Q2.
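The consequence of the 10:1 design ratio can be seen numerically. In the Python sketch below only the 10:1 resistance ratio is taken from the text; the absolute resistance and the load capacitance are assumptions chosen merely to show the asymmetry.

    # Asymmetric time constants of the ratioed gate of Fig. 16.
    r_on_q1 = 30e3               # assumed ON resistance of the logic transistor Q1
    r_on_q2 = 10 * r_on_q1       # pull-up Q2 designed about 10 times higher (per text)
    c_load = 0.4e-12             # assumed output node capacitance
    tau_fall = r_on_q1 * c_load  # output driven toward ground through Q1
    tau_rise = r_on_q2 * c_load  # output returned to the supply through Q2
    print(tau_fall, tau_rise)    # the rise constant is ten times the fall constant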
Recent circuit work on increasing MOS logic speeds has centered on eliminating the need to charge interstage capacitances through high-resistance transistors. The principle usually invoked involves the use of special clocking and gating circuits which both precharge capacitances and provide the necessary low-resistance paths. An example of a recently developed circuit which embodies this principle and has important logic implications is the four-phase configuration shown in Fig. 17.
FIG. 17. Basic MOS four-phase clocked circuit.
Clock signals φ1-φ4 are assumed to be a sequence of four nonoverlapping pulses. The stiff clock signal φ1 initially charges C toward -E through Q1. As soon as the voltage at node A goes sufficiently negative, Q4 is primed to conduct but is inhibited by a nonconducting transistor in series with it. When C is sufficiently precharged, clock φ2 arrives, permitting C to discharge through Q3 and Q2 if and only if the latter has been primed to conduct by the appearance of a negative input signal. If node A is driven negative, for example, Q4 is left in a conditionally conducting state (parasitic capacitance C temporarily retains the voltage). Clock φ3 then precharges the capacitance in the succeeding stage, which is subsequently left charged or discharged, after the occurrence of clock φ4, depending on the state of Q4. Note that the nonoverlapping clocking sequence inhibits the establishment of a dc path from the supply voltage to ground. The lack of dc eliminates the need for the resistance ratio between Q1 and Q2 in Fig. 16, thereby providing the basis for the improved switching speed. Since only charging current flows, chip dissipation can be kept particularly low because clock power is not dissipated in the circuit but instead is entirely dissipated in the presumably off-chip clock source. The four-phase circuit is called ratioless since the absence of dc implies the absence of a constraint on the ratio of the conductances of the transistors.
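The precharge-and-conditional-discharge sequence just described can be restated as a small behavioral sketch. The Python fragment below is not a circuit model; it merely encodes the logical effect of phases φ1 and φ2 on node C and on the priming of Q4, with negative (asserted) levels abstracted to booleans. The stage abstraction is an assumption made for illustration.

    # Behavioral restatement of one four-phase stage: phi-1 precharges node C,
    # phi-2 conditionally discharges it through Q3 and Q2 when the input is asserted.
    def four_phase_stage(input_asserted):
        node_c_charged = True            # phi-1: C precharged toward -E through Q1
        if input_asserted:               # phi-2: discharge path via Q3 and Q2 exists
            node_c_charged = False       #        only if the input has primed Q2
        q4_primed = node_c_charged       # the retained charge on C conditions Q4,
        return q4_primed                 # which acts on the next stage during phi-3/phi-4

    print(four_phase_stage(True), four_phase_stage(False))   # False True

Chaining such stages, with each stage's retained charge conditioning the next during the later phases, yields the shift-register operation discussed below.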
The new design principles have been extended to two-phase ratioless schemes and to circuits in which the pull-up resistor is replaced by a capacitor [24]. Iterated four-phase stages connected as shift registers are currently in production and are operating at 10 MHz rates with a power dissipation of 1-2 mW per bit; 100 μW dissipation per bit has been reported for a 1 MHz rate [24]. Progress in circuit and processing technology suggests that at least another factor of two in register speed will be attained.
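The reported dissipation figures are consistent with purely capacitive (charging) power, which scales linearly with clock rate. The following order-of-magnitude check is only a plausibility argument: the voltage swing and the number of switched nodes per bit are assumptions, since neither is given in the text.

    # Plausibility check: dynamic dissipation per bit, P ~ n * C * V^2 * f.
    c_node = 0.5e-12        # farads; the text quotes 0.3-0.5 pF node capacitances
    v_swing = 12.0          # volts; assumed clock/supply swing (not given in the text)
    nodes_per_bit = 2       # assumed number of switched storage nodes per bit
    for f_clock in (1e6, 10e6):
        p_bit = nodes_per_bit * c_node * (v_swing ** 2) * f_clock
        print(f_clock, p_bit)    # about 0.14 mW at 1 MHz and 1.4 mW at 10 MHz

Both values fall in the same range as the reported 100 μW and 1-2 mW figures, and the proportionality to clock rate reflects the earlier observation that only charging current flows.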
A photograph of a Philco-Ford Microelectronics Division 200-bit register chip is shown in Fig. 18. The 90 x 90 mil single-layer-metal chip contains 1200 transistors plus several output drivers.

FIG. 18. Photograph of 200-stage MOS shift register (courtesy of Philco-Ford Corporation, Microelectronics Div.).
The six-transistor cell layout is shown in Fig. 19; the transistors have a channel length of around 10 μ, and the cell size is 25 square mils. The 200-stage register has been demonstrated; studies on state-of-the-art technology suggest that 2000 stages are feasible on the same size chip with 5-μ geometry and two-phase ratioless circuit techniques [all].

The four-phase circuit of Fig. 17 and other similar clocked stages can be used as clocking stages or pulse formers in a synchronous logic system.
FIG. 19. Single-stage layout of 200-stage register (courtesy of Philco-Ford Corporation, Microelectronics Div.).
Furthermore, transistor Q1 can be replaced by a complex series-parallel stack of transistors; the ensemble can then serve as a general clocked logic block in a multibeat² system. The beat rate will depend on the variation in propagation delay through the general logic block as a function of the topology of the series-parallel stack. Cohen et al. [11] have discussed a general synchronous logic block and suggested the feasibility of a 32-high stack.

Ignoring questions of instability, the greatest drawback to the use of MOS is the increase in propagation delay incurred when driving off the chip into even moderate size capacitances (~10 picofarads). The degradation implies that an ordinarily desirable design progression from small chips to increasingly large chips may be impractical, and that MOS instead requires a transition directly into full scale LSI.
² The term beat denotes the clock sequence from one timing stage to the next; the term phase denotes the succession of clock signals used to sequence through one timing stage.
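The role of the series-parallel stack can be connected to the “relay contact” structuring mentioned earlier: during the evaluate phase the stack merely determines whether a discharge path exists, with series devices acting as an AND of their inputs and parallel branches as an OR. The Python sketch below restates that rule; the particular four-input function is an arbitrary example, not one taken from the text.

    # A series-parallel stack viewed as a discharge condition: series devices AND
    # their gate inputs, parallel branches OR theirs.
    def discharge_path_exists(a, b, c, d):
        # two series pairs placed in parallel: (a AND b) OR (c AND d)
        return (a and b) or (c and d)

    # During the evaluate clock phase the precharged output node is discharged
    # only when some series path through the stack conducts.
    print(discharge_path_exists(True, True, False, False))   # True
    print(discharge_path_exists(True, False, False, True))   # False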
Since most current small commercial processors operate with typical stage delays on the order of 30-50 nsec and contain relatively low-cost, high-capacitance wiring, a serious design transition exists at even the low end of the processor spectrum. Whereas the case for MOS in this application area is not hopeless, it must first be shown that the functions required in a small processor could be satisfied by a carefully partitioned and packaged array of MOS chips which, at the same time, would provide economic advantages over conventional or small-chip bipolar approaches. The prospects for MOS in commercial areas are more attractive in memory applications and in slower-speed communication terminal or free-standing equipment such as keyboards, typewriters, desk calculators, and various document handling devices. On the other hand, high-volume terminal equipment such as CRT displays currently makes efficient use of high-speed circuit technology to execute various fine-structure logic decisions. In the military field, arrays of MOS shift-register-like structures appear to be uniquely suited to the highly parallel type of processing required in problems such as sonar beam forming.

Clear applications for complementary-pair MOS are particularly lacking. In addition to the drawbacks in added processing and area requirements already cited, it should be noted that the speed and low dissipation attributed to the complementary pair do not necessarily provide an advantage over single-polarity MOS in the important memory area. Specifically, the Pleshko-Terman study, albeit with n-channel devices, along with other results described in the next section, tends to point out the sufficiency of single-polarity MOS over much of the memory spectrum.
8. LSI Memories
Large scale integration memories show considerable promise. In contrast with LSI logic, scratch-pad and main memory configurations are almost ideally suited for exploitation of the advantages of large chips. The high degree of two-dimensional memory cell replication and wiring regularity results in comparatively simple chip design and, with the availability of selection logic directly on the chip, provides a powerful partitioning flexibility. Trade-off possibilities exist between chip storage capacity, speed, I/O pad requirements, and power dissipation which are simply unavailable in logic technology. Furthermore, major advances in chip complexity can be undertaken in a given development program without the need for concurrent major advances in chip bonding capability, thus conserving development resources and minimizing program risk. Finally, the severe economic barriers which confront logic LSI, such as the part number and volume problems, are considerably lower, and the change problem appears no more formidable than in batch-fabricated magnetic memories.
In view of the wide disparity between the natural suitability of memory chips to computer needs on the one hand, and the questionable match between large logic chips and computer needs on the other, one might wonder why there has been so much concentration on logic and so relatively little progress in memory. Whereas the answer to this question is involved, it is worth observing that memory may be the natural bellwether for LSI, with its progress, rather than that of logic, providing the more useful calibration point for the status of the technology.

Semiconductor storage elements have already superseded magnetic films in fast scratch pads; the major unresolved question is the likely depth of penetration into main memory applications. All-bipolar scratch pads of several hundred words capacity have undergone considerable development, and one version, noted earlier, is currently performing useful work at the system-user level. Bipolar-driven MOS memory chips, with high storage density and low storage dissipation per chip, are promising for the larger memories. Bipolar-MOS combinations are probably in the strongest position to test the domain of larger film and core memories which operate under 500 nsec, and the lower-performance main and buffer core memory areas. Despite the potential of semiconductor memories for the latter applications, relatively little progress is evident in these areas at the end of 1967, suggesting the formidability of the design and economic problems.

An earlier illustration arrived at a cost projection of $1.30 for a 10,000-square-mil 56-gate logic chip; adapting the same figures to a future 256-bit memory chip results in a cost of one-half cent per bit (prior to testing and packaging). It should be remembered that any and all cost improvements in the general semiconductor art will be directly applicable to LSI memories; capital facilities developed for general semiconductor use will be similarly applicable. Likely improvements in yield and reduction in wafer processing costs over the next few years should take the on-wafer cost down to a level of one-tenth cent per bit, comparable to the element cost in magnetics. Testing, packaging, peripheral circuitry, power supplies, and other expected elements of cost will undoubtedly take the over-all system cost considerably higher. Since it has become increasingly rare to be able to project a potential raw bit cost of the order of one-tenth cent or less for any technology, the prospect of attaining the one-tenth cent figure provides the basic motivation and sufficient condition for the pursuit of large LSI memories.
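The per-bit figures above follow directly from the chip-cost projection; the short Python check below simply redoes the arithmetic. The assumption that the one-tenth-cent target refers to the same 256-bit chip is introduced here only to express the target as an equivalent chip cost.

    # Re-deriving the memory cost-per-bit figures quoted above.
    chip_cost_dollars = 1.30                  # projected cost of the 10,000-square-mil chip
    bits_per_chip = 256
    cents_per_bit = chip_cost_dollars / bits_per_chip * 100
    print(cents_per_bit)                      # ~0.51, the "one-half cent per bit" figure

    # Expressing the one-tenth-cent-per-bit target as an equivalent on-wafer chip cost,
    # assuming the same 256-bit chip (the text does not fix the chip size here).
    target_chip_cost = 0.1 / 100 * bits_per_chip
    print(target_chip_cost)                   # ~0.26 dollars, roughly a fivefold reduction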
8.1 Bit Organization

Performance considerations and limitations can be illustrated by several examples of general design. One approach is the bit organization, in which coincident read and write selection is used on a chip which
contains one cell (one bit) of each of, say, M = m² words. Similar to three-dimensional core structures, final selection is executed in the array at the bit level, resulting in a requirement of 2m I/O pads for addressing. The term bit organization as used here actually applies to the chip rather than to the system; a bit-organized chip can be used in either a bit- or word-organized system. As illustrated in Fig. 20, a word line and one or two additional lines for digit control are required; the sense line can be either combined with the digit line(s) or kept conductively independent to minimize digit noise.
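The 2m pad requirement is the payoff of coincident selection: a chip holding M = m² bits needs only m row lines and m column lines rather than one select line per bit. The Python fragment below illustrates the count for a 256-bit chip; that chip size is borrowed from the 256-bit memory chip mentioned earlier, and the one-line-per-bit comparison is added here only for contrast.

    # Address-pad count for coincident (row/column) selection on a bit-organized chip.
    import math
    bits_per_chip = 256                     # M = m * m, here a 16 x 16 array
    m = math.isqrt(bits_per_chip)           # 16
    coincident_pads = 2 * m                 # 32 address pads, the "2m" of the text
    linear_pads = bits_per_chip             # one select line per bit, for comparison
    print(m, coincident_pads, linear_pads)  # 16 32 256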
FIG. 20. Section of bit-organized chip (labels in figure: one storage bit plus selection; bit-sense lines).
One approach is to use a series string of chips to provide multiples of M words, with one string required per bit. Thus an MK-word n-bit memory made up of M-bit chips consists of n strings, each containing K chips.

Two 16-bit bipolar bit-organized memory chips have been designed and described. The first consists of a 70-mil-square chip using Solid Logic Technology (SLT) bonding technology and one level of metallization [1]. It was developed for exploration of large, high-speed, low-cost arrays. Masking alignment tolerances of 0.2 mil were used; although there have been no reports on attainable system performance, the access time from chip input to low level output (