CONTROL SYSTEMS FUNCTIONS AND PROGRAMMING APPROACHES Dimitris N. Chorafas CORPORATE CONSULTANT IN ENGINEERING AND MANAGEMENT, PARIS
VOLUME B Applications
1966
ACADEMIC PRESS
New York and London
COPYRIGHT © 1966, BY ACADEMIC PRESS INC. ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS INC. 111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS INC. (LONDON) LTD. Berkeley Square House, London W.1
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 65-26392
PRINTED IN THE UNITED STATES OF AMERICA
To H. Brainard Fancher
FOREWORD

A striking feature of the scientific and technological development of the past 25 years is an increasing concern with the study of complex systems. Such systems may be biological, social, or physical and indeed it is easy to give examples of systems which combine elements from more than one of these areas. For instance, an unmanned satellite such as "Telstar" or "Nimbus" can be considered in purely physical terms. However, when an "astronaut" is to be involved in the system, a whole new realm of biological problems must be considered and, even more, the interaction between the biological and the physical subsystems must be taken into account. As we advance to large space stations involving crews of several men, we must add the complication of social problems to the systems analyses.

A characteristic feature of most complex systems is the fact that individual components cannot be adequately studied and understood apart from their role in the system. Biologists have long appreciated this property of biological systems and in recent years have attached considerable importance to the study of ecology or the biology of organisms in relation to their environment. Engineers and social scientists have profited from adopting this point of view of the biologists, and biological and social scientists are coming to an increased appreciation of the utility of mathematical models which have long been a principal tool of the physical scientist and engineer.

In recent years there has emerged the beginning of a general theory of systems and a recognition of the fact that, whatever their differences, all goal-directed systems depend for their control upon information. Its encoding, storage, transmission, and transformation provide the basis for the essential decisions and operations by which a system functions. As the volume of information necessary to control a system has increased and as the transformations that are required to be performed on this information
have become more intricate and time-consuming, systems designers have turned more and more to that information processor par excellence, the digital computer. In fact, the problems of control have become so complex that it is now necessary to consider in some detail the subject of Information and Control Systems.

The designer of an information and control system must be concerned with such questions as, "What is the nature of the information that can be obtained about the system I hope to control?", "Where and how can it be obtained and how can it be transmitted to a digital computer?", "What transformations of the input information are required in order to provide the output information necessary to control the system?", "What are the timing requirements on the output information?", "How do the answers to the above questions affect the design of hardware and programs for the digital computer?"

It is to problems such as these that Professor Chorafas, drawing on his wide background as an industrial consultant, directs his attention in this book.

OTTIS W. RECHARD
Director, Computing Center and
Professor of Information Science and Mathematics
Washington State University
CONTENTS

FOREWORD    ix
CONTENTS OF VOLUME A    xv
INTRODUCTION    xvii

PART VI. Process-Type Cases and Data Control

Chapter XXI. Computer Usage in the Process Industry    1
    Transitional Path in Computer Applications    2
    Evolution toward Process-Type Studies    5
    Integrated Applications in the Refinery    7
    Systems Concept in Data Control    13

Chapter XXII. Applications with Technical Problems    17
    Computer Usage in Chemical and Petroleum Engineering    18
    Studying Pipeline Problems    22
    Simulation Problems    26
    "Feedforward" Concepts    30

Chapter XXIII. The Rationalization of Management Data    32
    Developing an Integrated Information System    33
    Computational Requirements in Dispatching    40
    Using Applied Mathematics    41
    Example with Gas Dispatching    44

Chapter XXIV. Applications in the Field of Accounting    47
    Computerizing Oil and Gas Data    49
    General Accounting-Type Applications
PART VI

Process-Type Cases and Data Control

Chapter XXIII

THE RATIONALIZATION OF MANAGEMENT DATA

FIG. 2. (a) Summary monthly chart. (b) Monthly chart by function.
The specific approach for handling these problems will vary from company to company. In some cases, the systems analyst might decide to leave aside certain applications even if "theoretically" they seem to be good "opportunities" for further integrated data processing. In fact, depending on the occasion, a number of activities either do not represent significant work loads, or do not have a direct relationship to the main framework of information. It may be better to have an electronic accounting machine at hand than to load the large-scale system with trivial work.
COMPUTATIONAL REQUIREMENTS IN DISPATCHING

As an example of the integration of management information for a process-type industry, we will consider the automating of dispatching operations. The total work of scheduling and controlling the movement of multiple tenders through a system of pipelines can be divided into the following functions:

• Batching
• Sequencing
• Estimating pump rate and flow rates
• Recalculating
• Reporting
Batching refers to the "dividing" into portions of particular types of product, pumped into a line as one continuous unit. Where numerous grades of products are handled, proper sequencing is necessary to minimize the losses that result from degrading higher-valued materials to lower-valued materials. Optimal sequencing is also necessary to facilitate coordination of the movement of batches through limited tankage at the company's source station and intermediate tanks. Deliveries must be sequenced in a firm manner so that flow rates in the various line sections can be computed. Where numerous terminal points exist on a line, it is usually desirable to limit the number of simultaneous deliveries to two or three. More than this number of deliveries occurring simultaneously would result in a continuous change in the line flow pattern, requiring almost endless starting and stopping of pumping units. In addition to excessive wear and tear on motor-starter equipment, the operating personnel at the various stations would be occupied in observing the operation of the pumping equipment and would be unable to perform other duties.

Also, like delivery sequencing, delivery rates must be scheduled to permit the computation of line section flow rates. Optimum delivery rates are those which permit the steadiest flow of products through a majority of pipeline sections downstream from the various delivery points. Where lines are of a telescoping nature, caused by either the reduction of line size or pumping power, delivery rates must be set to facilitate the pumping of the desired quantities into the lines at the source points. Quite often, delivery rates must also be varied to satisfy unusual conditions existing at various terminal points.

Line pumping-rate computations focus on an accurate estimate of the average rate required to accomplish a desired movement over a scheduled period. Generally, where lines are powered by multiple pumping units, possible pumping rates vary from optimum rates. Therefore, the desired
rates must be adjusted both upward and downward over given periods of time. Further adjustment to the desired rates is often necessary to facilitate the coordination of movements through feeder lines, carrier company tankage, and system lateral lines.

The computation of line-section flow rates is also necessary. A line section can be defined as that section of line immediately downstream from each terminal point and extending to the next downstream terminal point. The flow rate in each section is the difference between the delivery rate at the terminal and the flow rate in the upstream line section. Estimates of this type explicitly point to the need for recalculations as conditions change. For instance, based on the inventory in a line at any given time, the position of the various batches with respect to the various stations and terminals must be recomputed. Then, by the application of line section flow rates, the time that changes should occur can be re-estimated.

In the sense of the foregoing discussion, one of the contributions of the data processing equipment is to help develop operational forecasts. Frequent revisions are normally required to account for variations between quantities scheduled to be pumped and delivered and the quantities actually pumped and delivered. When necessary, these forecasts should be teletransmitted to the various field locations, where they provide a basis for estimating the operations of station pumping and delivery equipment.

To date, electronic data processing equipment has been used to advantage in several dispatching systems. Computers provide continuous checks on deliveries, as is, for instance, the case in crude oil delivery to power stations that use fuel for steel production. By means of an automated dispatching setup an oil company was able to match supply and demand, keeping its attention focused on demand variations in a timely manner. This pipeline network supplies twelve other crude oil consumption points. In total, fifteen telemetering units are being monitored continuously, whereas in the past instruments providing the necessary data were read hourly and the flow value at each point computed manually with a resulting substantial delay. By making frequent telemeter checks and flow calculations for each purchase or delivery point, the dispatching department of the oil company maintains control over system demands. The output from the computer is in the form of data charts presenting both 1-hour flow and 24-hour total flow for each of the telemeter stations, plus certain combinations of the data.
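The line-section bookkeeping described above can be sketched in a few lines. The following fragment is a minimal illustration, not taken from the book: it assumes a single line with one injection rate at the source and a list of delivery rates at successive terminal points, and computes the flow in each downstream section as the upstream flow less the delivery taken off at the terminal.

```python
def line_section_flows(source_rate, delivery_rates):
    """Return the flow rate in each line section downstream of the source.

    source_rate     -- pumping rate into the line at the source station
    delivery_rates  -- delivery rate taken off at each successive terminal
    Units are arbitrary but must be consistent (e.g., barrels per hour).
    """
    flows = []
    upstream = source_rate
    for delivery in delivery_rates:
        downstream = upstream - delivery   # flow remaining after the terminal
        if downstream < 0:
            raise ValueError("deliveries exceed the flow available upstream")
        flows.append(downstream)
        upstream = downstream
    return flows

# Example with invented figures: 100 units/hr pumped in, three terminals
# taking 30, 25, and 20 units/hr respectively.
print(line_section_flows(100.0, [30.0, 25.0, 20.0]))   # [70.0, 45.0, 25.0]
```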
USING APPLIED MATHEMATICS

Other examples can be taken from simulation. A mathematical study was recently conducted to coordinate pipeline-arrival depot operations. Two
elements were represented stochastically: the occurrence of ship arrivals and turbine breakdowns. All other parts of the model were based upon engineering calculations or upon established decision rules used in the pipeline operations.

Briefly, the model consists of a master routine, referred to as the "monitor program," and several subroutines which represent various phases of the operation. These subroutines:

• Generate ship arrivals and the berthing and loading of such ships.
• Calculate flow rates in the pipeline.
• Accumulate throughput and update inventories.

The monitor program controls the entire sequence in the computer model. It calls in the subroutines for data, as required, processes this information in accordance with pipeline and terminal operating logic, and prints out resulting information on flow rates, ship delays, inventories, cutbacks in throughput, accumulated throughput, and changes in turbine status.

Demand is placed on the system by the "ship berthing and arrival generator section" that produces a ship-arrival pattern that approximates previous experience and moves the ship into berths in accordance with operating rules. Provision is made for variations in the size of ships loaded, changes in demand for oil, storms, number of berths, restrictions on loading of very large ships, availability of bunkers, and variations between the loading rate at various berths.

Since the results were sensitive to the pattern of ship arrivals, the generation of ship arrival times and the corresponding lifts were incorporated in a separate computer program, thus permitting the use of the same arrival pattern for several case studies. Ship arrivals did not differ significantly, in a statistical sense, from a negative exponential distribution having the same average time between arrivals. Individual arrival times were generated by random sampling from the negative exponential distribution. Statistical methods were used to insure that the cumulative numbers of generated arrivals over specific time periods were within control limits calculated from actual arrival data. Random sampling of a distribution relating expected frequency of occurrence to barrels lifted per ship was used to generate the size of ship cargos. The distribution used was derived from actual data by grouping all liftings into seven classes. The values for these classes as well as the expected number of arrivals, control limits on arrivals, and the like could be varied from case to case.*

The ship berthing section uses the arrival and cargo-size information
from the arrival generator in determining when each cargo would have been removed from central dock inventory and what delays would have been incurred to ships. The input to the model provides for assigning a "berth holding interval" for each ship's "size class," at each berth. The berth holding interval is the time that a berth is not available for other assignments while a tanker is being loaded. "Very large" tankers are given a priority and are assigned to berths capable of accommodating them. Otherwise, tankers are preferentially berthed in order of ship size to allow the earliest completion of loading. The largest tankers are placed in the most efficient berths, but only until delays are encountered. When a ship cannot be berthed upon arrival, because of conflicts with larger ships in all available berths, all ships other than the very large tankers are rescheduled to a berth in order of arrival until the congestion is relieved, thereby preserving the first-come, first-served policy required by the pipeline's contractual arrangements. The period between the arrival and the time a berth becomes available is recorded as a delay due to lack of berths. If sufficient oil is not available in tankage by the time the ship would normally have completed loading, the ship departure is delayed until a full cargo is available. The delay is recorded as being due to "inventory."

Weather data are used to determine port closures. Ships arriving during the closure are delayed and the delays are recorded as being due to storms.

To satisfy the demand produced by ship arrivals, oil is made available at the issue point in the quantities determined in the flow calculation subroutine. Flow rates in each of the four sections of line are calculated every six hours and whenever a turbine-powered unit is shut off or started. These flow rates are used in determining oil availability at the issue point and at main pump-station tankage. A stochastic element in the model exercises its effect in this subroutine.

In developing the mathematical simulator, it has been assumed that the main pump stations will, because of adequate horsepower and multiple pumping units, be able to hold the maximum allowable discharge pressure. However, a turbine unit that goes off the line causes a major variation in flow. Shutdowns of turbine units are the results of mechanical failure, scheduled maintenance, and excessive inventories in downstream tankage. Mechanical failure of turbines, because of its unpredictable timing, has been represented stochastically. The time of occurrence of mechanical failure, its duration, and the turbine affected are determined by random sampling from probability distributions supplied as input data. Profiles of turbine-horsepower degeneration, due to wear and to random events, are included as input data for each turbine, and are used to determine the horsepower for calculation of flow rates. Provision is made for periodic maintenance shutdowns of different durations.

* A similar discussion is presented at the beginning of this chapter.
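The arrival-generation scheme described above lends itself to a compact sketch. The fragment below is illustrative only; the mean interarrival time and the seven cargo classes and their frequencies are assumed round numbers, not data from the study. It samples interarrival times from a negative exponential distribution and draws cargo sizes from a frequency table, as the text indicates.

```python
import random

MEAN_INTERARRIVAL_HOURS = 18.0          # assumed mean time between ship arrivals

# Assumed seven cargo classes (barrels lifted) with relative frequencies
CARGO_CLASSES = [60_000, 90_000, 120_000, 160_000, 200_000, 250_000, 320_000]
CARGO_WEIGHTS = [0.10, 0.18, 0.22, 0.20, 0.15, 0.10, 0.05]

def generate_arrivals(horizon_hours, rng=random):
    """Yield (arrival_time, cargo_size) pairs over the simulation horizon."""
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / MEAN_INTERARRIVAL_HOURS)  # negative exponential
        if t > horizon_hours:
            return
        cargo = rng.choices(CARGO_CLASSES, weights=CARGO_WEIGHTS, k=1)[0]
        yield t, cargo

# One month of generated traffic, seeded so that the same arrival pattern can
# be reused for several case studies, as the text suggests.
random.seed(1966)
for when, barrels in generate_arrivals(30 * 24):
    print(f"ship arrives at hour {when:7.1f}, lifting {barrels:,} barrels")
```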
Results thus far indicate that the simulation model realistically represents the actual system. The model was sensitive to the number, size, and pattern of ship arrivals, to the distribution of turbine downtimes, and to the frequency and duration of storms at the issue point. Provided the necessary accuracy is maintained, the model provides information upon which to make efficient decisions about changes of facilities or operating policies.
EXAMPLE WITH GAS DISPATCHING
Satisfying the needs of the clientele, as weather permits, and within the limits of a pre-established 24-hour peak, is the important responsibility of the gas-dispatching department of any gas company. The gas-dispatching department, taking into account weather, customer demand, and available gas supply, must match supply and demand. To do so, it has to monitor gas deliveries into its system from different gas-producing stations.

In one application of electronic information machines to gas-dispatching problems, a total of thirty telemetering units recording fifty-five separate values at sixteen discrete points are monitored. Prior to the use of a data processor, instruments providing necessary telemetered data were read hourly and the flow value at each point was computed manually. Under the computer system, the points are monitored every six minutes, thus eliminating the dispatching problems. In making hourly telemeter checks and flow calculations for each purchase or delivery point, the dispatching department maintains control over system demands and the potential cost that could occur unless demands can be limited to certain values related to the pre-established peak. The dispatching department accomplishes this limitation by exercising precontracted service interruptions with large industrial and commercial users.

The usage of a computer in gas-dispatching operations has been well oriented toward the future. This was a structural need, for although acceptable flow calculation accuracy had been realized with manual operation, past growth had already overtaxed the gas-dispatching department's manual calculation workload. Future growth trends indicated that a computer was the only substitute that would avoid expanding dispatcher personnel, and labor cost was one of the prime reasons for considering computer usage.

In another case, the usage of a data processor enabled a major gas company to control peak system demands without incurring high demand charges. This company buys its gas on a two-part rate that includes a straight commodity charge plus a demand charge based on the peak demand established during any one day in a year. The demand charge is then applied over the other eleven months. Thus, a severely high peak demand during just
one hour of one day in a year can directly affect operating expenses for the entire year. As a result, it was necessary to insure control of peak gas usage in the gas distribution system at all times. To do so, the gas company in question monitors its demand on a 24-hour basis throughout the year. Adjustments are made by interrupting service to industrial customers who buy gas at preferential rates on an interruptible service basis by agreeing to curtail use whenever demand in the utility's area approaches the condition of exceeding a pre-established peak usage point. The gas load dispatcher must monitor the hour-by-hour demand, anticipate unusual demands due to weather conditions, and evaluate the hourly load increase in terms of necessary industrial curtailment. Data from various purchase and delivery points on the system in the form of static pressure, differential pressure, temperature, etc., are telemetered to the dispatching center where the flow must be computed for each point to determine the total system demand. Some 75 telemeters are monitored every six minutes. In this as in all other applications in optimizing process control, the key to success is matching data processing with the real-time requirements. Substantial amounts of data, reflecting variations in the process, must be collected, analyzed, and displayed to permit control decisions to be made in time to effect corrective and optimizing action. When large numbers of variables with rapidly changing values are involved, the factor of time is especially important. Time lost in the preparation of data suitable for making decisions results in possible losses in quality, reliability, efficiency, and safety. It cannot be repeated too often that the primary advantage of computer process control is that it permits control decisions to be made at rates that match the time constants of the process and system involved. These time factors vary from process to process, and each process control situation requires control elements custom-tailored to particular specifications. In a certain specific case, in order to apply the computer to the process, it was first necessary to define the exact specifications of the process to which the computer was to be attached, and the desired functions the machine had to perform. The initial estimates showed that only a minor fraction of the computer time would be necessary for the dispatching calculations, although for reasons of on-lineness the machine had to work on an around-the-clock basis. The foregoing conclusion, which followed the first analysis, is typical enough of real-time applications. To take full advantage of the discrepancy between computational time and machine availability, the utility company
programmed the computer to carry out the prime objective of demand calculations and, in addition, perform engineering calculations for other divisions. A monitor program was established to set up a priority sequence of routines for the computer to follow.* This executive program makes it possible for the computer to perform the monitoring and calculation every ten minutes and again at the end of the hour. It then takes up additional computational work in the vacant five minutes before sampling periods. * See also the discussion on executive routines in Chapter XIX.
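A minimal sketch of such an executive routine is given below; it is not the utility's actual program. It assumes a ten-minute sampling interval and fills the idle time between scans with lower-priority engineering calculations taken from a queue.

```python
import time
from collections import deque

SCAN_INTERVAL = 600            # seconds between dispatching scans (10 minutes)

def dispatch_scan():
    """Placeholder for the prime task: telemeter scan and demand calculation."""
    print("scanning telemeters and computing system demand")

engineering_jobs = deque()     # lower-priority work submitted by other divisions

def executive_loop(run_seconds=3600):
    """Run the high-priority scan on schedule; use spare time for other work."""
    start = time.monotonic()
    next_scan = start
    while time.monotonic() - start < run_seconds:
        now = time.monotonic()
        if now >= next_scan:
            dispatch_scan()                        # priority task comes first
            next_scan += SCAN_INTERVAL
        elif engineering_jobs:
            job = engineering_jobs.popleft()       # fill the vacant time
            job()
        else:
            time.sleep(min(1.0, next_scan - now))  # idle until something is due

# engineering_jobs.append(lambda: print("heat-rate calculation for another division"))
# executive_loop(run_seconds=1800)
```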
Chapter XXIV

APPLICATIONS IN THE FIELD OF ACCOUNTING

The well-known advantage a computing system offers for accounting report preparation and file updating is the preparation of all required data with one handling of the information. By eliminating individual, diverse, and overlapping steps, time and cost savings can be realized along with an increase in efficiency and accuracy. But the "computerized" methods thus far used in petroleum general accounting applications have left much to be desired.

In Chapter XXIII, we made explicit reference to what we consider to be a rational information system for management use. Accounting should act as the feedforward element of this system, and this in itself means that accounting should work in close connection with mathematical simulation; it should use the most recent concepts and devices in advance index evaluation, optimization, cost analysis, and experimentation. But how often is this the case? How many companies or organizations have the guts to "derust" their accounting systems?

Examples of the effects of patching, and of the outcome of the partial measures applied to rusty systems, are numerous. A good example comes from eastern Europe. The Russians, for one, have been considering computer control of fuel-power supplies. Their objective was to establish the most economic methods of distributing coal throughout the entire country from existing coal basins, but no evaluation was carried out to determine whether coal is indeed the most economic fuel to distribute. Piped gas, oil, or high-voltage electricity have proven to be less costly commodities for distribution. A study of this nature obviously should start with fundamentals, establishing a standard cost system and implementing rational cost accounting procedures. It is useless to use simulators and computers to determine the most economic distribution of existing fuel supplies, while being in no position to evaluate the cost effectiveness of the different alternatives because of old-fashioned bookkeeping.
The cost of a thorough analytic study to cover the foundations of the systems and procedures work is small compared with the magnitude of fuel power problems in a major industrial country which has its fuel power sources and its industries spread over such a large slice of the earth's surface. About one-third of all Russian freight turnover, it seems, is taken up with the transport of fuel. Fuel power production and distribution absorb around a quarter of all industrial investment and one out of every ten industrial workers. The size and scope of the problem is rapidly changing as industry expands and the proportion of gas and oil to other fuels rises.

Some five years ago, the Russian government demanded an optimal control plan for fuel power production and distribution for the entire country, and for separate economic regions. A plan was produced, but apparently it did not yield the desired results, if one judges from the commentaries this plan got within the country: "Control must be optimal in the strictest sense of the word because a deviation of even a few per cent causes losses measured in billions of rubles ...." Or " ... can one get an optimal balance for fuel power as a whole merely by adding together the optimal balances for coal, oil, gas, and electric energy?"

Exact and analytic cost accounting is the first of two major conditions that must be met before further progress can be made. The second is a precise appreciation of the relative merits of basic fuel and power supplies. As in many other cases, applied mathematics and computers should have been considered in the next step; instead they were treated first. The Russian analysts established:

• The quantities of fuel power resources in all economic councils of the Soviet Union.
• The "firm" requirements in coal, oil, gas, and electric energy.
• The "conditional" requirements in caloric values which can be supplied by any of the four fuel power sources mentioned, and other factors relating to distance from sources, transport costs, and the like.

"But to establish balances for the future, this is still insufficient," they commented, "What about fuel power resources for factories now being built or reconstructed?" For this, one needs economically valid prices for timber, metal, machines, and general material resources used in the production of fuel and power; in equipment for fuel power installations; and in power transmission facilities. One also needs valid transport tariffs based on real production costs. It is like the English recipe on how to cook a rabbit: "First catch the rabbit ...."
COMPUTERIZING OIL AND GAS DATA

Gasoline accounting, for one, presents a good potential for an integrated data processing approach. This involves three main phases. The first is gas measurement. The second consists of the allocation of volumes and values in connected field systems of gas facilities. The third includes royalty accounting and disbursements, preparation of earnings and expense vouchers, and preparation of reports required for company operations and governmental agencies. Input, processing, and output of data throughout the range of petroleum operations is shown schematically in Fig. 1.
FIGURE 1. Schematic of input, processing, and output of data throughout the range of petroleum operations (company management).
Many of the foregoing problems are of a general nature to commercial concerns. In recent years, for example, the government has required each business to compile records of the number of employees on the payroll, hours worked, wages paid, and various contributions that are deducted from the employees' wages. Apart from the obligations imposed by the Federal Government, the company must keep records for the State Government, and also give each employee a detailed statement of wages earned, deductions for federal and state income taxes, social security, and take-home pay. In addition, petroleum companies have problems of a more specific nature.
The areas of application of integrated data processing in the field of general ledger accounting for an oil company are:

• Capital and surplus
• Creditors, account charges, deferred liabilities
• Movable assets in existence at the effective date
• Depreciation reserve on movable assets in existence at the effective date
• Debtors, prepayments, and deferred charges
• Cash at bank, on hand, and in transit

The way integrated files would be used can be exhibited by advancing the foregoing classification one more step in detail, as shown in the accompanying tabulation:

1. Capital and surplus
    Share surplus
    Earned surplus
    Dividend paid

2. Creditors, account charges, deferred liabilities
    Accounts payable
    Deposits of cash
    Retention fees withheld
    Liability estimates
    Mobilization advances
    Unclaimed payments
    Salary and wage control
    Liability for goods and services not billed
    Liabilities in general
    Accrued staff movement expense

3. Movable assets in existence at the effective date
    Opening balance
    Additions
    Retirements
    Sales/transfer

4. Depreciation reserve on movable assets in existence at the effective date
    Opening balance
    Current provision
    Retirements
    Sales/transfer

5. Debtors, prepayments, and deferred charges
    Accounts receivable pending
    Unbilled integrated charges to the company
    Provision for bad and doubtful accounts
    Amount due by employees
    Claims and deposits
    Deferred payroll transaction

6. Cash at bank, on hand, and in transit
    Mounting credits
    Cash at bank, current
    Bank interest receivable
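As a rough illustration, not taken from the original text, the tabulation above could be carried in an integrated file as a simple nested structure keyed by the major account groups; the two-level account code shown is purely hypothetical.

```python
# Hypothetical representation of the general-ledger classification shown above.
GENERAL_LEDGER_CLASSES = {
    "Capital and surplus": [
        "Share surplus", "Earned surplus", "Dividend paid"],
    "Creditors, account charges, deferred liabilities": [
        "Accounts payable", "Deposits of cash", "Retention fees withheld",
        "Liability estimates", "Mobilization advances", "Unclaimed payments",
        "Salary and wage control", "Liability for goods and services not billed",
        "Liabilities in general", "Accrued staff movement expense"],
    "Movable assets in existence at the effective date": [
        "Opening balance", "Additions", "Retirements", "Sales/transfer"],
    "Depreciation reserve on movable assets in existence at the effective date": [
        "Opening balance", "Current provision", "Retirements", "Sales/transfer"],
    "Debtors, prepayments, and deferred charges": [
        "Accounts receivable pending", "Unbilled integrated charges to the company",
        "Provision for bad and doubtful accounts", "Amount due by employees",
        "Claims and deposits", "Deferred payroll transaction"],
    "Cash at bank, on hand, and in transit": [
        "Mounting credits", "Cash at bank, current", "Bank interest receivable"],
}

def account_code(major, detail):
    """Derive a simple two-level account code from the classification."""
    majors = list(GENERAL_LEDGER_CLASSES)
    i = majors.index(major) + 1
    j = GENERAL_LEDGER_CLASSES[major].index(detail) + 1
    return f"{i}.{j:02d}"

print(account_code("Capital and surplus", "Earned surplus"))   # 1.02
```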
Under former procedures the information necessary to accomplish these accounting objectives was almost always scattered throughout many departments. One department is responsible for billing information to all accounts, another department is responsible for credit information, another for accounts receivable, while the cash balance may be handled by a different division altogether. The burdens imposed on a company by this method of operation are significant. Inquiries to an account from either inside or outside the company frequently result in a maze of intercommunication to obtain the desired information. This is a costly operation, and one should not underestimate the possibility of errors caused by scattered handling of the files. Worse yet, this maze of disorganized data can mask the facts.

An accurate and timely accounting system begins with the proper handling of the initial source information. As far as customer billing is concerned, this means measurements. Measurements, in the sense used here, encompass the work heretofore performed in the major producing division offices of a petroleum concern. This, in turn, includes several stages. It is necessary to compute the flow of gas through orifice meters. Meter charts containing continuously recorded pressure data are converted to numerical quantities by means of a chart integrator, which is a special-purpose mechanical analog computer. The integrator result must be converted to quantities expressed in standard units of volume by the multiplication of a series of factors which give effect to kinetic and thermodynamic laws which govern gas measurements. Where meters are not installed to measure gas, such as in the case of small volumes used as fuel in field operations, a system of estimating is employed. Nevertheless, in either case, it is necessary to accumulate figures that will enter into the following phase, namely, the allocation and assembling of volumes and values in a connected field system of gas facilities.

The latter phase is the heart of the entire oil and gas application. It is the area where utmost accuracy is demanded. In allocation and assembling, the objective is to determine the amount of gas each lease contributes to various types of dispositions. These dispositions include sales to transmission companies, gasoline plants, and carbon-black manufacturers, as well as gas used for fuel, repressuring, gas lifting, and gas that is flared. The salient problem here is to maintain the files with factual information and bring the proper figures into perspective. Frequently, notification of changes must be routed to a considerable number of locations. The physical difficulties involved in maintaining files in this manner cause delays in posting of charges. These delays result in less satisfactory service to the customer, while errors, which must later be corrected, are introduced. An efficient data handling system should enable the company to record all of the information pertaining to an account in an integrated file:
• Customer description
• Use history
• Accounts receivable
• Name and address
• Credit history
• Buying information

All this information should be included, along with the proper statistics and past performance evaluations. The central computer should be made "responsible" for all inquiries on the customer accounts, communicating through the interface machines with the branch locations. Thus, a high degree of timeliness can be obtained while, for all practical purposes, discrepancies will be nonexistent.

Figure 2 presents a data organization scheme for customer accounting purposes. Mathematical statistics have been used in data reduction, to help identify change situations and use "data tendencies" in an efficient manner. A parallel system keeps the customer quality history, including both the use of statistics (on a comparative basis) and credit information (Fig. 3). In the background of this data handling activity is the performance of five major functions:
FIGURE 2. Data organization scheme for customer accounting (heading, meter data, demand for the previous year, 12-month statistics, open items, and ledger data).

FIGURE 3. Customer quality history (credit statistics, quality statistics, performance and use statistics, ledger, last entry).
FIG. 4. (a) Distribution of individual sales orders by dollar value. (b) Cumulative dollar value per dollar class of the individual sales orders.
it is possible to determine critical cutoff points, and subsequently where it would be more economical to check all invoices because of the smaller volume involved and the greater risk due to high value errors. Stratification can also be carried into greater detail by subdividing the population into various "risk groups." Smaller sample sizes can be taken as the risk decreases. A statistical analysis may indicate that the low value invoices, usually representing a large volume, need not be checked at all. This was the example of the foregoing case; its implementation basically
means management by exception. Random sampling with its computed risks provides greater accuracy at less cost, improves recovery with less fatigue, and results in lower error rates. Concentration on the lots containing most of the "errors" promotes more efficient checking. Sampling of accounting data permits a fast and reliable means of determining areas of greatest error, so that effective corrective action can be taken.

In a systems approach, error reduction can be handled through the usage of control charts in connection with sampling. To date, statistical quality control charts have been used mainly in connection with industrial production. Their suggested use in accounting is based on the generic fact that the process of controlling "quality" is unchangeable; it is only affected by the dependent nature of a verification. The accountant carrying out the quality assurance operation may be affected by what he sees in determining whether the element is identical with the "truth" or whether there is a discrepancy. The use of mathematical statistics presents in itself solid safeguards that the outcome of an audit will not be biased by subjective judgment. As a management tool, the statistical sample should constitute an integral part of the data network.

The airlines in America, for one, have adopted a sampling system for their interline accounting.* Many passengers buy "through" tickets for journeys that utilize more than one airline. The whole fare is collected by the company that originates the journey, and so the problem is to account for the money that has to be paid to other lines for the later stages of the trip. Since there are journeys in the opposite direction, there also exists a reverse flow of credit. Hence, the cash payments necessary between airlines are a final balancing operation and are small compared with the actual turnover in each direction. It has, therefore, been agreed that the determination of the adjustment with "absolute accuracy" is not justified because this would necessitate the clerical processing of all ticket vouchers. The solution that has been chosen calls for a sample of vouchers to be taken; the results from this sample determine the cash adjustments between the airlines. To our inquiry on this matter, a leading airline answered as follows:

We utilize a sampling technique to settle interline balances with seven other large airlines. The following steps are involved:

a. Tickets are sorted into the following categories: First Class; Coach; Exclusions (this includes half-fare tickets, excess baggage tickets, and several other low-volume categories).
b. The exclusions are billed on an individual basis.

c. A 10% sample of the First Class and Coach categories is obtained by selecting tickets with serial numbers ending in a predetermined digit. The digit is selected by means of a table of random numbers and a new selection is made each month.

d. The sample tickets are priced and the remaining tickets are billed based on an average value that is developed from the sample tickets. . . . there are a number of built-in validity checks and safeguards against getting a biased sample.

* Similarly, the railroads are said to proceed in a comparable manner in their accounting procedures, after having experimented with parallel runs on the accuracy of the inductive accounting approach.
Or, as another airline was to say:

The sample of passenger tickets selected is based on the terminal digits of the tickets. The size of the sample is decided by the number of terminal digits selected. The local (point-to-point) fare is determined for the population and the actual revenue earned is determined for just the sampled items. The relationship (regressive estimate for domestic tickets and ratio estimate for international tickets) of local fare to actual fare for the sampled items is then applied to the population local fares in order to estimate the population actual fares.
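The ratio-estimation step described by the second airline can be sketched in a few lines. The example below is a schematic reconstruction, not the airline's program, and the ticket figures are invented: the ratio of actual to local fare over the sampled tickets is applied to the total local fare priced for the whole population.

```python
def ratio_estimate(sample_local, sample_actual, population_local_total):
    """Estimate total actual revenue from a fare sample (ratio estimator)."""
    ratio = sum(sample_actual) / sum(sample_local)
    return ratio * population_local_total

# Invented figures: local (point-to-point) and actual fares for sampled tickets.
sample_local  = [120.0, 95.0, 240.0, 60.0, 180.0]
sample_actual = [105.0, 88.0, 199.0, 60.0, 151.0]

population_local_total = 1_250_000.0   # local fares priced for every ticket
estimate = ratio_estimate(sample_local, sample_actual, population_local_total)
print(f"estimated actual revenue: ${estimate:,.0f}")
```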
Diverse areas of financial and accounting control, such as overtime work, absenteeism, budgetary measures, and performance evaluations, indicate a need for effective inspection and measurement methods to provide the company with sharper tools than are otherwise available. But if sampling is a means for implementing approaches to data collection, subsequent data reduction techniques must also be considered. Traditionally, the results of sample surveys are produced in the form of distributions, means, medians, or aggregates. With computer use, it is possible to determine, for instance, the second moment of a distribution, so that added insight is obtained into the performance of the accounting system. This brings our discussion back to the need of establishing and testing appropriate models for the composite examination of multiple variables.

The results of a statistically designed audit should also be accompanied by statements of the precision involved in their making. Structural to this requirement is the establishment of estimates of parameters from large masses of data. Such is the case with the determination of intraclass correlations at the level of clusters of small size, and of the analysis of variance. The mass-scale determination of correlation coefficients involved in ratio and regression estimates might lead to more efficient utilization of complex stochastic processes, requiring smaller samples while assuring precise results.

In conclusion, the use of mathematical statistics for financial control purposes falls within the framework of digital control functions. Accounting management has a structural similarity with a test of significance based on the null hypothesis. In the twenties, Shewhart treated the problems of quality control through a significance test adapted for workshop use. This helped point out the exceptions that needed attention; exceptions due to "assignable causes." But tests of significance are also particularly appropriate in helping managers to judge control information.
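A Shewhart-type chart of the kind alluded to here is easy to sketch for accounting verification. The fragment below is offered only as an illustration with invented error counts: it computes three-sigma control limits for the fraction of vouchers found in error in each audited lot and flags lots whose error rate points to an assignable cause.

```python
def p_chart(error_counts, lot_size):
    """Return (p_bar, lcl, ucl, flagged_lots) for a fraction-defective chart."""
    fractions = [e / lot_size for e in error_counts]
    p_bar = sum(fractions) / len(fractions)
    sigma = (p_bar * (1 - p_bar) / lot_size) ** 0.5
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    flagged = [i for i, p in enumerate(fractions) if p > ucl or p < lcl]
    return p_bar, lcl, ucl, flagged

# Invented data: vouchers in error out of 200 checked in each of ten lots.
errors = [4, 6, 3, 5, 7, 4, 15, 5, 6, 4]
p_bar, lcl, ucl, flagged = p_chart(errors, lot_size=200)
print(f"average error rate {p_bar:.3f}, limits ({lcl:.3f}, {ucl:.3f})")
print("lots needing attention:", flagged)     # lot index 6 (15 errors) is flagged
```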
Chapter XXV

CONTROLLING A POWER PRODUCTION PLANT

Electric utility companies are constantly searching for methods of producing electric power more economically. This must be accomplished while maintaining safe, dependable operations and without risking service interruption to the customer. In turn, such requirements imply substantial improvements in the operating performance and reliability of individual plant equipment and a reduction of the operating costs of the entire system:

(1) The upward trend of fuel costs is met by the installation of larger, more efficient generating units. Power stations are built on the unit principle and there has been a great increase in the size of the individual units in the last few years: 500-MW and 1,000-MW units are currently projected. The nature of the load and the advent of nuclear power require that data control systems be implemented to maintain their efficiency.

(2) Rising labor costs have been offset by a reduction in plant operating personnel. A reduction in operating costs is achieved by consolidation of widely scattered boiler, turbine, and generator control panels into one central guidance network. For this network, an automatic real-time supervisory control system is required, to properly optimize certain manipulated variables and to perform calculations vital to improved operability.

Digital process control can provide a more effective plant equipment design and allow operation closer to design limits. It can detect equipment failure at an early stage of error development, and maintain high standards of safety.
Digital automation at the power factory level is often approached in a "step-by-step" fashion. A first approach is to establish a system able to collect, correlate, and compute data to produce information that would guide the operator to possible means of increasing plant efficiency. The following step involves expansion of the required equipment to accomplish automatic startup and shutdown of the entire generating unit. The third step
calls for a computer control system able to actually run the plant. The generating unit is directly under digital control, with the information machine correlating the operational data to produce and execute decisions.
INPUT AND THROUGHPUT ACTION FOR POWER PLANTS

As with all process control applications, the essential functions in guidance for power are: collection of operational data, correlation and reduction of the data to produce guidance information, evaluation and comparison of this information with references to decision making, and, finally, execution of the decisions to achieve the desired operating conditions. Implementation of these functions in power plants calls for the storage of considerable amounts of data, such as norms, past performance, formulas, and the like, which are subject to frequent changes. The efficient collection of operational data implies the existence of a substantial number of:

• Contact sense points for detecting on-off conditions of the plant equipment.
• Contact operate points for on-off switching of the plant equipment.
• Analog inputs for measurements of plant variables.
• Analog outputs that represent the analog equivalent of calculated digital values to adjust the controller's set points.
• Pulse counts to accept plant information in digital form from kilowatt-hour meters, etc.
• Priority interrupts that instantaneously interrupt the routine operation of the computer in order to handle high-priority occurrences in real time.

The frequency of automatic logging cycles cannot be decided in a general or arbitrary manner. It must be determined experimentally by the rate of change of the process variables. In the case of a base-load station, the routine logs usually need not be more frequent than, say, once an hour. The trend points might be logged at 24-hour intervals. For stations operating on a two-shift basis, it is advantageous to have a variable logging interval, so that during startup and loading the logging frequency can be greater than during normal running times. Every care should be taken to incorporate such fundamental conditions in the design of the data logging equipment.

The sensory elements will be coordinated to the real-time computer through an "input interface." This needs to include interconnections between the plant measuring instruments, transducers, and the data logging equipment. It is at this interface that difficulties usually arise, not only at the design stage but also during systems testing. Some of these problems may
result from the conversion of input signals into digital form.* The provision of a converter for each input channel brings about certain economic considerations, and for this reason a shared converter is often employed. This means that all the input signals must first be translated to a certain common language. A data logger organization is shown in Fig. 1.

The interface system must answer in an able manner questions of variety and of incompatibility. Apart from the electrical input signals, from transducers associated with process variables, such as pressure and flow, the data logger must be capable of accepting inputs representing the output of a variety of other media. A distinction here is that the instrument signals are at relatively low level while, for example, the power measurements are derived from current and potential transformers, with operational characteristics at a higher voltage level. Also, many of the input signals will be derived from primary measurement instruments which have nonlinear relationships between the output of the transducer and the physical quantity being measured. The nonlinearities may be merely a slight departure from strict proportionality over the range or parts of the range. In other cases, the measurements may be of an implicit nature involving functions of a variable. The linearization function requires that the signal be modified, and this modification depends upon the value of the signal. All these aspects will have to be studied in a detailed and precise manner.

An interface operation should obviously reflect basic operational requirements, for instance, the continuous automatic scanning of inputs and printing records of any points that deviate from preset low and high limits. This brings into perspective the subject of output coordination. Instantaneous output will be needed to give point identification numbers, the time of the alarm and the value, and the time and value when the point returns to normal. Depending on operational requirements, the occurrence of an abnormal condition may also need to actuate a visual or acoustic warning device to attract the attention of the operator. The output devices may be strip-printers, page-printers, typewriters, or visual displays. Their selection should be based on technico-economic considerations with technical requirements becoming imperative in the selection of media for recording the occurrence of alarms. A preferred format of the record today is a print-out of the time of the alarm followed by the point identification number, high or low limit symbol, and the measured value. In a more sophisticated data logger, a page-printer can be used for the alarm records, the additional printing space being used to record the high and low alarm limit settings. In a process control for boiler operations, for instance, the format
* See also Chapter VI.
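A bare-bones sketch of the scanning and alarm-recording logic described above follows. It is illustrative only: the point names, limits, and the read_point stand-in are assumptions, not plant data, and the print-out simply follows the record format mentioned in the text (time, point identification number, high or low symbol, measured value).

```python
import random
import time

# Assumed scan table: point identification number -> (low limit, high limit)
SCAN_TABLE = {101: (0.0, 550.0),    # main steam temperature, deg C
              102: (120.0, 180.0),  # drum pressure, bar
              103: (0.0, 600.0)}    # generator output, MW

def read_point(point_id):
    """Stand-in for the analog input subsystem; returns a measured value."""
    return random.uniform(0.0, 650.0)

in_alarm = set()

def scan_once():
    """One scan cycle: check every point against its preset limits."""
    for point_id, (low, high) in SCAN_TABLE.items():
        value = read_point(point_id)
        stamp = time.strftime("%H:%M:%S")
        if value > high and point_id not in in_alarm:
            in_alarm.add(point_id)
            print(f"{stamp}  point {point_id}  HIGH   {value:8.1f}")
        elif value < low and point_id not in in_alarm:
            in_alarm.add(point_id)
            print(f"{stamp}  point {point_id}  LOW    {value:8.1f}")
        elif low <= value <= high and point_id in in_alarm:
            in_alarm.discard(point_id)
            print(f"{stamp}  point {point_id}  NORMAL {value:8.1f}")

scan_once()
```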
FIGURE 1. Data logger organization (warning system, comparator, and connections to the input/output media and the central processing system).

PART VII

Applications in the Metal Industry

Chapter XXVIII

PRODUCTION AND INVENTORY CONTROL
At the production floor level, the completed move orders must also be processed; this information is provided by material handlers within the mills. A computer sorting operation would be performed to arrange the above data in a sequence suitable for processing against the machine load master. For any manned production planning operation one of the most difficult jobs is the order loading and unloading of all manufacturing units within the mill. This kind of data manipulation is one of the simplest for computer
applications. In one pass, updating the machine load master file from previous processing, the new orders are treated as debits to machine load while the feedback from data gathering units throughout the mill is processed as credits to the load. The computer will relieve a particular unit of the orders it has processed during the previous shift, taking into consideration variables such as breakdowns, and then load into the unit all new orders, reprocessing what is necessary to complete the order to the customer promise. The logic necessary to perform this loading and unloading will be determined from established production levels of all units and priority processing categories for all classes of orders (key customer, emergency, stock, and the like). Similarly, a matrix of possible substitutions of materials, and processing, by grade, condition, etc., will be used by the computer. As a joint product of this operation, a new machine load master file is established with an accompanying machine load report which will spotlight the "bottleneck"
situations and, through careful analysis, enable management to develop new manufacturing techniques and establish revised parameters to meet current production levels and facility usage. Utilizing the sorted data, as above, two distinct file maintenance operations will then be performed by the computer:

• Order file maintenance
• Inventory file maintenance

In the inventory file updating procedures, the steel application tickets reflecting the actual physical application of metal to an order are processed against raw material inventory files to reflect current status. This updated file is used in the following day's processing as input to the steel application. Simultaneously with the updating of inventory, new orders, which have been carried throughout the entire data processing sequence thus far, are introduced into the open order master file as a part of the order file maintenance operation.

Orders and Inventory
Completed manufacturing operations, as reported by the data gathering
units, are used as updating media for in-process orders reflecting actual pounds and costs and compared to standards. All completed orders are pulled to a work tape for subsequent processing and, as a result of this "completing" operation, teletype shipment tapes are prepared and transmitted to the central computer. An order status report is also produced, showing in detail the current status of all orders at a particular location. By exception reporting, manufacturing problems can be brought to light while the updating operation is taking place. These can be either problems that have been encountered or those that will be encountered unless corrective action is taken. The procedure is fairly simple. As soon as:

• Completed orders
• Open orders
• Current inventory

have been established by the machine, information is available to be sorted, manipulated, and classified, to produce timely, accurate management reports including inventory control and turnover on low-profit operations, order execution, departmental performance, adherence to standards, and quality control histories. These reports are produced for factory management by the satellite computer.
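The debit-and-credit treatment of the machine load master described earlier in this chapter can be pictured with a short sketch. The structures and figures below are assumed for illustration only: new orders are loaded against each unit as debits, completed operations reported by the data gathering units are credited off, and units whose load exceeds an assumed capacity are listed as bottlenecks.

```python
# Hypothetical machine load master: unit -> scheduled hours of work
machine_load = {"hot mill": 310.0, "cold mill": 185.0, "slitter": 96.0}
CAPACITY_HOURS = {"hot mill": 336.0, "cold mill": 336.0, "slitter": 168.0}

new_orders = [("hot mill", 42.0), ("slitter", 88.0)]     # debits to the load
completed  = [("hot mill", 55.0), ("cold mill", 20.0)]   # credits (shift feedback)

def update_load(load, debits, credits):
    """Relieve each unit of finished work, then load the new orders."""
    for unit, hours in credits:
        load[unit] = max(0.0, load[unit] - hours)
    for unit, hours in debits:
        load[unit] = load[unit] + hours
    return load

def bottlenecks(load, capacity):
    """Units loaded beyond their assumed capacity."""
    return [u for u, h in load.items() if h > capacity[u]]

update_load(machine_load, new_orders, completed)
print(machine_load)                       # updated machine load master
print("bottlenecks:", bottlenecks(machine_load, CAPACITY_HOURS))
```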
Special "product progress reports" are prepared for production planning. They include order number, customer abbreviation, mill grade name, department number, machine number, sequence number, operation description, and standard data relating to the operation under consideration. The listings are established in a scheduling sequence and are accompanied by a set of more limited reports, whose characteristics depend on their subsequent use. These are separated and distributed as follows: • A copy is withheld in production planning which becomes a reference media for the determination of material movement and order status. • A copy is given to the foreman, so that he can know and direct the schedule operation within his area of responsibility. • A copy is given to the materials provider to help establish the sequence that he must observe in assuring that the material so scheduled becomes available for its processing in the manner that it is indicated to move from one area of scheduled operation to another. • A copy, plus a deck of interpreted cards in identical sequence, are given to the operating units. Upon completion of each scheduled operation, the mill or machine operator uses the subject input card as one media for immediate production recording. Special data gathering units distributed along the work centers are able to accept: • The tabulating card, which records "fixed" information. • Plastic or metal "slugs," which record "semifixed" information, such as operator and machine identity. • Variable information, manually posted, which cannot be known until the operation is actually performed. This includes produced pounds, scrap loss, and material conditions code. The operator inserts the various requirements of the message that he is about to transmit. He then presses the transmission button. This signals the remote station sequential scanner which is located at some interim point between the numerous remote stations and the data processing department. Its function is to establish direct connection with the central recorder for the receipt of one message at a time. It then sequentially seeks and establishes further connections from other remote locations as the need for transmitting service is indicated. The central recorder receives and records the address of the sending station. It assigns the time that the message was received. This information is automatically punched into paper tape. In turn, this tape will become immediate input to the satellite computer. The tabulating cards are referred
back to the production planning department, where they become a visible as well as machinable record of past performance. At the central computer, the order has already been updated. All that is now necessary is some limited information concerning the shipment. This triggers the printing of an invoice, the updating of the central bookings tape, and the preparation of the necessary accounts receivable and sales analysis records. The computer can control the shipment data, establish shipment performance, and follow up open orders.
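The message flow from the data gathering units through the sequential scanner to the central recorder can be sketched as follows. This is only an illustration in Python; the message layout, station fields, and time source are assumptions introduced for the example, not details of the installation described above.

    import datetime

    def receive_messages(stations):
        """Poll remote stations one at a time, as the sequential scanner does,
        and build the records the central recorder punches into paper tape."""
        tape_records = []
        for station in stations:                  # one connection at a time
            message = station.get("pending")      # card, slug, and manual data
            if message is None:
                continue
            record = {
                "station_address": station["address"],
                "time_received": datetime.datetime.now().isoformat(timespec="seconds"),
                "card": message["card"],          # "fixed" information
                "slugs": message["slugs"],        # "semifixed" information
                "manual": message["manual"],      # pounds, scrap, condition code
            }
            tape_records.append(record)           # input to the satellite computer
        return tape_records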
Chapter XXIX

QUALITY ASSURANCE AS A REAL-TIME APPLICATION

Prior to the fifties, the pace of industry, the level of product complexity, and the importance attached to quality were such that shop inspectors, usually part of the manufacturing organization, could handle the job adequately. These inspectors were production men with a more or less good knowledge of the shop process and the functions of the hardware. They inspected what they considered important, took action "as required," and in general fulfilled a vital need in the organization.

But technological evolution, with the mass production effects that followed it, put this function in a new perspective. Product volume and complexity made "time-honored" artisan methods for quality assurance no longer valid. "Inspection" became a management problem, and quality control organizations were brought into being. With the aid of advanced technology, the quality assurance function came to be characterized by the use of sampling techniques, the tailoring of the inspection job to measure the critical dimensions, the "bringing forward" of the quality subject to focus on engineering specifications, the classification of the importance of defects found in a production line and, later, the establishment of the fundamental role of reliability. With this, we see the beginning of the approach to product assurance as an entity in itself.

What is really new is the concept of continuity: that matters of product assurance constitute a process problem, like that of refining or of power production. This means that, even though quality evaluation trials have commonly been undertaken in the past, the present availability of electronic information systems gives them a new emphasis. The need for dependability makes the performance of "independent" and "unrelated" tests prohibitively inefficient. It is therefore essential to conduct trials that will provide information about product performance in such a basic form that it can, throughout the life of the product, be used to predict performance in new operational situations as they arise.
In studying matters concerning product assurance, a mathematical model of product use needs to be formulated and, subsequently, used to predict performance under conditions in which trials cannot be conducted. Theoretically, an improvement in evaluation procedures results if the trial conditions are statistically designed to reveal the effects of important parameters. Unless only very gross over-all effects are to be determined, a substantial sample of results is required in each condition, because of the statistical variability of performance. Practically, this is not always feasible, and this is another reason why industry has to establish a continuous data trial for quality follow-up.

The use of computers at the industrial production level has made this "continuous trial" idea possible. Computers provide the means to plan, operate, and control the advanced quality systems that mass production requires. This is valid provided the proper analysis has preceded the projected installation, and provided management realizes that product quality is not only important in itself but must also be related rationally to costs. A common industrial fallacy is that good quality is always costly, and that inferior design and materials, sloppy workmanship, inadequate testing, and careless servicing "save money." The risk is losing much more than one "gains"; besides, poor quality is the most expensive item one can put into a product. The analysis of short- and long-range quality trends does help bring this into perspective.

In Chapter XVI, we made reference to the foregoing concepts as they apply to the electronics industry and, more precisely, to the design, manufacturing, and operation of data systems. In the present chapter, we will consider how total quality assurance can be applied in the production process itself, with the computer used as an efficient means for data integration and treatment to product assurance ends.
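The remark about sample sizes can be made concrete with a standard calculation. The sketch below, in Python, estimates how many trials are needed in one condition to pin down mean performance within a stated margin; the variability figure and the margin are assumed values chosen only for illustration.

    import math

    def required_sample_size(sigma, margin, z=1.96):
        """Trials needed so the sample mean estimates the true mean
        within +/- margin at roughly 95% confidence (z = 1.96)."""
        return math.ceil((z * sigma / margin) ** 2)

    # Example: performance scatter (sigma) of 12 units, desired margin of 3 units.
    print(required_sample_size(sigma=12.0, margin=3.0))   # about 62 trials per condition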
QUALITY EFFECTS OF MASS PRODUCTION

By quality assurance of the mass products of industry, we understand their functional operation for a specific time period under a combination of conditions specified by standards and technical requirements. The effort should start at the plant laboratories which perform functional tests, the findings of which are, more often than not, neither properly analyzed nor analytically evaluated. As a result, it remains practically unknown whether there is an improvement or a deterioration in the quality of the product, and whether, and to what degree, the combined production quality meets the standards and technical requirements.

Not only should a total approach be taken towards quality problems, but
also, at each level, quality test results must be analyzed objectively. This, too, is contrary to the current handling of quality matters, where the evaluation of test results is subjective and depends upon the experience and the "good will" of the different inspectors. This is not meant to undervalue the established statistical quality control approaches, but the volume of industrial testing often does not guarantee the dependability that product quality evaluation should have. We have finally come to realize that the rather haphazard inspection procedures, which have been used for many years with seeming success, are no longer economically acceptable or sufficiently effective for controlling:

• The quality of internal operations
• The adoption of subcontracting programs
• The enlarged scope of purchasing activities
• The advent of new materials
Current production processes have magnified a structural need which, somehow, had managed to escape attention. The requirements of the mass market itself have focused our attention on the inadequacy and inefficiency of the present system of control and on the need for substituting a more formalized and analytic method in its place. Hence the interest in process control concepts: to describe the operating practices and procedures being established in order to obtain built-in quality in manufactured items, to analyze the factors that cause variations, to control these variations, to increase processing effectiveness, and to decrease waste and error.

Current advances in mathematics and technology allow us to redefine the need for establishing a continuous process to measure quality and to indicate specific causes of poor or substandard quality results. What we want is to establish ways of quickly detecting substandard material and of identifying the structural reasons behind it. In turn, the implementation of such a practice requires the handling of large numbers of unit records during the process of accumulating and analyzing quality data. This is much more than the simple employment of certain mathematical or statistical techniques. Perhaps in no other sector of industrial effort can the need for, the usage of, and the benefits to be derived from integrated data processing be better exemplified than in quality assurance.

The fact that the use of applied mathematics alone does not guarantee product control can be demonstrated in many ways. In a study the writer did quite recently, in the high-precision instruments industry, he observed an abundance of quality charts whose QC limits were constantly crossed by both the sample mean and the range. Justifying his case, the production manager argued that this mattered little, "since specification limits were too tight for the job, anyhow." Engineering answered by saying that specifications
had to be too tight, "since production would not have observed them, no matter what they were." This is not an isolated case, and it will continue to happen as long as data on quality are kept on a scalar basis at the shop level.

The thesis maintained here is that, through a company-wide integration of quality information, the "errors" committed during tests with respect to uniformity and conformance can be effectively curtailed. Also, the subjectivity of answers in the evaluation of these errors can be eliminated by introducing the concept of "standard quality," to indicate the conformity of the manufactured goods with standards and technical requirements. "Standard quality" should be measured by a process of selective plant tests, after pre-establishing the functional properties of each. The novelty here is the continuity and consistency this information will have. Through the integration of "standard quality" data, the company can obtain a quantitative evaluation of how the production process goes, down to its minutest detail. This requires the treatment of each type of test both separately and in a continuum, by all types of tests taken together. Management could thus predetermine the tendencies in production throughout the entire flow of goods. In turn, this will help measure the ability of the manufacturing organization to produce according to quality standards.

An approach which only a few years ago might have been just a specialized application by larger firms in a narrow operational field might, through process-type data control, develop into a comprehensive system ranging across the entire manufacturing process. This would effectively help enlarge the contribution of product assurance by placing special emphasis on total quality. To be effective, this emphasis should not be on quality for its own sake, but on quality in relation to production efficiency, cost performance, product reliability, and customer satisfaction.

The data integration for product assurance outlined so far is a natural evolution of quality control. In the sense of our discussion, while quality control deals chiefly with production phases, quality assurance starts earlier and goes further: from design, deep into customer use and product life. This last part requires a good deal of data feedback from the field; feedback which, handled efficiently, can help maximize preventive action in product planning, minimize the need for corrective action in the manufacturing stages, optimize monitoring, and guarantee satisfactory experience in usage. Similarly, once data integration has been put into effect, management can efficiently examine cost/quality relationships, an approach of great economic significance and promise. This presupposes:

• Organization of quality history files concerning each phase of the overall product cycle.
• Mathematical-statistical definition of problem areas.
• Identification of specific trouble spots.
• Pre-establishment of corrective action reporting in terms of cost.
• Feedback and relationship of data from one phase to all other phases of the product cycle.
• Practical use of advanced mathematical techniques in effecting product quality.
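To make the quality history file idea concrete, the sketch below shows one possible record layout and a conformance check against "standard quality" limits, with nonconforming records grouped to flag trouble spots. It is written in Python; every field name and limit value is an assumption introduced for illustration.

    quality_record = {
        "phase": "manufacturing",        # design, manufacturing, field use, etc.
        "order_no": "A-1042",
        "test_type": "coating thickness",
        "measured_value": 0.92,
        "standard_value": 1.00,
        "lower_limit": 0.90,
        "upper_limit": 1.10,
        "cost_of_corrective_action": 0.0,
    }

    def conforms_to_standard(record):
        """True if the measured value lies within the standard quality limits."""
        return record["lower_limit"] <= record["measured_value"] <= record["upper_limit"]

    def trouble_spots(history):
        """Group nonconforming records by phase and test type to flag problem areas."""
        spots = {}
        for rec in history:
            if not conforms_to_standard(rec):
                key = (rec["phase"], rec["test_type"])
                spots[key] = spots.get(key, 0) + 1
        return spots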
Furthermore, the successful implementation of a computer-oriented quality evolution will greatly depend on sophisticated programming. This programming effort has to reflect the usage of fundamental mathematical tools; with this, a computer-based system handling advanced quality information can be developed. Such a system can be used to monitor critical areas, in fabrication or assembly, collecting and comparing data in terms of cost and quality. In-plant feedback would assure that manufacturing and test data are fed back to engineering for improvement of the immediately following articles, an operation to be performed by means of in-process analysis, in real time.

Though this is perfectly true for all industry, metals in particular, being a base industry, feel the pinch. Admiral Rickover, speaking to the 44th Annual National Metal Congress in New York, made the following point: " ... in the development and production of nuclear propulsion system, I am shocked and dismayed to find that quality and reliability of the conventional items in the systems are inferior to the nuclear reactors themselves." The awareness about product assurance on the part of the nuclear reactor industry is in itself understandable when we consider the safety factors involved. It is also understandable that manufacturers of conventional components, such as valves, heat exchangers, or electrical gear, feel differently, because of inherent bias in that respect. They have been making these items for years and consider their processes to be "well under control," whatever this may mean. In a sense, this becomes a problem of leadership, and when the leader fails, the organization under him fails too.

Within the framework of the foregoing case, two examples can be taken.

Engineering design. In this case, statistical analysis helps determine reliability requirements. The changes necessary, and the chances of meeting these requirements, can then be predicted by the system. If predictions indicate that standards are set too high or too low, engineering tolerances would need to be reappraised, special items developed, or, inversely, standard items substituted for "special" ones through the appropriate value analysis. With this, product balancing can be attained, improving quality and minimizing over-all costs.

Materials and Supplies. Here a total quality system may automatically
analyze data on purchased components and establish the type of action that should follow. The systems manager would then only need to re-evaluate specifications for an acceptable range of quality; the integrated quality information will show to what extent received materials come within the new specifications. This approach can also be most useful to suppliers, furnishing them with conformance-analysis reports. Such reports should detail where items fail to meet specifications, helping their recipient improve his techniques and quality and guarantee performance to the user.

Figure 1 presents the results from a study in the aeronautical industry. It concerns three endurance parameters:

• Survival curves
• Mean life
• Failure level

Survival curves and the failure level have been calculated through both an experimental and a theoretical approach. The point here is that, should a continuous quality recording process exist, it would be possible to simulate and "feed forward" product assurance information. This, in turn, will help tailor a program that ensures that the technical requirements of the aircraft are met. Of what this program should consist, and what part it should play in the basic industry line (for metal suppliers, for instance), is a management determination based on the relationship to other crucial design factors. That this quality-oriented data network should not be allowed to grow to a size and shape beyond its financial boundaries is as evident as the fact that giving it less than its proper weight will be detrimental to final product quality.
FIGURE 1. Endurance parameters from an aeronautical industry study: survival probability and failure level plotted against operating time, with experimental results compared to theoretical curves.
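Given a continuous quality record of failure times, the endurance parameters of Figure 1 can be estimated directly. The sketch below computes an empirical survival curve and mean life from recorded failure times and compares them with a theoretical exponential model; the data values and the choice of an exponential law are assumptions made only for the example.

    import math

    failure_times = [120, 180, 230, 310, 340, 420, 480]     # hours; illustrative data only

    def survival_curve(times, checkpoints):
        """Fraction of articles still operating at each checkpoint (empirical)."""
        n = len(times)
        return [sum(1 for t in times if t > c) / n for c in checkpoints]

    mean_life = sum(failure_times) / len(failure_times)      # experimental mean life

    def theoretical_survival(checkpoints, mean_life):
        """Survival under an assumed exponential failure law with the same mean life."""
        return [math.exp(-c / mean_life) for c in checkpoints]

    checkpoints = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
    print(survival_curve(failure_times, checkpoints))
    print(theoretical_survival(checkpoints, mean_life))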
What we just said brings forward the subject of providing the most meaningful definitions of quality, as an applied concept and as a reporting practice. The specific objectives to be attained in this connection should include:

• Defining standard parameters of product assurance that would serve as a medium of effective communication on quality problems.
• Defining measures compatible with the mathematical theory of product assurance, and providing practical parameters that could be measured in the field.
• Providing measures of machine performance divorced as much as possible from operator's performance.
• Conforming as closely as possible to the thus established industrial standards in quality and performance reporting, throughout the "use" domain of the equipment.
• Avoiding the application of terms that cause conflict and confusion.
USING PRODUCT ASSURANCE INDICATORS

Our discussion in this chapter has specifically implied that process control computers can be of major help in establishing product assurance history and implementing "feed forward" concepts. But, for this to be true, quality has to be built into the product within the whole developmental cycle: from design to prototype models, tests, manufacturing, and the performance of final acceptability evaluations, in a way consistent with the principles we have outlined. It is an organizational mistake when the functional services responsible for determining quality standards do not expand to include the development phases, manufacture, and field usage. The concept of "reliability" must become a corollary to development, and "data feedback" a corollary to customer application and use, just as "quality control" is a corollary to production.

There exist, in fact, several aspects of the data integration for product dependability which are of practical importance. One is the direct result of a dependability-conscious organization, where there is constant pressure, from top management on down, for reports of the very latest performance figures. In attempting to satisfy this demand for known information, sampling problems are encountered. As a fundamental concept this extends well beyond the designer's board and the manufacturing floor, as will be demonstrated in the following paragraphs.

When making field measurements of article performance, it is desirable to obtain precise estimates of the dependability parameters. This has as a prerequisite the pre-establishment of the criteria for choice and of the value
ranges these parameters can have. It is also necessary to avoid frequent changes in the nature of the data collection system and in the criteria of choice. It may take months or even years to accumulate the quantity of data necessary to provide a high degree of statistical precision in the calculations. This brings forward the double aspect evaluation procedures should acquire:

(1) Scientific evaluation, or the determination and explanation of the reasons for the performance, and the discovery of any aspects in which improvements can be made.
(2) User evaluation, that is, the establishment of how appropriate the whole system is, provided that the task of achieving the serviceability objectives set by the producer has not been altered in a significant manner.

These two types of evaluation are not mutually exclusive. Economy in time and money demands that they be interwoven. As far as user evaluation is concerned, the precision with which any given "trial" can be recorded is limited by the accuracy of the field measuring techniques that must be used. A given article represents only one sample of a large population, all articles having manufacturing and setting-up tolerances within normal engineering limits and, for these reasons, having a "standard" performance within tolerance. The field feedback we suggest must reveal the true performance of the article under operational conditions. Field information must provide sufficient basic data about the performance, and the factors that affect it, to allow predictions and projections to be made with confidence for likely operational conditions. The collected data must reveal those deficiencies or limitations of the product that can be removed or alleviated by evolutionary development within its lifetime.

Pertinent to this point is the need for the determination of "failure indicators," that is, information that can be interpreted as "evidence" and give rise to quality precalculations. We can better define the foregoing by stating that whenever a man-made system is not performing its assigned function "satisfactorily," this provides an "indicator." The data can be emitted by the system itself, or by a subsystem associated with it. The interpretation of failure, which in the past was open to argument, is now mathematically defined, so what interests us most is a method of operation. The idea in itself is not new, since failure indicators have been used as an aid in designing and in maintaining man-made equipment, though rarely has one been built into the system. This underlines another point: the double need for incorporating a failure indicator into supposedly reliable equipment and for providing it with signal emission and possible transcription media. The need for such continuous indication implies that every part of the system is likely to fail. This implication is essentially an admission of our
inability to make all parts fail-proof. But the essential point here is that, since equipment fails, we need to build in the means for experimental evaluation and projection. Figure 2 illustrates this point. Failure rates can be reasonably well forecast, provided a continuous collection of quality information is made. Here the experimental curve is compared to three alternative theoretical curves. For the first 100 hours of operation, actual failure rate data coincide with those of theoretical curve 1. For the next 100 hours, actual failure rates average around the values of theoretical curve 2; then a scatter in failure rates starts, which in itself might well be a predictable characteristic. With this, for instance, the failure rate point at the 250 hours-of-operation level could be taken as an indicator for "feed forward" action.
FIG. 2. Estimates of failure rates: the experimental failure-rate curve compared with alternative theoretical curves over operating time.
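The feed-forward use of such a failure-rate record can be sketched as a comparison of observed rates against the theoretical curves, interval by interval, with a flag raised when the deviation leaves an agreed band. The Python fragment below is only an illustration; the interval width, the theoretical values, and the deviation band are assumed numbers.

    # Observed failures per 50-hour interval and two assumed theoretical rate curves.
    observed_rates = [40, 45, 60, 75, 120, 180, 260]       # failures per 1000 unit-hours
    theoretical_1  = [40, 45, 50, 55,  60,  65,  70]
    theoretical_2  = [55, 60, 70, 80,  90, 100, 110]
    deviation_band = 0.25                                    # 25% tolerated deviation

    def feed_forward_flags(observed, theoretical, band):
        """Return interval indices where the observed rate leaves the tolerated
        band around the theoretical rate; candidates for feed-forward action."""
        flags = []
        for i, (obs, theo) in enumerate(zip(observed, theoretical)):
            if abs(obs - theo) / theo > band:
                flags.append(i)
        return flags

    print(feed_forward_flags(observed_rates, theoretical_1, deviation_band))
    print(feed_forward_flags(observed_rates, theoretical_2, deviation_band))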
Two basic types of failure indicators can be considered. One of them frequently occurs without any particular effort from the designer. It is in series with vital functions of a device and is itself vital to satisfactory performance. Rapid determination of the exact cause of failure for most series-type indicators would require special gear. Alternatively, failure information could be collected locally and transmitted to a computer which, from that point on, would be responsible for interpretation and for calling for action.

The other type of failure indicator is a hardware "identifier" incorporated in the design with the explicit mission of indicating "failure" when it occurs. This identifier is connected in parallel with a subsystem or component which performs a vital operational function. Hence, its data transmission will automatically identify the part of the process that is in trouble. The product performance program can then follow each vital unit that fails, and isolate the trouble in that unit. The problem of determining the optimum size of parallel quality assurance connections in itself involves technico-economic evaluations. Furthermore, if the combination of parts which performs one or more vital functions and a failure indicator which
monitors them are considered as a system, the possibility that the failure indicator itself may fail should also be considered. The problem as a whole then constitutes a subject of optimal programming for redundant systems.

We can say, then, that selectively collected data become a vital factor in the product quality organization. If properly handled, they can be used to develop methods for predicting system performance, realizing error analyses, measuring quality, developing sampling plans, providing process controls, evaluating progress and programs, and ascertaining reliability. The inferences and subsequent corrective action can in turn be used to improve the product. The data should be selected from a variety of sources, including inspection and test reports from vendors, engineering, the factory, test bases, and the field. The following is a summary classification within the conceptual framework which is presented in Fig. 3.
FIGURE 3. Conceptual framework: information feedback linking manufacturing and quality control.
Development and Design
Throughout the phases of conceptual evaluation and preliminary design, reliability should serve as the integrating agency which assures coordination and compatibility between the various section programs. Much research activity will be involved at this point, and it is imperative to assure that at least the specified environmental and life limits will be observed. To ensure proper coordination, configuration histories should be maintained on each subsystem and component unit. This should include not only items produced during the development program in question, but also component units now in use with other ensembles. Such a history can be compiled from design,
manufacturing, and inspection data, and may be used for analysis purposes. In the foreground of this effort is the fact that no system is totally new. Its materials, its components, or its subsystems will have been used somewhere, somehow, with another system. Such a case was recently faced by the writer when he was asked to evaluate the reliability of a receiver-emitter. The system was composed of six major units. Four of them had been in the field as subsystems of other ensembles for over three years, but no performance data were available. One unit was a prototype model, in use with military equipment. Here again, nothing was available about its quality behavior. Had there been data about these five subsystems, it would have been possible to proceed with the study by analyzing the sixth unit, the only one that was completely new, down to its most basic elements.

In this sense, it is advantageous that, as a matter of policy, a design disclosure review be conducted to insure that the designer's intent has been clearly put into effect and that the design prerequisites have been completely communicated to the people who make, test, and inspect the hardware. In addition, this evaluation should provide for the necessary design, manufacturing, procurement, and inspection corrective action. Design optimization should also consider parts application characteristics. If it is assumed that many of these parts originate outside the company, product assurance specialists should review the projected applications and, based on careful study and evaluation of their documentation and test results, determine whether or not the part will satisfactorily meet the requirements of the design.

In turn, these data should be used to establish the numerical reliability goals for the complete system and for each of its subsystems. During design evolution, as data on equipment reliability become available, a continuous reassessment of the initial allocation within each subsystem must take place. Trade-off analyses must be conducted, considering a balance between reliability and performance in which, say, weight, operability, safety, cost, and schedule need to be taken into account. Reapportionment of requirements may then result, to assure an adequate and reliable design.

Some of these reviews, particularly those of an interim nature conducted as the design develops, might conceivably be performed by means of electronic data processing. What we foresee as an automation of product assurance is, at least for the time being, the initial review, which will consider general factors such as adherence to specifications, reliability, safety, adequacy to the environmental specifications, and general capability. The computer can evaluate such details as fit, tolerances, assembly notes, and test instructions.
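The allocation of numerical reliability goals mentioned above can be illustrated with a simple apportionment scheme. The sketch below, in Python, spreads an over-all reliability goal equally across subsystems assumed to operate in series, and tightens the goal for the remaining subsystems as observed figures come in; the goal, the subsystem count, and the observed values are assumptions chosen only for the example.

    def equal_apportionment(system_goal, n_subsystems):
        """Per-subsystem goal when n subsystems in series must jointly meet
        the system goal: R_i = R_sys ** (1/n)."""
        return system_goal ** (1.0 / n_subsystems)

    def reapportion(system_goal, observed, n_total=6):
        """Goal the remaining subsystems must meet, given observed reliabilities
        for some of them, so the system target stays attainable."""
        product_observed = 1.0
        for r in observed:
            product_observed *= r
        remaining = n_total - len(observed)
        return (system_goal / product_observed) ** (1.0 / remaining)

    print(equal_apportionment(0.95, 6))        # about 0.9915 per subsystem
    print(reapportion(0.95, [0.99, 0.985]))    # tighter goal for the four remaining units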
A final design review will then be necessary to consider these evaluations and to insure that all requirements of the formal design checklist have been met.

Manufacturing Quality Inspection

The automation of this phase requires that the scope of acceptance inspection, necessary to insure that products conform to dimensional and process requirements, has been adequately defined. All comparisons which are to be carried out using the data collected by standard measuring instruments can be easily automated. This may be easier to visualize for a process industry, for instance, but there is no reason why other processes cannot offer equally fertile ground, provided that the proper analysis is made. The operation is, in fact, no different from the requirements for on-lineness, as can be seen in Fig. 4, which presents a block diagram for a soaking-pit-slabbing mill operation.
FIGURE 4. Block diagram for a soaking-pit-slabbing mill operation: slabs pass through quality acceptance and quality test stations; product quality indicators feed the computer, which supplies alarm and control action indicators to quality management.
Here we must admit that what is lacking most is experience in the field, and initiative. The important thing to realize is that, once a production test plan has been prepared, it can be computer processed. The machine can be efficiently used to define the acceptance testing that is necessary to demonstrate continuing conformance to company and customer requirements. For a complex manufacturing industry, this is accomplished by determining, in conjunction with design and test specialists, the test requirements necessary for production hardware. In other cases, a simpler setting of quality rules may suffice. In this way, computer-implemented acceptance tests will need to be designed to determine the acceptability of a product by establishing whether or not the product complies with functional requirements. Those products having demonstrated throughout the process a high degree of conformance to specification would be inspected on the basis of statistical sampling techniques. To assure that the process quality data are accurate and precise, rigidly controlled calibration programs would also need to be implemented. Inspection and test are worthwhile only if founded on a sound data collection system.
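Sampling inspection of the kind mentioned above can be made concrete with a single sampling plan. The sketch below computes the probability of accepting a lot under an assumed plan (sample size n, acceptance number c) for a given fraction defective; the plan parameters and lot quality figures are illustrative assumptions, not values from the text.

    from math import comb

    def probability_of_acceptance(n, c, fraction_defective):
        """P(accept) for a single sampling plan: accept the lot if at most c
        defectives are found in a random sample of n items (binomial model)."""
        p = fraction_defective
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    # Assumed plan: sample 50 items, accept the lot if 2 or fewer are defective.
    for quality in (0.01, 0.02, 0.05, 0.10):
        print(quality, round(probability_of_acceptance(50, 2, quality), 3))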
Field Use

To assure that the inherent product quality and reliability will be in constant evolution, field follow-up is absolutely necessary. This in turn means effective media for information feedback. Here, again, the computer can be used in a rational manner to perform "forward-looking" evaluations and diagnostics on failed hardware. Only thus can the actual primary cause of failure be determined, which in itself is an essential part of the corrective action feedback loop. When actual failure causes, as distinguished from apparent failure causes, are known, corrective action can be taken to prevent recurrence of the defect.

For information feedback to be effective, continuous pressure must be maintained to assure full coverage of failures, malfunctions, and replacements. This type of data collection is a basic necessity in the performance of failure analysis, as the failed components are often available for testing. With adequate failure data, the data processing system will be able to analyze the failure and to advise on the necessary corrective action. Statistical treatment of data on "early" or minor troubles can often reveal failure trends that are not otherwise apparent. Potentially serious quality problems can then be investigated and corrected before they become catastrophic. With this, any industrial field can effectively establish a closed-loop system for product assurance, for the prevention of failure recurrence, and for the timely spotting of incipient and potential troubles.
CASE STUDY IN A TIN PLATE PLANT

We will consider a case of the organizational aspects of quality assurance taken from the tin plate industry. Can companies are increasingly shifting to coil-form tin-plate orders. This switch induces tin-plate producers to install digital systems as quality analyzers, for recording and examining the dimensional elements of the finished product and keeping a complete quality history. Digital automation starts from the entry section, for loading and preparing the strip, goes through the processing section, for doing the line's actual job, and finishes with the delivery section, for finished product inspection and coil removal. Because the line is continuous, each coil entered into it is welded to the tail of the preceding coil, so that a continuous band of strip is in process from the entry uncoiler to the delivery and winding reels.

In a "typical" tin plate plant, at the ingoing end of the line, there is a provision for welding the start of one coil of steel strip to the tail end of the preceding one. The looping tower acts as a reservoir to supply the electrolytic tinning unit while the weld is made. As the strip emerges from the electrolytic tinning unit, it passes a number of automatic inspection devices, which detect pinholes and the weld, and measure coating thickness and total thickness. There is also a length-measuring instrument, arranged to emit a signal as each "unit length" of tin plate passes. With respect to the quality history, the majority of the defects are of a type that cannot yet be automatically detected; scratches, oil spots, arcing marks, dirty steel, laminations, unflowed tin, anode streaks, dragout stains, wood grain, and wavy edges can only be identified by visual inspection.

At the outgoing end there are at least two down-coilers, so that as the shear is operated a new coil can be started immediately. In the logging operation, the position of all defects must obviously be measured from the sheared end of the coil. Ideally, all detectors, automatic and human, should be situated at the shear blade; because this is not physically possible, a correction factor must be applied to each measurement in order to relate it to the common fiducial position of the shear. This calls for some simple computing facility, as sketched below.
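A minimal sketch of this correction in Python: a detector sits a known, fixed distance upstream of the shear, and the length-measuring instrument supplies a running footage count. All function and variable names, and the numbers used, are assumptions for illustration only.

    def defect_position_in_coil(footage_at_detection, detector_to_shear_ft, footage_at_coil_start):
        """Position of a defect measured from the sheared (leading) end of the coil.
        The defect reaches the shear when the footage counter has advanced by the
        detector-to-shear distance; subtracting the coil-start footage gives its
        position within the coil."""
        footage_at_shear = footage_at_detection + detector_to_shear_ft
        return footage_at_shear - footage_at_coil_start

    # Example: pinhole detector 85 ft upstream of the shear; defect seen at footage 12,400;
    # the present coil's leading end passed the shear at footage 11,950.
    print(defect_position_in_coil(12_400, 85, 11_950))    # 535 ft from the sheared end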
In an application along this line, the input system is designed to deal with three groups of variable data:

• Manual shift and coil information
• Automatic plant inputs
• Manual actuations and settings

The manual shift and coil information is channeled through an input console on which may be entered the date, the shift, ingoing and outgoing coil numbers, weights, width, gauge, and gauge tolerances, as well as the specified tin coating thicknesses for each side of the strip. There is also provision for setting a minimum acceptable figure for the proportion of prime material contained in any one coil. The automatic plant inputs include the pinhole and weld detectors, thickness gauges, and a counter to count the footage pulses, as well as a contact switch to signal the operation of the shear. Further, the specific application we are considering provides for manual actuations and settings made up of pushbutton switches operated by the human inspectors who examine the product for "visual" defects. A digital clock included in the system allows operations to be related to real time.

With respect to the throughput, each order must be carefully followed through the processing lines to be sure that the prescribed treatment is given to the coils within that order. The identity of each coil must also be carefully preserved for accounting and inventory reasons. In practice, this order tracking is reduced to tracking and identifying the welds joining coils. A computer control system can and must perform this operation in order to synchronize coil identity and process instructions with the actual material in process. The necessary input/throughput system includes an information machine, which stores coil data, and pickup elements along the line, that is, position-measuring transducers. At the instant a weld is made, the computer reads the loop transducers and adds this strip footage value to the known fixed strip distance between the welder and the shear. At the same time the coil data are read. With this, digital control has the identity and processing instructions for the coil following the weld, and the footage from the weld to the delivery shear. To complete the aforementioned pickup network, a footage pulse tachometer may need to be located at the delivery section. It transmits to the computer one pulse for each foot of strip that passes the delivery shear. These pulses are subtracted from the measured welder-to-shear length, so that the computer knows at all times the position of the weld with respect to the shear.

With respect to systems and concepts, this closely parallels the on-lineness of the steel industry which we reviewed in Chapter XXVI. But other definitions are still necessary. Thus far we have given enough information to describe the basic philosophy of a very simple ensemble. The computer, knowing and tracking the position of each weld and also scanning line-operating speed, can warn the operators of the approach of the weld on a timely basis; this is an on-line, open-loop operation. A warning light will be energized at the delivery desk, telling the operator that a weld is approaching. At a calculated time, depending upon the deceleration rate of the delivery section, the slowdown light will be turned "on," telling the operator to initiate slowdown, so that the weld is just before the shear when transfer
speed is reached. The final cut light will be turned "on" when the weld is at the shear. The digital computer can track through its own memory system the order data pertaining to each charged coil. A finished coil ticket can then be punched or printed at the instant each finished coil is sheared. Therefore, the identity and inventory data of each coil can be retained.

With respect to quality, one of the most important functions of digital control is, of course, that of alarm detection. Alarm detection is achieved by comparing the value of each point with preset digital numbers corresponding to the desired minimum and maximum values of the process variable. The limits are set up and stored in computer memory, providing the necessary actuation depending on the nature and criticality of an alarm point. Depending on the type of control that is desired, a variety of quality control elements can be instituted along the line to provide the computer with sound, accurate data for inferences, quality projections, and estimates (Fig. 5). The process
FIGURE 5. Quality control elements along the processing line, with inspection units feeding the computer routines.