Instrumentation Reference Book
Instrumentation Reference Book Fourth Edition
Edited by Walt Boyes
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Butterworth-Heinemann is an imprint of Elsevier
Butterworth-Heinemann is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2010 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
Instrumentation reference book / [edited by] Walt Boyes. —4th ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-7506-8308-1
1. Physical instruments—Handbooks, manuals, etc. 2. Engineering instruments—Handbooks, manuals, etc. I. Boyes, Walt. II. Title.
QC53.I574 2010
530'.7—dc22
2009029513

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: 978-0-7506-8308-1

For information on all Butterworth-Heinemann publications visit our Web site at www.elsevierdirect.com

Printed in the United States of America
09 10 11 12 13  10 9 8 7 6 5 4 3 2 1

Typeset by: diacriTech, India
Contents
Preface xvii
Contributors xix
Introduction xxi
Part I The Automation Knowledge Base 1. The Automation Practicum W. Boyes 1.1 Introduction 3 1.2 Job Descriptions 4 1.3 Careers and Career Paths 4 1.3.1 ISA Certified Automation Professional (CAP) Classification System 5 1.4 Where Automation Fits in the Extended Enterprise 13 1.5 Manufacturing Execution Systems and Manufacturing Operations Management 14 1.5.1 Introduction 14 1.5.2 Manufacturing Execution Systems (MES) and Manufacturing Operations Management (MOM) 15 1.5.3 The Connected Enterprise 15 Suggested Reading 18
2. Basic Principles of Industrial Automation W. Boyes 2.1 Introduction 2.2 Standards 2.3 Sensor and System Design, Installation, and Commissioning 2.3.1 The Basics 2.3.2 Identification of the Application 2.3.3 Selection of the Appropriate Sensor/Transmitter 2.3.4 Selection of the Final Control Element 2.3.5 Selection of the Controller and Control Methodology
19 19 20 20 20 20 20 20
2.3.6 Design of the Installation 2.3.7 Installing, Commissioning, and Calibrating the System 2.4 Maintenance and Operation 2.4.1 Introduction 2.4.2 Life-cycle Optimization 2.4.3 Reliability Engineering 2.4.4 Asset Management, Asset Optimization, and Plant Optimization Suggested Reading
20 21 21 21 21 21 21 21
3. Measurement Methods and Control Strategies W. Boyes 3.1 Introduction 3.2 Measurement and Field Calibration Methodology 3.3 Process Control Strategies 3.4 Advanced Control Strategies Suggested Reading
23 23 23 24 24
4. Simulation and Design Software M. Berutti 4.1 Introduction 4.2 Simulation 4.3 Best Practices for Simulation Systems in Automation 4.4 Ground-up Testing and Training 4.5 Simulation System Selection 4.6 Simulation for Automation in the Validated Industries 4.7 Conclusion
25 25 25 26 26 26 26
5. Security for Industrial Automation W. Boyes and J. Weiss 5.1 The Security Problem 5.2 An Analysis of the Security Needs of Industrial Automation 5.3 Some Recommendations for Industrial Automation Security
27 28 28
Part II Mechanical Measurements

6. Measurement of Flow
G. Fowles and W. H. Boyes
6.1 Introduction 6.2 Basic Principles of Flow Measurement 6.2.1 Streamlined and Turbulent Flow 6.2.2 Viscosity 6.2.3 Bernoulli’s Theorem 6.2.4 Practical Realization of Equations 6.2.5 Modification of Flow Equations to Apply to Gases 6.3 Fluid Flow in Closed Pipes 6.3.1 Differential-Pressure Devices 6.3.2 Rotating Mechanical Meters for Liquids 6.3.3 Rotating Mechanical Meters for Gases 6.3.4 Electronic Flowmeters 6.3.5 Mass Flowmeters 6.4 Flow in Open Channels 6.4.1 Head/Area Method 6.4.2 Velocity/Area Methods 6.4.3 Dilution Gauging 6.5 Point Velocity Measurement 6.5.1 Laser Doppler Anemometer 6.5.2 Hotwire Anemometer 6.5.3 Pitot Tube 6.5.4 Electromagnetic Velocity Probe 6.5.5 Insertion Turbine 6.5.6 Propeller-Type Current Meter 6.5.7 Insertion Vortex 6.5.8 Ultrasonic Doppler Velocity Probe 6.6 Flowmeter Calibration Methods 6.6.1 Flowmeter Calibration Methods for Liquids 6.6.2 Flowmeter Calibration Methods for Gases References Further Reading
31 31 31 32 33 34 35 36 36 43 48 51 58 60 60 63 64 64 64 64 64 64 65 65 66 66 66 66 66 66 67 68 68

7. Measurement of Viscosity
K. Walters and W. M. Jones
7.1 Introduction 7.2 Newtonian and Non-Newtonian Behavior 7.3 Measurement of the Shear Viscosity 7.3.1 Capillary Viscometer 7.3.2 Couette Viscometer 7.3.3 Cone-and-plate Viscometer 7.3.4 Parallel-plate Viscometer 7.4 Shop-Floor Viscometers 7.5 Measurement of the Extensional Viscosity 7.6 Measurement of Viscosity Under Extremes of Temperature and Pressure 7.7 Online Measurements
69 69 71 71 72 72 73 73 74 74 74
7.8 Accuracy and Range References Further Reading
74 75 75

8. Measurement of Length
P. H. Sydenham
8.1 Introduction 8.2 The Nature of Length 8.3 Derived Measurements 8.3.1 Derived from Length Measurement Alone 8.4 Standards and Calibration of Length 8.5 Practice of Length Measurement for Industrial Use 8.5.1 General Remarks 8.5.2 Mechanical Length-Measuring Equipment 8.5.3 Electronic Length Measurement 8.5.4 Use of Electromagnetic and Acoustic Radiation 8.5.5 Miscellaneous Methods 8.6 Automatic Gauging Systems References Further Reading
77 78 79 79 80 81 81 81 82 87 90 91 92 92
9. Measurement of Strain B. E. Noltingk 9.1 Strain 9.2 Bonded Resistance Strain Gauges 9.2.1 Wire Gauges 9.2.2 Foil Gauges 9.2.3 Semiconductor Gauges 9.2.4 Rosettes 9.2.5 Residual Stress Measurement 9.3 Gauge Characteristics 9.3.1 Range 9.3.2 Cross-sensitivity 9.3.3 Temperature Sensitivity 9.3.4 Response Times 9.4 Installation 9.5 Circuits for Strain Gauges 9.6 Vibrating Wire Strain Gauge 9.7 Capacitive Strain Gauges 9.8 Surveys of Whole Surfaces 9.8.1 Brittle Lacquer 9.8.2 Patterns on Surfaces 9.9 Photoelasticity References
93 93 94 94 94 95 95 95 95 96 96 96 96 98 98 99 99 99 99 100 101
10. Measurement of Level and Volume P. H. Sydenham and W. Boyes 10.1 Introduction 10.2 Practice of Level Measurement
103 103
10.2.1 Installation 10.2.2 Sources of Error 10.3 Calibration of Level-Measuring Systems 10.4 Methods Providing Full-Range Level Measurement 10.4.1 Sight Gauges 10.4.2 Float-driven Instruments 10.4.3 Capacitance Probes 10.4.4 Upthrust Buoyancy 10.4.5 Pressure Sensing 10.4.6 Microwave and Ultrasonic, Time-Transit Methods 10.4.7 Force or Position Balance 10.5 Methods Providing Short-Range Detection 10.5.1 Magnetic 10.5.2 Electrical Conductivity 10.5.3 Infrared 10.5.4 Radio Frequency 10.5.5 Miscellaneous Methods References
103 104 106 107 107 107 108 109 109 109 110 110 110 110 111 111 112 112
11. Vibration P. H. Sydenham 11.1 Introduction 11.1.1 Physical Considerations 11.1.2 Practical Problems of Installation 11.1.3 Areas of Application 11.2 Amplitude Calibration 11.2.1 Accelerometer Calibration 11.2.2 Shock Calibration 11.2.3 Force Calibration 11.3 Sensor Practice 11.3.1 Mass-Spring Seismic Sensors 11.3.2 Displacement Measurement 11.3.3 Velocity Measurement 11.3.4 Acceleration Measurement 11.3.5 Measurement of Shock 11.4 Literature References Further Reading
113 113 116 116 117 117 117 117 118 118 120 120 121 124 124 125 125
12. Measurement of Force C. S. Bahra and J. Paros 12.1 Basic Concepts 12.2 Force Measurement Methods 12.3 Lever-Balance Methods 12.3.1 Equal-lever Balance 12.3.2 Unequal-lever Balance 12.3.3 Compound lever Balance 12.4 Force-Balance Methods 12.5 Hydraulic Pressure Measurement 12.6 Acceleration Measurement 12.7 Elastic Elements 12.7.1 Spring Balances
127 127 127 127 128 128 128 129 129 129 129
12.7.2 Proving Rings 12.7.3 Piezoelectric Transducers 12.7.4 Strain-gauge Load Cells 12.8 Further Developments References
129 130 130 133 133
13. Measurement of Density E. H. Higham and W. Boyes 13.1 General 13.2 Measurement of Density Using Weight 13.3 Measurement of Density Using Buoyancy 13.4 Measurement of Density Using a Hydrostatic Head 13.4.1 General Differential Pressure Transmitter Methods 13.4.2 DP Transmitter with Overflow Tank 13.4.3 DP Transmitter with a Wet Leg 13.4.4 DP Transmitter with a Pressure Repeater 13.4.5 DP Transmitter with Flanged or Extended Diaphragm 13.4.6 DP Transmitter with Pressure Seals 13.4.7 DP Transmitter with Bubble Tubes 13.4.8 Other Process Considerations 13.5 Measurement of Density Using Radiation 13.6 Measurement of Density Using Resonant Elements 13.6.1 Liquid Density Measurement 13.6.2 Gas Density Measurements 13.6.3 Relative Density of Gases Further Reading
135 135 136 137 137 138 138 139 139 139 139 140 140 140 140 141 143 143
14. Measurement of Pressure E. H. Higham and J. M. Paros 14.1 What is Pressure? 14.2 Pressure Measurement 14.2.1 Pressure Measurements by Balancing a Column of Liquid of Known Density 14.2.2 Pressure Measurements by Allowing the Unknown Pressure to Act on a Known Area and Measuring the Resultant Force 14.2.3 Pressure Measurement by Allowing the Unknown Pressure to Act on a Flexible Member and Measuring the Resultant Motion 14.2.4 Pressure Measurement by Allowing the Unknown Pressure to Act on an Elastic Member and Measuring the Resultant Stress or Strain 14.3 Pressure transmitters 14.3.1 Pneumatic Motion-Balance Pressure Transmitters
145 145 145
147
149
155 158 159
14.3.2 Pneumatic Force-Balance Pressure Transmitters 14.3.3 Force-Measuring Pressure Transmitters 14.3.4 Digital Pressure Transducers References Further Reading
159 160 162 163 163
15. Measurement of Vacuum D. J. Pacey 15.1 Introduction 15.1.1 Systems of Measurement 15.1.2 Methods of Measurement 15.1.3 Choice of Nonabsolute Gauges 15.1.4 Accuracy of Measurement 15.2 Absolute Gauges 15.2.1 Mechanical Gauges 15.2.2 Liquid Manometers 15.2.3 The McLeod Gauge (1878) 15.3 Nonabsolute Gauges 15.3.1 Thermal Conductivity Gauges 15.3.2 Ionization Gauges References
165 165 165 166 166 166 166 167 167 169 169 170 173
16. Particle Sizing
W. L. Snowsill
16.1 Introduction 16.2 Characterization of Particles 16.2.1 Statistical Mean Diameters 16.3 Terminal Velocity 16.4 Optical Effects Caused by Particles 16.5 Particle Shape 16.6 Methods for Characterizing a Group of Particles 16.6.1 Gaussian or Normal Distributions 16.6.2 Log-Normal Distributions 16.6.3 Rosin–Rammler Distributions 16.7 Analysis Methods that Measure Size Directly 16.7.1 Sieving 16.7.2 Microscope Counting 16.7.3 Direct Optical Methods 16.8 Analysis Methods that Measure Terminal Velocity 16.8.1 Sedimentation 16.8.2 Elutriation 16.8.3 Impaction 16.9 Analysis Methods that Infer Size from Some Other Property 16.9.1 Coulter Counter 16.9.2 Hiac Automatic Particle Sizer 16.9.3 Climet 16.9.4 Adsorption Methods References Further Reading
175 175 176 176 177 177 178 178 179 180 180 180 181 183 183 183 187 188 188 188 188 189 189 189 189

17. Fiber Optics in Sensor Instrumentation
B. T. Meggitt
17.1 Introduction 17.2 Principles of Optical Fiber Sensing 17.2.1 Sensor Classification 17.2.2 Modulation Parameters 17.2.3 Performance Criteria 17.3 Interferometric Sensing Approach 17.3.1 Heterodyne Interferometry 17.3.2 Pseudoheterodyne Interferometry 17.3.3 White-Light Interferometry 17.3.4 Central Fringe Identification 17.4 Doppler Anemometry 17.4.1 Introduction 17.4.2 Particle Size 17.4.3 Fluid Flow 17.4.4 Vibration Monitoring 17.5 In-Fiber Sensing Structures 17.5.1 Introduction 17.5.2 Fiber Fabry–Perot Sensing Element 17.5.3 Fiber Bragg Grating Sensing Element References
191 192 192 192 193 193 194 194 195 201 202 202 203 204 206 210 210 210 212 215

18. Nanotechnology for Sensors
W. Boyes
18.1 Introduction 18.2 What is Nanotechnology? 18.3 Nanotechnology for Pressure Transmitters 18.4 Microelectromechanical Systems (MEMS) 18.5 MEMS Sensors Today
217 217 217 217 218
19. Microprocessor-Based and Intelligent Transmitters E. H. Higham and J. Berge 19.1 Introduction 19.2 Terminology 19.3 Background Information 19.4 Attributes and Features of Microprocessor-Based and Intelligent Transmitters 19.4.1 Microprocessor-Based Features 19.4.2 Intelligent Features 19.5 Microprocessor-Based and Intelligent Temperature Transmitters 19.6 Microprocessor-Based and Intelligent Pressure and Differential Transmitters 19.7 Microprocessor-Based and Intelligent Flowmeters 19.7.1 Coriolis Mass Flowmeters 19.7.2 Electromagnetic Flowmeters 19.7.3 Vortex Flowmeters
219 220 221 222 222 223 224 226 229 229 233 234
19.8 Other Microprocessor-Based and Intelligent Transmitters 19.8.1 Density Transmitters 19.8.2 Microprocessor-Based and Intelligent Liquid Level Measurement Systems 19.9 Other Microprocessor-Based and Intelligent Measurement Systems 19.10 Fieldbus 19.10.1 Background 19.10.2 Introduction to the Concept of a Fieldbus 19.10.3 Current Digital Multiplexing Technology 19.10.4 The HART Protocol 19.11 User Experience with Microprocessor-Based and Intelligent Transmitters 19.12 Fieldbus Function and Benefits 19.12.1 Foundation Fieldbus and Profibus-PA 19.12.2 Field-Mounted Control 19.12.3 Future of Analog Instruments 19.12.4 Sensor Validation 19.12.5 Plant Diagnostics 19.12.6 Handheld Interfaces (Handheld Terminals or Handheld Communicators) 19.12.7 Measuring Directives 19.12.8 Further Developments of Intelligent Transmitters 19.12.9 Integration of Intelligent Transmitters into Instrument Management Systems References
236 236 239 240 241 241 241 241 243 246 247 247 248 249 249 249 249 250 250 250 251
20. Industrial Wireless Technology and Planning D. R. Kaufman 20.1 Introduction 20.2 The History of Wireless 20.3 The Basics 20.3.1 Radio Frequency Signals 20.3.2 Radio Bands 20.3.3 Radio Noise 20.3.4 Radio Signal-to-Noise Ratio (SNR) 20.3.5 Wireless Reliability 20.3.6 Fixed Frequencies 20.3.7 Spread Spectrum 20.3.8 Security 20.3.9 Antennas 20.3.10 Antenna Connection 20.3.11 Commissioning 20.3.12 Mesh Technologies 20.3.13 System Management
253 253 254 254 254 255 255 256 256 256 257 258 260 261 262 262
20.3.14 System Interfaces 20.3.15 Standards and Specifications 20.4 Planning for Wireless 20.4.1 Imagine the Possibilities 20.4.2 Getting Ready for Wireless References
262 263 263 264 264 265
Part III Measurement of Temperature and Chemical Composition 21. Temperature Measurement C. Hagart-Alexander 21.1 Temperature and Heat 21.1.1 Application Considerations 21.1.2 Definitions 21.1.3 Radiation 21.2 Temperature Scales 21.2.1 Celsius Temperature Scale 21.2.2 Kelvin, Absolute, or Thermodynamic Temperature Scale 21.2.3 International Practical Temperature Scale of 1968 (IPTS-68) 21.2.4 Fahrenheit and Rankine Scales 21.2.5 Realization of Temperature Measurement 21.3 Measurement Techniques: Direct Effects 21.3.1 Liquid-in-Glass Thermometers 21.3.2 Liquid-Filled Dial Thermometers 21.3.3 Gas-Filled Instruments 21.3.4 Vapor Pressure Thermometers 21.3.5 Solid Expansion 21.4 Measurement Techniques: Electrical 21.4.1 Resistance Thermometers 21.4.2 Thermistors 21.4.3 Semiconductor Temperature Measurement 21.5 Measurement Techniques: Thermocouples 21.5.1 Thermoelectric Effects 21.5.2 Thermocouple Materials 21.5.3 Thermocouple Construction 21.6 Measurement Techniques: Radiation Thermometers 21.6.1 Introduction 21.6.2 Radiation Thermometer Types 21.7 Temperature Measurement Considerations 21.7.1 Readout 21.7.2 Sensor Location Considerations 21.7.3 Miscellaneous Measurement Techniques References Further Reading
269 269 269 271 272 272 272 273 273 274 274 274 278 281 282 285 286 286 290 291 293 293 299 301 306 306 307 319 319 320 324 326 326
22. Chemical Analysis: Introduction W. G. Cummings; edited by I. Verhappen 22.1 Introduction to Chemical Analysis 22.2 Chromatography 22.2.1 General Chromatography 22.2.2 Paper Chromatography and Thin-Layer Chromatography 22.3 Polarography and Anodic Stripping Voltammetry 22.3.1 Polarography 22.3.2 Anodic Stripping Voltammetry 22.4 Thermal Analysis Further Reading
327 328 328 328 331 331 334 335 339
23. Chemical Analysis: Spectroscopy A. C. Smith; edited by I. Verhappen 23.1 Introduction 23.2 Absorption and Reflection Techniques 23.2.1 Infrared 23.2.2 Absorption in UV, Visible, and IR 23.2.3 Absorption in the Visible and Ultraviolet 23.2.4 Measurements Based on Reflected Radiation 23.2.5 Chemiluminescence 23.3 Atomic Techniques: Emission, Absorption, and Fluorescence 23.3.1 Atomic Emission Spectroscopy 23.3.2 Atomic Absorption Spectroscopy 23.3.3 Atomic Fluorescence Spectroscopy 23.4 X-Ray Spectroscopy 23.4.1 X-ray Fluorescence Spectroscopy 23.4.2 X-ray Diffraction 23.5 Photo-Acoustic Spectroscopy 23.6 Microwave Spectroscopy 23.6.1 Electron Paramagnetic Resonance (EPR) 23.6.2 Nuclear Magnetic Resonance Spectroscopy 23.7 Neutron Activation 23.8 Mass Spectrometers 23.8.1 Principle of the Classical Instrument 23.8.2 Inlet Systems 23.8.3 Ion Sources 23.8.4 Separation of the Ions 23.8.5 Other Methods of Separation of Ions References Further Reading
341 341 341 346 348 348 349 349 349 351 352 353 353 355 355 355 356 357 357 357 358 359 359 359 361 362 362
24. Chemical Analysis: Electrochemical Techniques W. G. Cummings and K. Torrance; edited by I. Verhappen 24.1 Acids and Alkalis
363
24.2 Ionization of Water 24.3 Electrical Conductivity 24.3.1 Electrical Conduction in Liquids 24.3.2 Conductivity of Solutions 24.3.3 Practical Measurement of Electrical Conductivity 24.3.4 Applications of Conductivity Measurement 24.4 The Concept of pH 24.4.1 General Theory 24.4.2 Practical Specification of a pH Scale 24.4.3 pH Standards 24.4.4 Neutralization 24.4.5 Hydrolysis 24.4.6 Common Ion Effect 24.4.7 Buffer Solutions 24.5 Electrode Potentials 24.5.1 General Theory 24.5.2 Variation of Electrode Potential with Ion Activity (The Nernst Equation) 24.6 Ion-Selective Electrodes 24.6.1 Glass Electrodes 24.6.2 Solid-State Electrodes 24.6.3 Heterogeneous Membrane Electrodes 24.6.4 Liquid Ion Exchange Electrodes 24.6.5 Gas-Sensing Membrane Electrodes 24.6.6 Redox Electrodes 24.7 Potentiometry and Specific Ion Measurement 24.7.1 Reference Electrodes 24.7.2 Measurement of pH 24.7.3 Measurement of Redox Potential 24.7.4 Determination of Ions by Ion-Selective Electrodes 24.8 Common Electrochemical Analyzers 24.8.1 Residual Chlorine Analyzer 24.8.2 Polarographic Process Oxygen Analyzer 24.8.3 High-temperature Ceramic Sensor Oxygen Probes 24.8.4 Fuel Cell Oxygen-measuring Instruments 24.8.5 Hersch Cell for Oxygen Measurement 24.8.6 Sensor for Oxygen Dissolved in Water 24.8.7 Coulometric Measurement of Moisture in Gases and Liquids Further Reading
364 364 364 365 365 372 375 375 376 376 376 376 378 378 378 378 380 380 381 381 381 381 381 382 382 382 384 390 390 393 393 395 396 397 397 397 399 399
25. Chemical Analysis: Gas Analysis C. K. Laird; edited by I. Verhappen 25.1 Introduction 25.2 Separation of Gaseous Mixtures 25.2.1 Gas Chromatography 25.3 Detectors 25.3.1 Thermal Conductivity Detector (TCD) 25.3.2 Flame Ionization Detector (FID) 25.3.3 Photo-Ionization Detector (PID)
401 402 402 404 404 406 407
25.3.4 Helium Ionization Detector 25.3.5 Electron Capture Detector 25.3.6 Flame Photometric Detector (FPD) 25.3.7 Ultrasonic Detector 25.3.8 Catalytic Detector (Pellistor) 25.3.9 Semiconductor Detector 25.3.10 Properties and Applications of Gas Detectors 25.4 Process Chromatography 25.4.1 Sampling System 25.4.2 Carrier Gas 25.4.3 Chromatographic Column 25.4.4 Controlled Temperature Enclosures 25.4.5 Detectors 25.4.6 Programmers 25.4.7 Data-Processing Systems 25.4.8 Operation of a Typical Process Chromatograph 25.5 Special Gas Analyzers 25.5.1 Paramagnetic Oxygen Analyzers 25.5.2 Ozone Analyzer 25.5.3 Oxides of Nitrogen Analyzer 25.5.4 Summary of Special Gas Analyzers 25.6 Calibration of Gas Analyzers 25.6.1 Static Methods 25.6.2 Dynamic Methods Further Reading
408 409 409 410 411 411 412 412 414 416 417 417 417 418 418 419 421 421 424 425 426 426 427 427 428
26. Chemical Analysis: Moisture Measurement
D. B. Meadowcroft; edited by I. Verhappen 26.1 Introduction 26.2 Definitions 26.2.1 Gases 26.2.2 Liquids and Solids 26.3 Measurement Techniques 26.3.1 Gases 26.3.2 Liquids 26.3.3 Solids 26.4 Calibration 26.4.1 Gases 26.4.2 Liquids 26.4.3 Solids References
429 429 429 430 431 431 433 434 435 435 436 436 436
Part IV Electrical and Radiation Measurements

27. Electrical Measurements
M. L. Sanderson
27.1 Units and Standards of Electrical Measurement 27.1.1 SI Electrical Units
439 439
27.1.2 Realization of the SI Base Unit 27.1.3 National Primary Standards 27.2 Measurement of DC and AC Current and Voltage Using Indicating Instruments 27.2.1 Permanent Magnet-Moving Coil Instruments 27.2.2 Moving-Iron Instruments 27.2.3 AC Range Extension Using Current and Voltage Transformers 27.2.4 Dynamometer Instruments 27.2.5 Thermocouple Instruments 27.2.6 Electrostatic Instruments 27.3 Digital Voltmeters and Digital Multimeters 27.3.1 Analog-to-Digital Conversion Techniques 27.3.2 Elements in DVMs and DMMs 27.3.3 DVM and DMM Specifications 27.4 Power Measurement 27.4.1 The Three-Voltmeter Method of Power Measurement 27.4.2 Direct-Indicating Analog Wattmeters 27.4.3 Connection of Wattmeters 27.4.4 Three-Phase Power Measurement 27.4.5 Electronic Wattmeters 27.4.6 High-Frequency Power Measurement 27.5 Measurement of Electrical Energy 27.6 Power-Factor Measurement 27.7 The Measurement of Resistance, Capacitance, and Inductance 27.7.1 DC Bridge Measurements 27.7.2 AC Equivalent Circuits of Resistors, Capacitors, and Inductors 27.7.3 Four-Arm AC Bridge Measurements 27.7.4 Transformer Ratio Bridges 27.7.5 High-Frequency Impedance Measurement 27.8 Digital Frequency and Period/Time-Interval Measurement 27.8.1 Frequency Counters and Universal Timer/Counters 27.8.2 Time-Interval Averaging 27.8.3 Microwave-Frequency Measurement 27.9 Frequency and Phase Measurement Using an Oscilloscope References Further Reading
439 440 444 445 448 452 454 454 455 455 456 460 464 465 465 465 467 468 469 470 472 473 474 474 477 478 482 487 489 489 492 495 497 497 498
28. Optical Measurements A. W. S. Tarrant 28.1 Introduction 28.2 Light Sources 28.2.1 Incandescent Lamps 28.2.2 Discharge Lamps 28.2.3 Electronic Sources: Light-emitting Diodes 28.2.4 Lasers
499 499 500 500 501 501
28.3 Detectors 28.3.1 Photomultipliers 28.3.2 Photovoltaic and Photoconductive Detectors (Photodiodes) 28.3.3 Pyroelectric Detectors 28.3.4 Array Detectors 28.4 Detector Techniques 28.4.1 Detector Circuit Time Constants 28.4.2 Detector Cooling 28.4.3 Beam Chopping and Phase-Sensitive Detection 28.4.4 The Boxcar Detector 28.4.5 Photon Counting 28.5 Intensity Measurement 28.5.1 Photometers 28.5.2 Ultraviolet Intensity Measurements 28.5.3 Color-Temperature Meters 28.6 Wavelength and Color 28.6.1 Spectrophotometers 28.6.2 Spectroradiometers 28.6.3 The Measurement of Color 28.7 Measurement of Optical Properties 28.7.1 Refractometers 28.7.2 Polarimeters 28.8 Thermal Imaging Techniques References
502 502 503 504 505 506 506 506 507 507 508 508 509 509 510 510 510 512 512 514 514 516 518 519
29. Nuclear Instrumentation Technology D. Aliaga Kelly and W. Boyes 29.1 Introduction 29.1.1 Statistics of Counting 29.1.2 Classification of Detectors 29.1.3 Health and Safety 29.2 Detectors 29.2.1 Gas Detectors 29.2.2 Scintillation Detectors 29.2.3 Solid-state Detectors 29.2.4 Detector Applications 29.3 Electronics 29.3.1 Electronics Assemblies 29.3.2 Power Supplies 29.3.3 Amplifiers 29.3.4 Sealers 29.3.5 Pulse-Height Analyzers 29.3.6 Special Electronic Units References Further Reading
521 521 524 525 526 526 528 532 533 541 541 542 543 543 543 544 547 547
30. Measurements Employing Nuclear Techniques D. Aliaga Kelly and W. Boyes 30.1 Introduction 30.1.1 Radioactive Measurement Relations
549 550
30.1.2 Optimum Time of Measurement 30.1.3 Accuracy/Precision of Measurements 30.1.4 Measurements on Fluids in Containers 30.2 Materials Analysis 30.2.1 Activation Analysis 30.2.2 X-ray Fluorescence Analysis 30.2.3 Moisture Measurement: By Neutrons 30.2.4 Measurement of Sulfur Contents of Liquid Hydrocarbons 30.2.5 The Radioisotope Calcium Monitor 30.2.6 Wear and Abrasion 30.2.7 Leak Detection 30.3 Mechanical Measurements 30.3.1 Level Measurement 30.3.2 Measurement of Flow 30.3.3 Mass and Thickness 30.4 Miscellaneous Measurements 30.4.1 Field-survey Instruments 30.4.2 Dating of Archaeological or Geological Specimens 30.4.3 Static Elimination References
551 551 551 552 552 553 555 557 558 559 559 559 559 560 561 563 563 563 565 565
31. Non-Destructive Testing Scottish School of Non-Destructive Testing 31.1 Introduction 31.2 Visual Examination 31.3 Surface-Inspection Methods 31.3.1 Visual Techniques 31.3.2 Magnetic Flux Methods 31.3.3 Potential Drop Techniques 31.3.4 Eddy-Current Testing 31.4 Ultrasonics 31.4.1 General Principles of Ultrasonics 31.4.2 The Ultrasonic Test Equipment Controls and Visual Presentation 31.4.3 Probe Construction 31.4.4 Ultrasonic Spectroscopy Techniques 31.4.5 Applications of Ultrasonic Spectroscopy 31.4.6 Other Ways of Presenting Information from Ultrasonics 31.4.7 Automated Ultrasonic Testing 31.4.8 Acoustic Emission 31.5 Radiography 31.5.1 Gamma Rays 31.5.2 X-rays 31.5.3 Sensitivity and IQI 31.5.4 Xerography 31.5.5 Fluoroscopic and Image-Intensification Methods 31.6 Underwater Non-Destructive Testing 31.6.1 Diver Operations and Communication 31.6.2 Visual Examination 31.6.3 Photography 31.6.4 Magnetic Particle Inspection (MPI)
567 568 568 568 569 570 570 571 571 574 576 577 578 579 580 580 580 581 582 582 585 585 586 587 587 587 588
31.6.5 Ultrasonics 31.6.6 Corrosion Protection 31.6.7 Other Non-Destructive Testing Techniques 31.7 Developments 31.8 Certification of Personnel References Further Reading
588 588 589 590 590 591 592
32. Noise Measurement J. Kuehn 32.1 Sound and Sound Fields 32.1.1 The Nature of Sound 32.1.2 Quantities Characterizing a Sound Source or Sound Field 32.1.3 Velocity of Propagation of Sound Waves 32.1.4 Selecting the Quantities of Interest 32.2 Instrumentation for the Measurement of Sound-Pressure Level 32.2.1 Microphones Appendix 32.1 32.2.2 Frequency Weighting Networks and Filters 32.2.3 Sound-Level Meters 32.2.4 Noise-Exposure Meters/Noise-Dose Meters 32.2.5 Acoustic Calibrators 32.3 Frequency Analyzers 32.3.1 Octave Band Analyzers 32.3.2 Third-Octave Analyzers 32.3.3 Narrow-Band Analyzers 32.3.4 Fast Fourier Transform Analyzers 32.4 Recorders 32.4.1 Level Recorders 32.4.2 XY Plotters 32.4.3 Digital Transient Recorders 32.4.4 Tape Recorders 32.5 Sound-Intensity Analyzers 32.6 Calibration of Measuring Instruments 32.6.1 Formal Calibration 32.6.2 Field Calibration 32.6.3 System Calibration 32.6.4 Field-System Calibration 32.7 The Measurement of Sound-Pressure Level and Sound Level 32.7.1 Time Averaging 32.7.2 Long Time Averaging 32.7.3 Statistical Distribution and Percentiles 32.7.4 Space Averaging 32.7.5 Determination of Sound Power 32.7.6 Measurement of Sound Power by Means of Sound Intensity 32.8 Effect of Environmental Conditions on Measurements 32.8.1 Temperature
593 593 594 594 595 596 596 597 600 601 603 603 604 604 605 606 607 607 607 607 607 608 608 609 609 609 609 609 609 610 611 611 611 611 612 613 613
32.8.2 Humidity and Rain 32.8.3 Wind 32.8.4 Other Noises References Further Reading
613 613 613 614 614
Part V Controllers, Actuators, and Final Control Elements 33. Field Controllers, Hardware and Software W. Boyes 33.1 Introduction 33.2 Field Controllers, Hardware, and Software
617 617
34. Advanced Control for the Plant Floor Dr. James R. Ford, P. E. 34.1 Introduction 34.2 Early Developments 34.3 The Need for Process Control 34.4 Unmeasured Disturbances 34.5 Automatic Control Valves 34.6 Types of Feedback Control 34.7 Measured Disturbances 34.8 The Need for Models 34.9 The Emergence of MPC 34.10 MPC vs. ARC 34.11 Hierarchy 34.12 Other Problems with MPC 34.13 Where We Are Today 34.14 Recommendations for Using MPC 34.15 What’s in Store for the Next 40 Years?
619 619 619 620 620 621 621 623 623 623 624 625 626 626 627
35. Batch Process Control W. H. Boyes 35.1 Introduction Further Reading
629 630
36. Applying Control Valves
B. G. Liptak; edited by W. H. Boyes
36.1 Introduction 36.2 Valve Types and Characteristics 36.3 Distortion of Valve Characteristics 36.4 Rangeability 36.5 Loop Tuning 36.6 Positioning Positioners 36.7 Smarter Smart Valves 36.8 Valves Serve as Flowmeters Further Reading
631 631 633 634 634 635 635 635 636
Part VI Automation and Control Systems

37. Design and Construction of Instruments
C. I. Daykin and W. H. Boyes
37.1 Introduction 37.2 Instrument Design 37.2.1 The Designer’s Viewpoint 37.2.2 Marketing 37.2.3 Special Instruments 37.3 Elements of Construction 37.3.1 Electronic Components and Printed Circuits 37.3.2 Surface-Mounted Assemblies 37.3.3 Interconnections 37.3.4 Materials 37.3.5 Mechanical Manufacturing Processes 37.3.6 Functional Components 37.4 Construction of Electronic Instruments 37.4.1 Site Mounting 37.4.2 Panel Mounting 37.4.3 Bench-Mounting Instruments 37.4.4 Rack-Mounting Instruments 37.4.5 Portable Instruments 37.4.6 Encapsulation 37.5 Mechanical Instruments 37.5.1 Kinematic Design 37.5.2 Proximity Transducer 37.5.3 Load Cell 37.5.4 Combined Actuator Transducer References
639 639 639 640 640 640 640 642 642 643 644 646 647 647 647 647 649 649 650 650 650 651 651 652 653

38. Instrument Installation and Commissioning
A. Danielsson
38.1 Introduction 38.2 General Requirements 38.3 Storage and Protection 38.4 Mounting and Accessibility 38.5 Piping Systems 38.5.1 Air Supplies 38.5.2 Pneumatic Signals 38.5.3 Impulse Lines 38.6 Cabling 38.6.1 General Requirements 38.6.2 Cable Types 38.6.3 Cable Segregation 38.7 Grounding 38.7.1 General Requirements 38.8 Testing and Pre-Commissioning 38.8.1 General 38.8.2 Pre-Installation Testing 38.8.3 Piping and Cable Testing 38.8.4 Loop Testing 38.9 Plant Commissioning References
655 655 655 655 656 656 656 656 657 657 658 658 658 658 658 658 658 659 659 660 660

39. Sampling
J. G. Giles
39.1 Introduction 39.1.1 Importance of Sampling 39.1.2 Representative Sample 39.1.3 Parts of Analysis Equipment 39.1.4 Time Lags 39.1.5 Construction Materials 39.2 Sample System Components 39.2.1 Probes 39.2.2 Filters 39.2.3 Coalescers 39.2.4 Coolers 39.2.5 Pumps, Gas 39.2.6 Pumps, Liquid 39.2.7 Flow Measurement and Indication 39.2.8 Pressure Reduction and Vaporization 39.2.9 Sample Lines, Tube and Pipe Fitting 39.3 Typical Sample Systems 39.3.1 Gases 39.3.2 Liquids References
661 661 661 662 662 663 664 664 665 666 666 666 668 669 670 670 672 672 674 676
40. Telemetry M. L. Sanderson 40.1 Introduction 40.2 Communication Channels 40.2.1 Transmission Lines 40.2.2 Radio Frequency Transmission 40.2.3 Fiber-Optic Communication 40.3 Signal Multiplexing 40.4 Pulse Encoding 40.5 Carrier Wave Modulation 40.6 Error Detection and Correction Codes 40.7 Direct Analog Signal Transmission 40.8 Frequency Transmission 40.9 Digital Signal Transmission 40.9.1 Modems 40.9.2 Data Transmission and Interfacing Standards References Further Reading
677 679 679 681 681 684 685 687 688 689 690 690 692 693 697 697
41. Display and Recording
M. L. Sanderson
41.1 Introduction 41.2 Indicating Devices 41.3 Light-Emitting Diodes (LEDs) 41.4 Liquid Crystal Displays (LCDs)
699 699 700 702
41.5 Plasma Displays 41.6 Cathode Ray Tubes (CRTs) 41.6.1 Color Displays 41.6.2 Oscilloscopes 41.6.3 Storage Oscilloscopes 41.6.4 Sampling Oscilloscopes 41.6.5 Digitizing Oscilloscopes 41.6.6 Visual Display Units (VDUs) 41.6.7 Graphical Displays 41.7 Graphical Recorders 41.7.1 Strip Chart Recorders 41.7.2 Circular Chart Recorders 41.7.3 Galvanometer Recorders 41.7.4 x–y Recorders 41.8 Magnetic Recording 41.9 Transient/Waveform Recorders 41.10 Data Loggers References
703 704 705 706 707 708 708 709 709 709 710 711 711 712 712 713 713 714
42. Pneumatic Instrumentation E. H. Higham; edited by W. L. Mostia Jr., PE 42.1 Basic Characteristics 42.2 Pneumatic Measurement and Control Systems 42.3 Principal Measurements 42.3.1 Introduction 42.3.2 Temperature 42.3.3 Pressure Measurement 42.3.4 Level Measurements 42.3.5 Buoyancy Measurements 42.3.6 Target Flow Transmitter 42.3.7 Speed 42.4 Pneumatic Transmission 42.5 Pneumatic Controllers 42.5.1 Motion-Balance Controllers 42.5.2 Force-Balance Controllers 42.6 Signal Conditioning 42.6.1 Integrators 42.6.2 Analog Square Root Extractor 42.6.3 Pneumatic Summing Unit and Dynamic Compensator 42.6.4 Pneumatic-to-Current Converters 42.7 Electropneumatic Interface 42.7.1 Diaphragm Motor Actuators 42.7.2 Pneumatic Valve Positioner 42.7.3 Electropneumatic Converters 42.7.4 Electropneumatic Positioners References
715 716 717 717 717 718 721 721 721 722 722 723 723 725 729 729 729
729 730 732 732 733 734 735 735
737 737
43. Reliability in Instrumentation and Control J. Cluley 43.1 Reliability Principles and Terminology 43.1.1 Definition of Reliability
43.1.2 Reliability and MTBF 43.1.3 The Exponential Failure Law 43.1.4 Availability 43.1.5 Choosing Optimum Reliability 43.1.6 Compound Systems 43.2 Reliability Assessment 43.2.1 Component Failure Rates 43.2.2 Variation of Failure Rate with Time 43.2.3 Failure Modes 43.2.4 The Effect of Temperature on Failure Rates 43.2.5 Estimating Component Temperature 43.2.6 The Effect of Operating Voltage on Failure Rates 43.2.7 Accelerated Life Tests 43.2.8 Component Screening 43.2.9 Confidence Limits and Confidence Level 43.2.10 Assembly Screening 43.2.11 Dealing with the Wear-out Phase 43.2.12 Estimating System Failure Rate 43.2.13 Parallel Systems 43.2.14 Environmental Testing 43.3 System Design 43.3.1 Signal Coding 43.3.2 Digitally Coded Systems 43.3.3 Performance Margins in System Design 43.3.4 Coping with Tolerance 43.3.5 Component Tolerances 43.3.6 Temperature Effects 43.3.7 Design Automation 43.3.8 Built-in Test Equipment 43.3.9 Sneak Circuits 43.4 Building High-Reliability Systems 43.4.1 Reliability Budgets 43.4.2 Component Selection 43.4.3 The Use of Redundancy 43.4.4 Redundancy with Majority Voting 43.4.5 The Level of Redundancy 43.4.6 Analog Redundancy 43.4.7 Common Mode Faults 43.5 The Human Operator in Control and Instrumentation 43.5.1 The Scope for Automation 43.5.2 Features of the Human Operator 43.5.3 User-Friendly Design 43.5.4 Visual Displays 43.5.5 Safety Procedures 43.6 Safety Monitoring 43.6.1 Types of Failure 43.6.2 Designing Fail-Safe Systems 43.6.3 Relay Tripping Circuits 43.6.4 Mechanical Fail-Safe Devices 43.6.5 Control System Faults 43.6.6 Circuit Fault Analysis 43.7 Software Reliability 43.7.1 Comparison with Hardware Reliability
737 738 739 739 740 742 742 742 743 743 744 745 745 746 746 746 747 747 748 748 749 749 750 750 751 751 752 753 754 754 755 755 755 756 757 758 758 759 760 760 760 762 764 764 765 765 765 766 766 767 767 768 768
43.7.2 The Distinction between Faults and Failures 43.7.3 Typical Failure Intensities 43.7.4 High-Reliability Software 43.7.5 Estimating the Number of Faults 43.7.6 Structured Programming 43.7.7 Failure-Tolerant Systems 43.8 Electronic and Avionic Systems 43.8.1 Radio Transmitters 43.8.2 Satellite Links 43.8.3 Aircraft Control Systems 43.8.4 Railway Signaling and Control 43.8.5 Robotic Systems 43.9 Nuclear Reactor Control Systems 43.9.1 Requirements for Reactor Control 43.9.2 Principles of Reactor Control 43.9.3 Types of Failure 43.9.4 Common Mode Faults 43.9.5 Reactor Protection Logic 43.10 Process and Plant Control 43.10.1 Additional Hazards in Chemical Plants 43.10.2 Hazardous Areas 43.10.3 Risks to Life 43.10.4 The Oil Industry 43.10.5 Reliability of Oil Supply 43.10.6 Electrostatic Hazards 43.10.7 The Use of Redundancy References British Standards British Standard Codes of Practice European and Harmonized Standards
769 769 769 769 770 771 771 771 772 772 774 775 776 776 776 779 779 781 782 782 782 783 783 784 785 786 786 787 787 787
44. Safety L. C. Towle 44.1 Introduction 44.2 Electrocution Risk 44.2.1 Earthing (Grounding) and Bonding 44.3 Flammable Atmospheres 44.4 Other Safety Aspects 44.5 Conclusion References Further Reading
789 790 791 791 795 796 796 796
45. EMC T. Williams 45.1 Introduction 45.1.1 Compatibility between Systems 45.1.2 The Scope of EMC 45.2 Interference Coupling Mechanisms 45.2.1 Source and Victim 45.2.2 Emissions 45.2.3 Susceptibility
797 797 798 801 801 805 808
45.3 Circuits, Layout, and Grounding 45.3.1 Layout and Grounding 45.3.2 Digital and Analog Circuit Design 45.4 Interfaces, Filtering, and Shielding 45.4.1 Cables and Connectors 45.4.2 Filtering 45.4.3 Shielding 45.5 The Regulatory Framework 45.5.1 Customer Requirements 45.5.2 The EMC Directive 45.5.3 Standards Relating to the EMC Directive References Further Reading
814 815 824 841 841 848 858 865 865 865 869 870 871
Appendices A. General Instrumentation Books B. Professional Societies and Associations C. The Institute of Measurement and Control Role and Objectives History Qualifications Chartered Status for Individuals Incorporated Engineers and Engineering Technicians Membership Corporate Members Honorary Fellow Fellows Members Noncorporate Members Companions Graduates Licentiates Associates Students Affiliates Subscribers Application for Membership National and International Technical Events Local Sections Publications Advice and Information Awards and Prizes Government and Administration D. International Society of Automation, Formerly Instrument Society of America Training Standards and Practices Publications
887 887 887 888
Index 889
873 879 883 883 883 884 884 884 884 884 884 884 884 885 885 885 885 885 885 885 885 885 885 885 885 886 886 886
Preface
Preface to the Fourth Edition

In this fourth edition of the Instrumentation Reference Book we have attempted to maintain the one-volume scheme with which we began, while expanding the work to match the current view of the automation practitioner in the process industries.

In the process industries, practitioners are now required to have knowledge and skills far outside the “instrumentation and control” area. Typically, automation practitioners have been required to be familiar with enterprise organization and integration, so that the instruments and control systems under their purview can easily transfer and receive needed information and instructions from anywhere throughout the extended enterprise. They have needed substantially more experience in programming and use of computers and, since the first edition of this work was published, an entirely new subdiscipline of automation has been created: industrial networking. In fact, the very name of the profession has changed. In 2008, the venerable Instrument Society of America changed its official name to the International Society of Automation in recognition of this fact.

The authors and the editor hope that this volume and the guidance it provides will be of benefit to all practitioners of automation in the process industries. The editor wishes to thank Elsevier; his understanding and long-suffering publisher, Matthew Hart; and the authors who contributed to this volume.

—W. H. Boyes, ISA Fellow
Editor
Preface to the Third Edition

This edition is not completely new. The second edition built on the first, and so does this edition. This work has been almost entirely one of “internationalizing” a work mainly written for the United Kingdom. New matter has been added, especially in the areas of analyzers, level and flowmeters, and fieldbus.

References to standards are various, and British Standards are often referenced. International standards are in flux, and most standards bodies are striving to have equivalent standards throughout the world. The reader is encouraged to refer to IEC, ANSI, or other standards when only a British Standard is shown. The ubiquity of the World
Wide Web has made it possible for any standard anywhere to be located and purchased or, in some cases, read online free, so it has not been necessary to cross-reference standards liberally in this work.

The editor wants to thank all the new contributors, attributed and not, for their advice, suggestions, and corrections. He fondly wishes that he has caught all the typographical errors, but knows that is unlikely. Last, the Editor wants to thank his several editors at Butterworth-Heinemann for their patience, as well as Michael Forster, the publisher.

—W. H. Boyes
Maple Valley, Washington
2002
Preface to the Second Edition

E. B. Jones’s writings on instrument technology go back at least to 1953. He was something of a pioneer in producing high-level material that could guide those studying his subjects. He had both practical experience of his subject and had taught it at college, and this enabled him to lay down a foundation that could be built on for more than 40 years. I must express my thanks that the first edition of the Instrumentation Reference Book, which E. B. Jones’s work was molded into, has sold well from 1988 to 1994.

This book has been accepted as one of the Butterworth-Heinemann series of reference books—a goodly number of volumes covering much of technology. Such books need updating to keep abreast of developments, and this first updating calls for celebration! There were several aspects that needed enlarging and several completely new chapters were needed. It might be remarked that a number of new books, relevant to the whole field of instrumentation, have appeared recently, and these have been added to the list. Does this signify a growing recognition of the place of instrumentation?

Many people should be thanked for their work that has brought together this new edition. Collaboration with the Institute of Measurement and Control has been established, and this means that the book is now produced under their sponsorship. Of course, those who have written, or revised what they had written before, deserve my gratitude for their response. I would also like to say thank you to the Butterworth-Heinemann staff for their cooperation.

—B. E. N.
Dorking
Preface to the First Edition

Instrumentation is not a clearly defined subject, having what might be called a “fuzzy frontier” with many other subjects. Look for books about it, and in most libraries you are liable to find them widely separated along the shelves, classified under several different headings. Instrumentation is barely recognized as a science or technology in its own right. That raises some difficulties for writers in the field and indeed for would-be readers. We hope that what we are offering here will prove to have helped with clarification.

A reference book should of course be there for people to refer to for the information they need. The spectrum is wide: students, instrument engineers, instrument users, and potential users who just want to explore possibilities. And the information needed in real life is a mixture of technical and commercial matters. So while the major part of the Instrumentation Reference Book is a technical introduction to many facets of the subject, there is also a commercial part where manufacturers and so on are listed. Instrumentation is evolving, perhaps even faster than most technologies,
emphasizing the importance of relevant research; we have tried to recognize that by facilitating contact with universities and other places spearheading development. One need for information is to ascertain where more information can be gained. We have catered for this with references at the ends of chapters to more specialized books.

Many agents have come together to produce the Instrumentation Reference Book, and thanks are due to all of them: those who have written, those who have drawn, and those who have painstakingly checked facts. I should especially thank Caroline Mallinder and Elizabeth Alderton, who produced order out of chaos in the compilation of long lists of names and addresses. Thanks should also go elsewhere in the Butterworth hierarchy for the original germ of the idea that this could be a good addition to their family of reference books. In a familiar tradition, I thank my wife for her tolerance and patience about time-consuming activities such as telephoning, typing, and traveling, or at the least for limiting her natural intolerance and impatience of my excessive indulgence in them!

—B. E. N.
Dorking
Contributors
C. S. Bahra, BSc, MSc, CEng, MIMechE, was formerly Development Manager at Transducer Systems Ltd.
J. Barron, BA, MA (Cantab), is a Lecturer at the University of Cambridge.
G. Fowles was formerly a Senior Development Engineer with the Severn-Trent Water Authority after some time as a Section Leader in the Instrumentation Group of the Water Research Centre.
Jonas Berge, Senior Engineer, Emerson Process Management.
Martin Berutti, Director of Marketing, Mynah Technologies Inc., Chesterfield, MO, is an expert on medium and high resolution process simulation systems. Walt Boyes, Principal, Spitzer and Boyes LLC, Aurora, Ill., is an ISA Fellow and Editor in Chief of Control magazine and www.controlglobal.com and is a recognized industry analyst and consultant. He has over 30 years experience in sales, marketing, technical support, new product development, and management in the instrumentation industries. G. Burns, BSc, PhD, AMIEE, Glasgow College of Technology.
Charlie Gifford, 21st Century Manufacturing Technologies, Hailey, Id., is a leading expert on Manufacturing Operations Management and the chief editor of Hitchhiking through Manufacturing, ISA Press, 2008. J. G. Giles, TEng, has been with Ludlam Sysco Ltd. for a number of years. Sir Claud Hagart-Alexander, Bt, BA, MInstMC, DL, formerly worked in instrumentation with ICI Ltd. He was then a director of Instrumentation Systems Ltd. He is now retired. D. R. Heath, BSc, PhD, is with Rank Xerox Ltd.
J. C. Cluley, MSc, CEng, MIEE, FBCS, was formerly a Senior Lecturer in the Department of Electronic and Electrical Engineering, University of Birmingham.
E. H. Higham, MA, CEng, FIEE, MIMechE, MInstMC, is a Senior Research Fellow in the School of Engineering at the University of Sussex, after a long career with Foxboro Great Britain Ltd.
R. Cumming, BSc, FIQA, Scottish School of Nondestructive Testing.
W. M. Jones, BSc, DPhil, FInstP, is a Reader in the Physics Department at the University College of Wales.
W. G. Cummings, BSc, CChem, FRSC, MInstE, MinstMC, former Head of the Analytical Chemistry Section at Central Electricity Research Laboratories.
David Kaufman, Director of New Business Development, Honeywell Process Solutions, Phoenix, Az., is an officer of the Wireless Compliance Institute, a member of the leadership of ISA100, the industrial wireless standard, and an expert on industrial wireless networking.
A. Danielsson, CEng, FIMechE, FInstMC, is with Wimpey Engineering Ltd. He was a member of the BS working party developing the Code of Practice for Instrumentation in Process Control Systems: Installation-Design. C. I. Daykin, MA, is Director of Research and Development at Automatic Systems Laboratories Ltd. Dr. Stanley Dolin, Scientist, Omega Engineering, Stamford, Conn., is an expert on the measurement of temperature. James R. Ford, PhD, Director of Strategic Initiatives, Maverick Technologies Inc., Columbia, Ill., is a professional engineer and an expert on Advanced Process Control techniques and practices. T. Fountain, BEng, AMIEE, is the Technical Manager for National Instruments UK Corp., where he has worked since 1989. Before that he was a design engineer for Control Universal, interfacing computers to real-world applications.
D. Aliaga Kelly, BSc, CPhys, MInstP, MAmPhys-Soc, MSRP, FSAS, is now retired after working for many years as Chief Physicist with Nuclear Enterprises Ltd. C. Kindell is with AMP of Great Britain Ltd. E. G. Kingham, CEng, FIEE, was formerly at the Central Electricity Research Laboratories. T. Kingham, is with AMP of Great Britain Ltd. J. Kuehn, FInst Accoust, is Managing Director of Bruel & Kjaer (UK) Ltd. C. K. Laird, BSc, PhD, CChem, MRSC, works in the Chemistry Branch at Central Electricity Research Laboratories.
F. F. Mazda, DFH, MPhil, CEng, MIEE, MBIM, is with Rank Xerox Ltd. W. McEwan, BSc, CEng, MIMechE, FweldInst, Director Scottish School of Non-destructive Testing. A. McNab, BSc, PhD, University of Strathclyde. D. B. Meadowcroft, BSc, PhD, CPhys, FInstP, FICorrST works in the Chemistry Branch at Central Electricity Research Laboratories. B. T. Meggitt, BSc, MSc, PhD, is Development Manager of LM Technology Ltd. and Visiting Professor in the Department of Electronic and Electrical Engineering, City University, London. Alan Montgomery, Sales Manager, Lumberg Canada Ltd., is a long-time sales and marketing expert in the instrumentation field, and is an expert on modern industrial connectors. William L. Mostia, Principal, WLM Engineering, Kemah, Tex. Mr. Mostia is an independent consulting engineer and an expert on pneumatic instrumentation, among other specialties. G. Muir, BSc, MSc, MIM, MInstNDT, MWeldInst, CEng, FIQA, Scottish School of Non-destructive Testing. B. E. Noltingk, BSc, PhD, CEng, FIEE, FInstP, is now a Consultant after some time as Head of the Instrumentation Section at the Central Electricity Research Laboratories. Eoin O’Riain, Publisher, Readout Magazine. D. J. Pacey, BSc, FInst P, was, until recently, a Senior Lecturer in the Physics Department at Brunel University. Dr. Jerry Paros, President, Paroscientific Corp., Redmond, Wash., is founder of Paroscientific, a leading-edge pressure sensor manufacturer, and one of the leading experts on pressure measurement. J. Riley is with AMP of Great Britain Ltd. M. L. Sanderson, BSc, PhD, is Director of the Centre for Fluid Instrumentation at Cranfield Institute of Technology. M. G. Say, MSc, PhD, CEng, ACGI, DIC, FIEE, FRSE, is Professor Emeritus of Electrical Engineering at HeriotWatt University. R. Service, MSc, FInstNDT, MWeldInst, MIM, MICP, CEng, FIQA, Scottish School of Non-destructive Testing.
A. C. Smith, BSc, CChem, FRSC, MInstP, former Head of the Analytical Chemistry Section at Central Electricity Research Laboratories. W. L. Snowsill, BSc, was formerly a Research Officer in the Control and Instrumentation Branch of the Central Electricity Research Laboratories. K. R. Sturley, BSc, PhD, FIEE, FIEEE, is a Telecommunications Consultant. P. H. Sydenham, ME, PhD, FInstMC, FIIC, AMIAust, is Head of and Professor at the School of Electronic Engineering in the South Australian Institute of Technology. A. W. S. Tarrant, BSc, PhD, CPhys, FInstP, FCIBSE, is Director of the Engineering Optics Research Group at the University of Surrey. M. Tooley, BA, is Dean of the Technology Department at Brooklands College and the author of numerous electronics and computing books. K. Torrance, BSc, PhD, is in the Materials Branch at Central Electricity Research Laboratories. L. C. Towle, BSc, CEng, MIMechE, MIEE, MInstMC, is a Director of the MTL Instruments Group Ltd. L. W. Turner, CEng, FIEE, FRTS, is a Consultant Engineer. Ian Verhappen, ICE-Pros Ltd., Edmonton, AB Canada, is the former Chair of the Fieldbus Foundation User Group, and an industrial networking consultant. Verhappen is an ISA Fellow, and is an expert on all manner of process analyzers. K. Walters, MSc, PhD, is a Professor in the Department of Mathematics at the University College of Wales. Joseph Weiss, principal, Applied Control Solutions LLC, is an industry expert on control systems and electronic security of control systems, with more than 35 years of experience in the energy industry. He is a member of the Standards and Practices Board of ISA, the International Society of Automation. T. Williams, BSc, CEng, MIEE, formerly with Rosemount, is a consultant in electromagnetic compatibility design and training with Elmac Services, Chichester. Shari L. S. Worthington, President, Telesian Technology Inc., is an expert on electronic enablement of manufacturing and marketing in the high technology industries.
Introduction
1. Techniques and applications

We can look at instrumentation work in two ways: by techniques or by applications. When we consider instrumentation by technique, we survey one scientific field, such as radioactivity or ultrasonics, and look at all the ways in which it can be used to make useful measurements. When we study instrumentation by application, we cover the various techniques to measure a particular quantity. Under flowmetering, for instance, we look at many methods, including tracers, ultrasonics, and pressure measurement. This book is mainly applications oriented, but in a few cases, notably pneumatics and the employment of nuclear technology, the technique has been the primary unifying theme.
2. Accuracy

The most important question in instrumentation is the accuracy with which a measurement is made. It is such a universal issue that we will talk about it now as well as in the individual chapters to follow. Instrument engineers should be skeptical of accuracy claims, and they should hesitate to accept their own reasoning about the systems they have assembled. They should demand evidence—and preferably proof. Above all, they should be clear in their own minds about the level of accuracy needed to perform a job. Too much accuracy will unnecessarily increase costs; too little may cause performance errors that make the project unworkable.

Accuracy is important but complex. We must first distinguish between systematic and random errors in an instrument. Systematic error is the error inherent in the operation of the instrument, and calibrating can eliminate it. We discuss calibration in several later chapters. Calibration is the comparison of the reading of the instrument in question to a known standard and the maintenance of the evidentiary chain from that standard. We call this traceability.

The phrase random errors implies the action of probability. Some variations in readings, though clearly observed, are difficult to explain, but most random errors can be treated statistically without knowing their cause. In most cases it is assumed that the probability of error is such that errors in individual measurements have a normal distribution about the mean, which is zero if there is no systematic error. This implies that we should quote errors as a range within which the true value lies with a stated probability. The
probability grows steadily as the quoted range is made wider. When we consider a measurement chain with several links, the two approaches give increasingly different figures. For if we think of possibilities/impossibilities, we must allow that the errors in each link can be extreme and in the same direction, calling for a simple addition when calculating the possible total error. On the other hand, this is improbable, so the “chain error” that corresponds to a given probability, ec, is appreciably smaller. In fact, statistically,

ec = √(e1² + e2² + …)

where e1, e2, and so on are the errors in the different links, each corresponding to the same probability as ec.
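As a minimal numeric sketch of the root-sum-square rule above (the three link errors are invented for illustration, not taken from the text), compare simple addition with the statistical combination:

    import math

    def chain_error(*errors):
        # Root-sum-square combination of independent link errors,
        # each quoted at the same probability level.
        return math.sqrt(sum(e * e for e in errors))

    # A hypothetical chain of three links with errors of 0.5%, 0.3%, and 0.2%:
    links = (0.5, 0.3, 0.2)
    print(f"worst-case sum: {sum(links):.2f}%")            # 1.00%, all errors extreme and aligned
    print(f"chain error:    {chain_error(*links):.2f}%")   # 0.62%, the probable combination

The chain error is appreciably smaller than the worst case, as noted above, because it is improbable that every link errs to its extreme in the same direction.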
We can think of influence quantities as the causes of random errors. Most devices that measure a physical quantity are influenced by other quantities. Even in the simple case of a tape measure, the tape itself is influenced by temperature. Thus, a tape measure will give a false reading unless the influence is allowed for. Instruments should be as insensitive as possible to influence quantities, and users should be aware of them. The effects of these influence quantities can often be reduced by calibrating under conditions as close as possible to the live measurement application.

Influence quantities can often be quite complicated. It might not only be the temperature that can affect the instrument, but the change in temperature. Even the rate of change of the temperature can be the critical component of this influence quantity. To make it even more complex, we must also consider the differential between the temperatures of the various instruments that make up the system.

One particular factor that could be thought of as an influence quantity is the direction in which the quantity to be measured is changing. Many instruments give slightly different readings according to whether, as it changes, the particular value of interest is approached from above or below. This phenomenon is called hysteresis.

If we assume that the instrument output is exactly proportional to a quantity, and we find discrepancies, this is called nonlinearity error. Nonlinearity error is the maximum departure of the true input/output curve from the idealized straight line approximating it. It may be noted that this does not cover changes in incremental gain, the term used for the local slope of the input/output curve. Special cases of the accuracy of conversion from digital to analog signals, and vice versa, are discussed in Sections 29.3.1 and 29.4.5 of Part 4.
in Sections 29.3.1 and 29.4.5 of Part 4. Calibration at sufficient intermediate points in the range of an instrument can cover systematic nonlinearity. Microprocessor-based instrumentation has reduced the problem of systematic nonlinearity to a simple issue: most modern instruments have the internal processing capability to do at least a multipoint breakpoint linearization (sketched at the end of this section), and many can even host and process complex linearization equations of third order or higher.

Special terms used in the preceding discussion are defined in BS 5233, in several ANSI standards, and in the ISA Dictionary of Instrumentation, along with numerous others.

The general approach to errors that we have outlined follows a statistical approach to a static situation. Communications theory emphasizes working frequencies and the time available, and this approach to error is gaining importance in instrumentation technology as instruments become more intelligent. Sensors connected to digital electronics have little or no error from electronic noise, but the most accurate results can still be expected from longer measurement times.

Instrument engineers must be very wary of measuring the wrong thing! Even a highly accurate measurement of the wrong quantity may cause serious process upsets. Significantly for instruments used for control, Heisenberg's principle applies on the macro level as well as on the subatomic: the operation of measurement can often disturb the quantity measured. This can happen in most fields. A flowmeter can obstruct flow and reduce the velocity to be measured, an overlarge temperature sensor can cool the material studied, and a low-impedance voltmeter can reduce the potential it is monitoring. Part of the instrument engineer's task is to foresee and avoid errors resulting from the effect the instrument has on the system it is being used to study.
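The chain-error formula above is easy to check numerically. The following sketch (plain Python, with illustrative link errors that are not drawn from any particular instrument) compares the worst-case sum of three link errors with their statistical root-sum-square combination:

    import math

    # Hypothetical errors for three links of a measurement chain,
    # each quoted at the same probability, in percent of reading.
    link_errors = [0.5, 0.3, 0.2]

    worst_case = sum(link_errors)  # all extremes aligned
    chain_error = math.sqrt(sum(e ** 2 for e in link_errors))  # RSS combination

    print(f"worst case:  {worst_case:.3f} %")   # 1.000 %
    print(f"chain error: {chain_error:.3f} %")  # 0.616 %

As the section argues, the probable chain error (about 0.62 percent here) is appreciably smaller than the worst-case figure of 1.0 percent.

Similarly, the multipoint breakpoint linearization mentioned above amounts to piecewise-linear interpolation between calibration points. A minimal sketch, assuming a made-up calibration table for a fictitious sensor:

    def linearize(raw, table):
        # Piecewise-linear interpolation through (raw, true) breakpoints.
        table = sorted(table)
        if raw <= table[0][0]:
            return table[0][1]
        for (x0, y0), (x1, y1) in zip(table, table[1:]):
            if raw <= x1:
                return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
        return table[-1][1]  # clamp above the last breakpoint

    # Hypothetical breakpoints: (raw counts, engineering units).
    cal = [(0, 0.0), (1000, 9.7), (2000, 20.4), (3000, 30.1), (4000, 40.0)]
    print(linearize(1500, cal))  # about 15.05, interpolated between breakpoints

A real instrument would store such a table in firmware, or use a higher-order polynomial instead; the principle is the same.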
3. Environment

Instrument engineers must select their devices based on the environment in which they will be installed. In plants there will be extremes of temperature, vibration, dust, chemicals,
and abuse. Instruments for use in plants are very different from those designed for laboratory use. Two kinds of ill effects arise from badly selected instruments: false readings from exceptional values of influence quantities, and the irreversible failure of the instrument itself. Sometimes manufacturers specify limits to working conditions; sometimes instrument engineers must make their own judgments. When working close to the limits of the working conditions of the equipment, a wise engineer derates the performance of the system or designs environmental mitigation.

Because instrumentation engineering is a practical discipline, a key feature of any system design must be the reliability of the equipment. Reliability is the likelihood of the instrument, or the system, continuing to work satisfactorily over long periods; the short sketch below illustrates the kind of arithmetic involved. We discuss reliability in depth in Part 4. It must always be taken into account in selecting instruments and designing systems for any application.
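As a simple illustration of that arithmetic (the figures are invented, and the exponential model is only one of several treated later), if failures are assumed to be random in time, the probability of surviving a mission time t is R(t) = exp(−t/MTBF):

    import math

    mtbf_hours = 100_000      # hypothetical mean time between failures
    mission_hours = 8_760     # one year of continuous service

    reliability = math.exp(-mission_hours / mtbf_hours)
    print(f"P(no failure in one year) = {reliability:.3f}")  # about 0.916

Under these assumptions, an instrument with a 100,000-hour MTBF has roughly a 92 percent chance of running a full year without failure, which is the sort of figure that drives decisions about redundancy and spares.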
4. Units

The introductory chapters of some books discuss at length which systems of units are used therein. Fortunately, the question is becoming obsolete: SI units are now adopted nearly everywhere, and certainly in this book. In the United States and a few other areas where other units still have some usage, we have listed the relationships for the benefit of those who are still more at home with the older expressions; a few typical conversions are sketched after this section.
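For example (the selection of factors below is ours, though the factors themselves are standard):

    PSI_TO_PASCAL = 6894.757   # 1 psi in pascals
    INCH_TO_METRE = 0.0254     # exact by definition

    def c_to_f(celsius):
        return celsius * 9 / 5 + 32

    print(f"100 psi  = {100 * PSI_TO_PASCAL / 1000:.1f} kPa")  # 689.5 kPa
    print(f"12 in    = {12 * INCH_TO_METRE:.4f} m")            # 0.3048 m
    print(f"100 degC = {c_to_f(100):.0f} degF")                # 212 degF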
References

British Standards Institution, Glossary of Terms Used in Metrology, BS 5233 (1975).
Dietrich, D. F., Uncertainty, Calibration and Probability: The Statistics of Scientific and Industrial Measurement, Adam Hilger, London (1973).
Instrumentation, Systems, and Automation Society (ISA), The ISA Comprehensive Dictionary of Measurement and Control, 3rd ed.; online edition, www.isa.org.
Topping, J., Errors of Observation and Their Treatment, Chapman and Hall, London (1972).
Part I
The Automation Knowledge Base
Chapter 1
The Automation Practicum W. Boyes
1.1 Introduction

In the years since this book was first published, there have been incredible changes in technology, in sociology, and, following from both, in the way we work. Who in the early 1970s would have imagined that automation professionals would be looking at the outputs of sensors on handheld devices the size of the "communicators" on the science fiction TV show Star Trek? Yet by late 2007, automation professionals could do just that (see Figure 1.1). There is now no way to be competitive in manufacturing, or even to do science or medicine, without sensors, instruments, transmitters, and automation. The broad practice of automation, which includes instrumentation, control, measurement, and the integration of plant-floor and manufacturing operations management data, has grown up entirely since the first edition of this book was published.

So, what exactly is automation, and why do we do it? According to the dictionary,1 automation has three definitions: 1: the technique of making an apparatus, a process, or a system operate automatically; 2: the state of being operated automatically; 3: automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor.

1. Merriam-Webster Online Dictionary, 2008.

How do we do this? We substitute sensors for the calibrated eyeball of human beings. We connect those sensors to input/output devices that are, in turn, connected to controllers. The controllers are programmed to make sense of the sensor readings and convert them into actions to be taken by final control elements. Those actions, in turn, are measured by the sensors, and the process repeats. (A minimal sketch of this sense-decide-actuate cycle appears at the end of this section.)

Although it is true that automation has replaced much human labor, it is not a replacement for human beings. Rather, the human ability to visualize, interpret, rationalize, and codify has been moved up the value chain from actually pushing buttons and pulling levers on the factory floor to designing and operating sensors, controllers, computers, and
final control elements that can do those things. Meanwhile, automation has become ubiquitous and essential. Yet as of this writing there is a serious, worldwide shortage of automation professionals who have the training, experience, and interest to work with sensors, instrumentation, field controllers, control systems, and manufacturing automation in general. There are a number of reasons for this shortage, including the fact that automation is often not recognized as a profession at all. Electrical engineers practice automation.
Figure 1.1 Courtesy of Transpara Corporation.
Some mechanical engineers do, too. Many nonengineers also practice automation, as technicians, and many people who are not engineers but have some other technical training have found careers in automation. Automation is really a multidisciplinary profession, pulling its knowledge base from many different disciplines, including mechanical engineering, electrical engineering, systems engineering, safety engineering, chemical engineering, and many more areas. What we hope to present in this book is a single-volume overview of the automation profession, specifically the manufacturing automation profession, and the tools and techniques of the automation professional.

It must be said that there is a closely allied discipline that is shared by both manufacturing and theater: the discipline of mechatronics. Gerhard Schweitzer, Emeritus Professor of Mechanics and Professor of Robotics at ETH Zurich, defines mechatronics this way:

Mechatronics is an interdisciplinary area of engineering that combines mechanical and electrical engineering and computer science. A typical mechatronic system picks up signals from the environment, processes them to generate output signals, transforming them for example into forces, motions and actions. It is the extension and the completion of mechanical systems with sensors and microcomputers which is the most important aspect. The fact that such a system picks up changes in its environment by sensors, and reacts to their signals using the appropriate information processing, makes it different from conventional machines. Examples of mechatronic systems are robots, digitally controlled combustion engines, machine tools with self-adaptive tools, contact-free magnetic bearings, automated guided vehicles, etc. Typical for such a product is the high amount of system knowledge and software that is necessary for its design. Furthermore, and this is most essential, software has become an integral part of the product itself, necessary for its function and operation. It is fully justified to say software has become an actual "machine element."2

2. www.mcgs.ch/mechatronics_definition.html.
This interdisciplinary area, which so obviously shares so many of the techniques and components of manufacturing automation, has also shared in the reluctance of many engineering schools to teach the subject as a separate discipline. Fewer than six institutions of higher learning in North America, for example, teach automation or mechatronics as separate disciplines. One university, the University of California at Santa Cruz, actually offers a graduate degree in mechatronics—from the Theater Arts Department. In this text we also try to provide insight into ways to enter into a career in manufacturing automation other than “falling into it,” as so many practitioners have.
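To make the sense-decide-actuate cycle described in Section 1.1 concrete, here is a deliberately minimal sketch in Python; the process model, setpoint, and gain are invented for illustration, and real controllers are the subject of later chapters:

    # A toy closed loop: a sensor reads a tank level, a proportional
    # controller computes a correction, and a valve (the final control
    # element) acts on the process. The loop then repeats.
    setpoint = 50.0   # desired level, arbitrary units
    level = 20.0      # initial process value
    gain = 0.4        # proportional gain, chosen here for stability

    for step in range(10):
        measurement = level               # the sensor
        error = setpoint - measurement    # the controller's input
        correction = gain * error         # controller output to the valve
        level += correction               # the process responds
        print(f"step {step}: level = {level:.2f}")

Each pass through the loop mirrors the cycle in the text: measure, decide, act, and measure again. After ten passes the level has nearly settled at the setpoint.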
1.2 Job descriptions

Because industrial automation, instrumentation, and controls are truly multidisciplinary, there are many potential job
descriptions that place the job holder squarely in the role of automation professional. There are electricians, electrical engineers, chemical engineers, biochemical engineers, control system technicians, maintenance technicians, operators, reliability engineers, asset management engineers, biologists, chemists, statisticians, and manufacturing, industrial, civil, and mechanical engineers who have become involved in automation and consider themselves automation professionals. System engineers, system analysts, system integrators: all are automation professionals working in industrial automation.
1.3 Careers and career paths

A common thread that runs through surveys of how practitioners entered the automation profession is that they were doing something else, got tapped to do an automation project, found they were good at it, and fell into doing more automation projects. Very few schools offer programs in automation; some technical schools and trade schools do, but few universities.

Many automation professionals enter nonengineering-level automation careers via the military. Training in electronics, maintenance, building automation, and automation in most of the Western militaries is excellent and can easily transfer to a career in industrial automation. For example, the building automation controls on a large military base are very similar to those found in the office and laboratory space of an industrial plant. Reactor control technicians from the nuclear navies of the world are already experienced process control technicians, and their skills transfer to industrial automation in the process environment.

Engineering professionals usually also enter the automation profession by studying something else. Many schools, such as Visvesvaraya Technological University in India, offer courses in industrial automation (usually focusing on robotics and mechatronics) as part of another degree course. Southern Alberta Institute of Technology (SAIT) in Canada, the University of Greenwich in the United Kingdom, and several others offer degrees and advanced degrees in control or automation. Mostly, control is covered in electrical engineering and chemical engineering curricula, if it is covered at all.

The International Society of Automation (ISA, formerly the Instrumentation, Systems, and Automation Society and before that the Instrument Society of America) has addressed the multidisciplinary nature of the automation profession by establishing two global certification programs. The Certified Control Systems Technician (CCST) program serves to benchmark skills in the process industries for technicians and operator-level personnel: "ISA's Certified Control Systems Technician Program (CCST) offers third-party recognition of technicians' knowledge and skills
in automation and control."3 The certification is divided into seven functional domains of expertise: calibration, loop checking, troubleshooting, startup, maintenance/repair, project organization, and administration. The CCST is achieving global recognition as an employment certification.

3. From www.isa.org.

Although ISA is the Accreditation Board for Engineering and Technology (ABET) curriculum designer for the Control System Engineering examination, ISA recognized in the early 2000s that the U.S. engineering licensure program was not global and did not easily transfer to the global engineering environment. Even in the United States, only 44 of 50 states offer licensing to control system engineers. ISA therefore set out to create a nonlicensure-based certification program for automation professionals. The Certified Automation Professional (CAP) program was designed to offer an accreditation in automation on a global basis: "ISA certification as a Certified Automation Professional (CAP) will provide an unbiased, third-party, objective assessment and confirmation of your skills as an automation professional. Automation professionals are responsible for the direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting."4

4. From www.isa.org.

The following guidelines list the experience and expertise required to achieve CAP certification. They are offered here by permission from ISA as a guide to the knowledge required to become an automation professional.
1.3.1 ISA Certified Automation Professional (CAP) Classification System5

5. ISA Certified Automation Professional (CAP) Classification System, from www.isa.org/~/CAPClassificationSystemWEB.pdf.

Domain I: Feasibility Study. Identify, scope, and justify the automation project.

Task 1. Define the preliminary scope through currently established work practices in order to meet the business need.

Knowledge of: 1. Established work practices 2. Basic process and/or equipment 3. Project management methodology 4. Automation opportunity identification techniques (e.g., dynamic performance measures) 5. Control and information technologies (MES) and equipment

Skill in: 1. Automating process and/or equipment 2. Developing value analyses
Task 2. Determine the degree of automation required through cost/benefit analysis in order to meet the business need.

Knowledge of: 1. Various degrees of automation 2. Various cost/benefit tools 3. Control and information technologies (MES) and equipment 4. Information technology and equipment

Skill in: 1. Analyzing cost versus benefit (e.g., life cycle analysis) 2. Choosing the degree of automation 3. Estimating the cost of control equipment and software

Task 3. Develop a preliminary automation strategy that matches the degree of automation required by considering an array of options and selecting the most reasonable option in order to prepare feasibility estimates.

Knowledge of: 1. Control strategies 2. Principles of measurement 3. Electrical components 4. Control components 5. Various degrees of automation

Skill in: 1. Evaluating different control strategies 2. Selecting appropriate measurements 3. Selecting appropriate components 4. Articulating concepts

Task 4. Conduct technical studies for the preliminary automation strategy by gathering data and conducting an appropriate analysis relative to requirements in order to define development needs and risks.

Knowledge of: 1. Process control theories 2. Machine control theories and mechatronics 3. Risk assessment techniques

Skill in: 1. Conducting technical studies 2. Conducting risk analyses 3. Defining primary control strategies

Task 5. Perform a justification analysis by generating a feasibility cost estimate and using an accepted financial model to determine project viability.

Knowledge of: 1. Financial models (e.g., ROI, NPV) 2. Business drivers 3. Costs of control equipment 4. Estimating techniques

Skill in: 1. Estimating the cost of the system 2. Running the financial model 3. Evaluating the results of the financial analysis for the automation portion of the project

Task 6. Create a conceptual summary document by reporting preliminary decisions and assumptions in order to facilitate "go/no go" decision making.

Knowledge of: 1. Conceptual summary outlines

Skill in: 1. Writing in a technical and effective manner 2. Compiling and summarizing information efficiently 3. Presenting information

Domain II: Definition.

Task 1. Determine operational strategies through discussion with key stakeholders and using appropriate documentation in order to create and communicate design requirements.

Knowledge of: 1. Interviewing techniques 2. Different operating strategies 3. Team leadership and alignment

Skill in: 1. Leading an individual or group discussion 2. Communicating effectively 3. Writing in a technical and effective manner 4. Building consensus 5. Interpreting the data from interviews

Task 2. Analyze alternative technical solutions by conducting detailed studies in order to define the final automation strategy.

Knowledge of: 1. Automation techniques 2. Control theories 3. Modeling and simulation techniques 4. Basic control elements (e.g., sensors, instruments, actuators, control systems, drive systems, HMI, batch control, machine control) 5. Marketplace products available 6. Process and/or equipment operations

Skill in: 1. Applying and evaluating automation solutions 2. Making intelligent decisions 3. Using the different modeling tools 4. Determining when modeling is needed

Task 3. Establish detailed requirements and data including network architecture, communication concepts, safety concepts, standards, vendor preferences, instrument and equipment data sheets, reporting and information needs, and security architecture through established practices in order to form the basis of the design.

Knowledge of: 1. Network architecture 2. Communication protocols, including field level 3. Safety concepts 4. Industry standards and codes 5. Security requirements 6. Safety standards (e.g., ISA, ANSI, NFPA) 7. Control systems security practices

Skill in: 1. Conducting safety analyses 2. Determining which data is important to capture 3. Selecting applicable standards and codes 4. Identifying new guidelines that need to be developed 5. Defining information needed for reports 6. Completing instrument and equipment data sheets

Task 4. Generate a project cost estimate by gathering cost information in order to determine continued project viability.

Knowledge of: 1. Control system costs 2. Estimating techniques 3. Available templates and tools

Skill in: 1. Creating cost estimates 2. Evaluating project viability

Task 5. Summarize project requirements by creating a basis-of-design document and a user-requirements document in order to launch the design phase.

Knowledge of: 1. Basis-of-design outlines 2. User-requirements document outlines

Skill in: 1. Writing in a technical and effective manner 2. Compiling and summarizing information 3. Making effective presentations

Domain III: System Design.

Task 1. Perform safety and/or hazard analyses, security analyses, and regulatory compliance assessments by identifying key issues and risks in order to comply with applicable standards, policies, and regulations.

Knowledge of: 1. Applicable standards (e.g., ISA S84, IEC 61508, 21 CFR Part 11, NFPA)
2. Environmental standards (EPA) 3. Electrical, electrical equipment, enclosure, and electrical classification standards (e.g., UL/FM, NEC, NEMA)

Skill in: 1. Participating in a Hazard and Operability Review 2. Analyzing safety integrity levels 3. Analyzing hazards 4. Assessing security requirements or relevant security issues 5. Applying regulations to design

Task 2. Establish standards, templates, and guidelines as applied to the automation system using the information gathered in the definition stage and considering human-factor effects in order to satisfy customer design criteria and preferences.

Knowledge of: 1. Process Industry Practices (PIP) (Construction Industry Institute) 2. IEC 61131 programming languages 3. Customer standards 4. Vendor standards 5. Template development methodology 6. Field devices 7. Control valves 8. Electrical standards (NEC) 9. Instrument selection and sizing tools 10. ISA standards (e.g., S88)

Skill in: 1. Developing programming standards 2. Selecting and sizing instrument equipment 3. Designing low-voltage electrical systems 4. Preparing drawings using AutoCAD software

Task 3. Create detailed equipment specifications and instrument data sheets based on vendor selection criteria, characteristics and conditions of the physical environment, regulations, and performance requirements in order to purchase equipment and support system design and development.
Knowledge of: 1. Field devices 2. Control valves 3. Electrical standards (NEC) 4. Instrument selection and sizing tools 5. Vendors' offerings 6. Motor and drive selection sizing tools

Skill in: 1. Selecting and sizing motors and drives 2. Selecting and sizing instrument equipment 3. Designing low-voltage electrical systems 4. Selecting and sizing computers 5. Selecting and sizing control equipment 6. Evaluating vendor alternatives 7. Selecting or sizing of input/output signal devices and/or conditioners

Task 4. Define the data structure layout and data flow model considering the volume and type of data involved in order to provide specifications for hardware selection and software development.

Knowledge of: 1. Data requirements of system to be automated 2. Data structures of control systems 3. Data flow of control systems 4. Productivity tools and software (e.g., InTools, AutoCAD) 5. Entity relationship diagrams

Skill in: 1. Modeling data 2. Tuning and normalizing databases

Task 5. Select the physical communication media, network architecture, and protocols based on data requirements in order to complete system design and support system development.

Knowledge of: 1. Vendor protocols 2. Ethernet and other open networks (e.g., DeviceNet) 3. Physical requirements for networks/media 4. Physical topology rules/limitations 5. Network design 6. Security requirements 7. Backup practices 8. Grounding and bonding practices

Skill in: 1. Designing networks based on chosen protocols

Task 6. Develop a functional description of the automation solution (e.g., control scheme, alarms, HMI, reports) using rules established in the definition stage in order to guide development and programming.

Knowledge of: 1. Control theory 2. Visualization, alarming, database/reporting techniques 3. Documentation standards 4. Vendors' capabilities for their hardware and software products 5. General control strategies used within the industry 6. Process/equipment to be automated 7. Operating philosophy

Skill in: 1. Writing functional descriptions 2. Interpreting design specifications and user requirements 3. Communicating the functional description to stakeholders
Task 7. Design the test plan using chosen methodologies in order to execute appropriate testing relative to functional requirements.

Knowledge of: 1. Relevant test standards 2. Simulation tools 3. Process Industry Practices (PIP) (Construction Industry Institute) 4. General software testing procedures 5. Functional description of the system/equipment to be automated

Skill in: 1. Writing test plans 2. Developing tests that validate that the system works as specified

Task 8. Perform the detailed design for the project by converting the engineering and system design into purchase requisitions, drawings, panel designs, and installation details consistent with the specification and functional descriptions in order to provide detailed information for development and deployment.

Knowledge of: 1. Field devices, control devices, visualization devices, computers, and networks 2. Installation standards and recommended practices 3. Electrical and wiring practices 4. Specific customer preferences 5. Functional requirements of the system/equipment to be automated 6. Applicable construction codes 7. Documentation standards

Skill in: 1. Performing detailed design work 2. Documenting the design

Task 9. Prepare comprehensive construction work packages by organizing the detailed design information and documents in order to release the project for construction.

Knowledge of: 1. Applicable construction practices 2. Documentation standards

Skill in: 1. Assembling construction work packages

Domain IV: Development. Software development and coding.

Task 1. Develop the Human Machine Interface (HMI) in accordance with the design documents in order to meet the functional requirements.

Knowledge of: 1. Specific HMI software products 2. Tag definition schemes 3. Programming structure techniques 4. Network communications 5. Alarming schemes 6. Report configurations 7. Presentation techniques 8. Database fundamentals 9. Computer operating systems 10. Human factors 11. HMI supplier options

Skill in: 1. Presenting data in a logical and aesthetic fashion 2. Creating intuitive navigation menus 3. Implementing connections to remote devices 4. Documenting configuration and programming 5. Programming configurations

Task 2. Develop database and reporting functions in accordance with the design documents in order to meet the functional requirements.

Knowledge of: 1. Relational database theory 2. Specific database software products 3. Specific reporting products 4. Programming/scripting structure techniques 5. Network communications 6. Structured query language 7. Report configurations 8. Entity diagram techniques 9. Computer operating systems 10. Data mapping

Skill in: 1. Presenting data in a logical and aesthetic fashion 2. Administrating databases 3. Implementing connections to remote applications 4. Writing queries 5. Creating reports and formatting/printing specifications for report output 6. Documenting database configuration 7. Designing databases 8. Interpreting functional description

Task 3. Develop control configuration or programming in accordance with the design documents in order to meet the functional requirements.

Knowledge of: 1. Specific control software products 2. Tag definition schemes
3. Programming structure techniques 4. Network communications 5. Alarming schemes 6. I/O structure 7. Memory addressing schemes 8. Hardware configuration 9. Computer operating systems 10. Processor capabilities 11. Standard nomenclature (e.g., ISA) 12. Process/equipment to be automated

Skill in: 1. Interpreting functional description 2. Interpreting control strategies and logic drawings 3. Programming and/or configuration capabilities 4. Implementing connections to remote devices 5. Documenting configuration and programs 6. Interpreting P&IDs 7. Interfacing systems

Task 4. Implement data transfer methodology that maximizes throughput and ensures data integrity using communication protocols and specifications in order to assure efficiency and reliability.

Knowledge of: 1. Specific networking software products (e.g., I/O servers) 2. Network topology 3. Network protocols 4. Physical media specifications (e.g., copper, fiber, RF, IR) 5. Computer operating systems 6. Interfacing and gateways 7. Data mapping

Skill in: 1. Analyzing throughput 2. Ensuring data integrity 3. Troubleshooting 4. Documenting configuration 5. Configuring network products 6. Interfacing systems 7. Manipulating data

Task 5. Implement security methodology in accordance with stakeholder requirements in order to mitigate loss and risk.

Knowledge of: 1. Basic system/network security techniques 2. Customer security procedures 3. Control user-level access privileges 4. Regulatory expectations (e.g., 21 CFR Part 11) 5. Industry standards (e.g., ISA)

Skill in: 1. Documenting security configuration 2. Configuring/programming of security system 3. Implementing security features

Task 6. Review configuration and programming using defined practices in order to establish compliance with functional requirements.

Knowledge of: 1. Specific control software products 2. Specific HMI software products 3. Specific database software products 4. Specific reporting products 5. Programming structure techniques 6. Network communication 7. Alarming schemes 8. I/O structure 9. Memory addressing schemes 10. Hardware configurations 11. Computer operating systems 12. Defined practices 13. Functional requirements of system/equipment to be automated

Skill in: 1. Programming and/or configuration capabilities 2. Documenting configuration and programs 3. Reviewing programming/configuration for compliance with design requirements
Task 7. Test the automation system using the test plan in order to determine compliance with functional requirements.

Knowledge of: 1. Testing techniques 2. Specific control software products 3. Specific HMI software products 4. Specific database software products 5. Specific reporting products 6. Network communications 7. Alarming schemes 8. I/O structure 9. Memory addressing schemes 10. Hardware configurations 11. Computer operating systems 12. Functional requirements of system/equipment to be automated

Skill in: 1. Writing test plans 2. Executing test plans 3. Documenting test results 4. Programming and/or configuration capabilities 5. Implementing connections to remote devices 6. Interpreting functional requirements of system/equipment to be automated 7. Interpreting P&IDs

Task 8. Assemble all required documentation and user manuals created during the development process in order to transfer essential knowledge to customers and end users.

Knowledge of: 1. General understanding of automation systems 2. Computer operating systems 3. Documentation practices 4. Operations procedures 5. Functional requirements of system/equipment to be automated

Skill in: 1. Documenting technical information for a nontechnical audience 2. Using documentation tools 3. Organizing material for readability

Domain V: Deployment.

Task 1. Perform receipt verification of all field devices by comparing vendor records against design specifications in order to ensure that devices are as specified.

Knowledge of: 1. Field devices (e.g., transmitters, final control valves, controllers, variable speed drives, servo motors) 2. Design specifications

Skill in: 1. Interpreting specifications and vendor documents 2. Resolving differences

Task 2. Perform physical inspection of installed equipment against construction drawings in order to ensure installation in accordance with design drawings and specifications.

Knowledge of: 1. Construction documentation 2. Installation practices (e.g., field devices, computer hardware, cabling) 3. Applicable codes and regulations

Skill in: 1. Interpreting construction drawings 2. Comparing physical implementation to drawings 3. Interpreting codes and regulations (e.g., NEC, building codes, OSHA) 4. Interpreting installation guidelines
Task 3. Install configuration and programs by loading them into the target devices in order to prepare for testing.

Knowledge of: 1. Control system (e.g., PLC, DCS, PC) 2. System administration

Skill in: 1. Installing software 2. Verifying software installation 3. Versioning techniques and revision control 4. Troubleshooting (i.e., resolving issues and retesting)

Task 4. Solve unforeseen problems identified during installation using troubleshooting skills in order to correct deficiencies.

Knowledge of: 1. Troubleshooting techniques 2. Problem-solving strategies 3. Critical thinking 4. Processes, equipment, configurations, and programming 5. Debugging techniques

Skill in: 1. Solving problems 2. Determining root causes 3. Ferreting out information 4. Communicating with facility personnel 5. Implementing problem solutions 6. Documenting problems and solutions

Task 5. Test configuration and programming in accordance with the design documents by executing the test plan in order to verify that the system operates as specified.

Knowledge of: 1. Programming and configuration 2. Test methodology (e.g., factory acceptance test, site acceptance test, unit-level testing, system-level testing) 3. Test plan for the system/equipment to be automated 4. System to be tested 5. Applicable regulatory requirements relative to testing

Skill in: 1. Executing test plans 2. Documenting test results 3. Troubleshooting (e.g., resolving issues and retesting) 4. Writing test plans

Task 6. Test communication systems and field devices in accordance with design specifications in order to ensure proper operation.

Knowledge of: 1. Test methodology
2. Communication networks and protocols 3. Field devices and their performance requirements 4. Regulatory requirements relative to testing

Skill in: 1. Verifying network integrity and data flow integrity 2. Conducting field device tests 3. Comparing test results to design specifications 4. Documenting test results 5. Troubleshooting (i.e., resolving issues and retesting) 6. Writing test plans

Task 7. Test all safety elements and systems by executing test plans in order to ensure that safety functions operate as designed.

Knowledge of: 1. Applicable safety standards 2. Safety system design 3. Safety elements 4. Test methodology 5. Facility safety procedures 6. Regulatory requirements relative to testing

Skill in: 1. Executing test plans 2. Documenting test results 3. Testing safety systems 4. Troubleshooting (i.e., resolving issues and retesting) 5. Writing test plans

Task 8. Test all security features by executing test plans in order to ensure that security functions operate as designed.

Knowledge of: 1. Applicable security standards 2. Security system design 3. Test methodology 4. Vulnerability assessments 5. Regulatory requirements relative to testing

Skill in: 1. Executing test plans 2. Documenting test results 3. Testing security features 4. Troubleshooting (i.e., resolving issues and retesting) 5. Writing test plans

Task 9. Provide initial training for facility personnel in system operation and maintenance through classroom and hands-on training in order to ensure proper use of the system.
Knowledge of: 1. Instructional techniques 2. Automation systems 3. Networking and data communications 4. Automation maintenance techniques 5. System/equipment to be automated 6. Operating and maintenance procedures

Skill in: 1. Communicating with trainees 2. Organizing instructional materials 3. Instructing

Task 10. Execute system-level tests in accordance with the test plan in order to ensure the entire system functions as designed.

Knowledge of: 1. Test methodology 2. Field devices 3. System/equipment to be automated 4. Networking and data communications 5. Safety systems 6. Security systems 7. Regulatory requirements relative to testing

Skill in: 1. Executing test plans 2. Documenting test results 3. Testing of entire systems 4. Communicating final results to facility personnel 5. Troubleshooting (i.e., resolving issues and retesting) 6. Writing test plans

Task 11. Troubleshoot problems identified during testing using a structured methodology in order to correct system deficiencies.

Knowledge of: 1. Troubleshooting techniques 2. Processes, equipment, configurations, and programming

Skill in: 1. Solving problems 2. Determining root causes 3. Communicating with facility personnel 4. Implementing problem solutions 5. Documenting test results

Task 12. Make necessary adjustments using applicable tools and techniques in order to demonstrate system performance and turn the automated system over to operations.

Knowledge of: 1. Loop tuning methods/control theory 2. Control system hardware 3. Computer system performance tuning 4. User requirements
5. System/equipment to be automated

Skill in: 1. Tuning control loops 2. Adjusting final control elements 3. Optimizing software performance 4. Communicating final system performance results

Domain VI: Operation and Maintenance. Long-term support of the system.

Task 1. Verify system performance and records periodically using established procedures in order to ensure compliance with standards, regulations, and best practices.

Knowledge of: 1. Applicable standards 2. Performance metrics and acceptable limits 3. Records and record locations 4. Established procedures and purposes of procedures

Skill in: 1. Communicating orally and in writing 2. Auditing the system/equipment 3. Analyzing data and drawing conclusions

Task 2. Provide technical support for facility personnel by applying system expertise in order to maximize system availability.

Knowledge of: 1. All system components 2. Processes and equipment 3. Automation system functionality 4. Other support resources 5. Control systems theories and applications 6. Analytical troubleshooting and root-cause analyses

Skill in: 1. Troubleshooting (i.e., resolving issues and retesting) 2. Investigating and listening 3. Programming and configuring automation system components

Task 3. Perform training needs analysis periodically for facility personnel using skill assessments in order to establish objectives for the training program.

Knowledge of: 1. Personnel training requirements 2. Automation system technology 3. Assessment frequency 4. Assessment methodologies

Skill in: 1. Interviewing 2. Assessing level of skills

Task 4. Provide training for facility personnel by addressing identified objectives in order to ensure the skill level of personnel is adequate for the technology and products used in the system.

Knowledge of: 1. Training resources 2. Subject matter and training objectives 3. Teaching methodology

Skill in: 1. Writing training objectives 2. Creating the training 3. Organizing training classes (e.g., securing demos, preparing materials, securing space) 4. Delivering training effectively 5. Answering questions effectively

Task 5. Monitor performance using software and hardware diagnostic tools in order to support early detection of potential problems.

Knowledge of: 1. Automation systems 2. Performance metrics 3. Software and hardware diagnostic tools 4. Potential problem indicators 5. Baseline/normal system performance 6. Acceptable performance limits

Skill in: 1. Using the software and hardware diagnostic tools 2. Analyzing data 3. Troubleshooting (i.e., resolving issues and retesting)

Task 6. Perform periodic inspections and tests in accordance with written standards and procedures in order to verify system or component performance against requirements.

Knowledge of: 1. Performance requirements 2. Inspection and test methodologies 3. Acceptable standards

Skill in: 1. Testing and inspecting 2. Analyzing test results 3. Communicating effectively with others in written or oral form

Task 7. Perform continuous improvement by working with facility personnel in order to increase capacity, reliability, and/or efficiency.

Knowledge of: 1. Performance metrics 2. Control theories
3. System/equipment operations 4. Business needs 5. Optimization tools and methods

Skill in: 1. Analyzing data 2. Programming and configuring 3. Communicating effectively with others 4. Implementing continuous improvement procedures

Task 8. Document lessons learned by reviewing the project with all stakeholders in order to improve future projects.

Knowledge of: 1. Project review methodology 2. Project history 3. Project methodology and work processes 4. Project metrics

Skill in: 1. Communicating effectively with others 2. Configuring and programming 3. Documenting lessons learned 4. Writing and summarizing

Task 9. Maintain licenses, updates, and service contracts for software and equipment by reviewing both internal and external options in order to meet expectations for capability and availability.

Knowledge of: 1. Installed base of system equipment and software 2. Support agreements 3. Internal and external support resources 4. Life-cycle state and support level (including vendor product plans and future changes)

Skill in: 1. Organizing and scheduling 2. Programming and configuring 3. Applying software updates (i.e., keys, patches)

Task 10. Determine the need for spare parts based on an assessment of installed base and probability of failure in order to maximize system availability and minimize cost.

Knowledge of: 1. Critical system components 2. Installed base of system equipment and software 3. Component availability 4. Reliability analysis 5. Sourcing of spare parts

Skill in: 1. Acquiring and organizing information 2. Analyzing data

Task 11. Provide a system management plan by performing preventive maintenance, implementing backups, and designing recovery plans in order to avoid and recover from system failures.

Knowledge of: 1. Automation systems 2. Acceptable system downtime 3. Preventive maintenance procedures 4. Backup practices (e.g., frequency, storage media, storage location)

Skill in: 1. Acquiring and organizing 2. Leading 3. Managing crises 4. Performing backups and restores 5. Using system tools

Task 12. Follow a process for authorization and implementation of changes in accordance with established standards or practices in order to safeguard system and documentation integrity.

Knowledge of: 1. Management of change procedures 2. Automation systems and documentation 3. Configuration management practices

Skill in: 1. Programming and configuring 2. Updating documentation

The Certified Automation Professional program offers certification training, reviews, and certification examinations regularly scheduled on a global basis. Interested readers should contact ISA for information.

1.4 Where automation fits in the extended enterprise

In the late 1980s, work began at Purdue University, under the direction of Dr. Theodore J. Williams, on modeling computer-integrated manufacturing enterprises.6 The resultant model describes five basic layers making up the enterprise (see Figure 1.2):

Level 0: The Process
Level 1: Direct Control
Level 2: Process Supervision
Level 3: Production Supervision
Level 4: Plant Management and Scheduling

6. This effort was documented in A Reference Model for Computer Integrated Manufacturing (CIM): A Description from the Viewpoint of Industrial Automation, Theodore J. Williams, Ph.D., editor, ISA, 1989.
Figure 1.2 The Purdue Model: Level 5: Enterprise; Level 4: Site Business Planning and Logistics; Level 3: Plantwide Operations and Control; Level 2: Area Operations; Level 1: Basic Control/Safety Critical; Level 0: The Process.
Level 0 consists of the plant infrastructure as it is used for manufacturing control. This level includes all the machinery, sensors, controls, valves, indicators, motors, drives, and so forth. Level 1 is the system that directly controls the manufacturing process, including input/output devices, plant networks, single-loop controllers, programmable controllers, and process automation systems. Level 2 is supervisory control: the realm of the Distributed Control System (DCS), with process visualization (the human/machine interface, or HMI) and advanced process control functions.

These three levels have traditionally been considered the realm of industrial automation. The two levels above them have often been considered differently, since they are often under the control of different departments in the enterprise. However, the two decades since the Purdue Research Foundation CIM Model was developed have brought significant changes, and the line between Level 2 and below and Level 3 and above is considerably blurrier than it was in 1989.

In 1989, basic control was performed by local control loops, with supervision from SCADA (Supervisory Control and Data Acquisition) systems or DCSes that were proprietary, used proprietary
networks and processors, and were designed as standalone systems and devices. In 2009, the standard for control systems is Microsoft Windows-based operating systems, running proprietary Windows-based software on commercial, off-the-shelf (COTS) computers, all connected via Ethernet networks to local controllers that are either proprietary or that run some version of Windows themselves. What this has made possible is increasingly integrated operations, from the plant floor to the enterprise resource planning (ERP) system, because the entire enterprise is typically using Ethernet networks and Windows-based software systems running on Windows operating systems on COTS computers. (A small sketch of how data might be tagged by Purdue level appears at the end of this section.)

To deal with this additional complexity, organizations like MESA International (formerly known as the Manufacturing Execution Systems Association) have created new models of the manufacturing enterprise (see Figure 1.3). These models are based on information flows and information use within the enterprise. The MESA model squeezes all industrial automation into something called Manufacturing/Production and adds two layers above it. Immediately above the Manufacturing/Production layer is Manufacturing/Production Operations, which includes parts of Level 2 and Level 3 of the Purdue Reference Model. Above that is the Business Operations layer. MESA postulates another layer, which its model shows off to one side and which modulates the activities of the other three: the Strategic Initiatives layer.

The MESA model shows "events" as the information transmitted unidirectionally upward from the Manufacturing/Production Operations layer. Bidirectional information flow is shown between the Business Operations layer and the Manufacturing/Production Operations layer. Unidirectional information flow is shown from the Strategic Initiatives layer to the rest of the model.

Since 1989, significant work has been done in this area by many organizations, including WBF (formerly the World Batch Forum); ISA, whose standards ISA88 and ISA95 are the basis for operating manufacturing languages; the Machinery Information Management Open Systems Alliance (MIMOSA); the Organization for Machine Automation and Control (OMAC); and others.
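To make the layered model concrete, here is a small, purely illustrative sketch (our own, not drawn from the Purdue or MESA documents) of the kind of bookkeeping an integration layer performs when it tags plant data with the level it originates from:

    from enum import IntEnum

    class PurdueLevel(IntEnum):
        PROCESS = 0         # the physical process
        DIRECT_CONTROL = 1  # I/O, PLCs, single-loop controllers
        SUPERVISION = 2     # DCS/SCADA and HMI
        PRODUCTION = 3      # plantwide operations (MES/MOM)
        ENTERPRISE = 4      # business planning and ERP

    # A hypothetical flow-transmitter reading annotated with its level.
    reading = {"tag": "FT-101", "value": 42.7, "units": "m3/h",
               "level": PurdueLevel.DIRECT_CONTROL}

    # An integration layer might route data by level; for example, only
    # data at or below Level 2 is written to the plant historian.
    if reading["level"] <= PurdueLevel.SUPERVISION:
        print(f"historize {reading['tag']} from Level {int(reading['level'])}")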
1.5 Manufacturing execution systems and manufacturing operations management

C. Gifford, S. L. S. Worthington, and W. H. Boyes
1.5.1 Introduction

For more than 30 years, companies have found extremely valuable productivity gains in automating their plant floor processes.
Figure 1.3 The MESA Model (version 2.1): strategic initiatives (Lean Manufacturing, Project Lifecycle Mgmt, Quality and Regulatory Compliance, Asset Performance Mgmt, Real Time Enterprise, and additional initiatives) set objectives for, and aggregate results from, Business Operations (financial and performance focused: ERP, BI; product focused: CAD, CAM, PLM; compliance focused: document management, ISO, EH&S; supply focused: procurement, SCP; asset reliability focused: EAM, CMMS; customer focused: CRM, service management; logistics focused: TMS, WMS), which queries data from Manufacturing/Production Operations (resource allocation and status, dispatching production units, data collection/acquisition, quality management, labor management, process management, product tracking and genealogy, performance analysis), which in turn exchanges data and events with controls (PLC, DCS).
This has been true whether the plant floor processes were continuous, batch, hybrid, or discrete manufacturing. No manufacturing company anywhere in the world would consider operating its plant floor processes manually, except under extreme emergency conditions, and even then a complete shutdown of all systems would be preferable to many plant managers.

From the enterprise level in the Purdue Model (see Figure 1.2) down to the plant floor, and from the plant floor up to the enterprise level, there are significant areas in which connectivity and real-time information transfer improve performance and productivity.
1.5.2 Manufacturing Execution Systems (MES) and Manufacturing Operations Management (MOM)

At one time, the theoretical discussion was limited to connecting the manufacturing floor to the production scheduling systems and then to the enterprise accounting systems. This was referred to as manufacturing execution systems, or MES. MES implementations were difficult and often returned less than the expected return on investment (ROI). Recent thought has centered on a new acronym, MOM, which stands for manufacturing operations management. But even MOM does not completely describe a fully connected enterprise.
The question that is often asked is, “What are the benefits of integrating the manufacturing enterprise?” In “Mastering the MOM Model,” a chapter written for the next edition of The Automation Book of Knowledge, to be published in 2009, Charlie Gifford writes, “Effectiveness in manufacturing companies is only partially based on equipment control capability. In an environment that executes as little as 20% make-to-order orders (80% make-to-stock), resource optimization becomes critical to effectiveness. Manufacturing companies must be efficient at coordinating and controlling personnel, materials and equipment across different operations and control systems in order to reach their maximum potential.”7
1.5.3 The Connected Enterprise8

To transform a company into a true multidomain B2B extended enterprise is a complex exercise. Here we have provided a 20-point checklist that will help transform a traditional manufacturing organization into a fully extended
7. Gifford, Charlie, “Mastering the MOM Model,” from The Automation Book of Knowledge, 3rd ed., to be published by ISA Press in 2009. 8. This section contains material that originally appeared in substantially different form in e-Business in Manufacturing, Shari L. S. Worthington and Walt Boyes, ISA Press, 2001.
enterprise with a completely integrated supply chain and full B2B data interchange capability.

Figure 1.4 ISA-95 generic detailed work activity model (Part 3) for MOM, ANSI/ISA-95. Used with permission of ISA.

To begin, a company must develop, integrate, and maintain metrics vertically through the single domain of the enterprise and then horizontally across the extended domains of the company's supplier and customer base. At each step of the process, the company must achieve buy-in, develop detailed requirements and business cases, and apply the metrics to determine how much progress has been made toward an extended enterprise. The following details the key points that a company must include in any such checklist for measuring its progress toward becoming a true e-business.

● Achieve upper management buy-in. Before anything else, upper management must be educated in the theory of manufacturing enterprise integration. The entire C-level (CEO, COO, CFO, CIO, etc.) must be clear that integration of the enterprise is a process, not a project, that has great possibility of reward but also a possibility of failure. The board and the executive committee must be committed to, and clearly understand, the process of building an infrastructure for cultural change in the corporation. They must also understand the process that must be followed to become a high-response extended enterprise.

● Perform a technology maturity and gap analysis. The team responsible for enterprise transformation has to perform two assessments before it can proceed to implementation:
A. Feasibility Assessment:
1. Identify the main corporate strategy for the extended enterprise.
2. Identify the gap between the technologies and infrastructure that the company currently uses and those that are necessary for an extended enterprise.
3. Identify the current and future role of the internal supply chain.
4. Identify the gap between the technologies and infrastructure that are currently available and those that are required to fully integrate the enterprise supply chain.
5. Identify all the corporate views of the current and future state of the integrated supply chain and clearly define their business roles in the extended enterprise model.
6. Determine the costs/benefits of switching to an integrated supply chain model for the operation of the extended enterprise.

B. Requirements Assessment:
1. Develop a plan to scale the IT systems, enterprisewide, and produce a reliability analysis.
2. Determine the probable failure points and data storage and bandwidth load requirements.
3. Perform an assessment of the skill sets of the in-house team and determine how to remedy any gaps identified.

Once the preliminary work is done and upper management has been fully educated about and has bought into the project, the enterprise team can begin to look at specific tactics.

● Identify the business drivers and quantify the return on investment (ROI) for the enterprise's integrated supply chain strategy.
1. Perform a supply chain optimization study to determine the business drivers and choke points for the supply chain, both vertically in the enterprise itself and horizontally across the supplier and customer base of the enterprise.
2. Perform a plant-level feasibility assessment to identify the requirements for supply chain functionality at the plant level in the real world.
3. Create a preliminary data interchange model and requirements.
4. Do a real-world check by determining the ROI of each function in the proposed supply chain integration.
5. Make sure that the manufacturing arm of the enterprise is fully aligned with the supply chain business strategy and correct any outstanding issues.

● Develop an extended enterprise and supply chain strategy and model that is designed to be "low maintenance."
1. Write the integrated supply chain strategy and implementation plan, complete with phases, goals, and objectives, and the metrics for each.
2. Write the manufacturing execution infrastructure (MEI) plan. This process includes creating a project steering team, writing an operations manual for the steering team, developing communications criteria, creating a change management system, and establishing a version control system for both software and manuals. This plan should also include complete details about the document management system, including the documentation standards needed to fulfill the project goals.
3. Last, the MEI plan should include complete details about the management of the project, including chain of command, responsibilities, authority, and reporting.
Once the plan is written, the enterprise team can move to the physical pilot stage. At this point, every enterprise will need to do some experimenting to determine the most effective way to integrate the supply chain and become an extended enterprise.

● Develop an MEI plan. In the next phase, the team must develop a manufacturing execution infrastructure (MEI) plan during the pilot phase that will respond appropriately to the dynamic change/knowledge demands of a multidomain environment. In the pilot phase, the earlier enterprisewide planning cycle takes concrete form in a single plant environment. So we will see some of the same assessments and reports, but this time they will be focused on a single plant entity in the enterprise.
1. Perform a plant-level feasibility and readiness assessment.
2. Perform a plant-level functional requirements assessment that includes detailed requirements for the entire plant's operations and the business case and ROI for each requirement.
3. Determine the pilot project's functionality and the phases, goals, and objectives that are necessary to achieve the desired functionality.
4. Draft a plant manufacturing execution system (MES) design specification.
5. Determine the software selection criteria for integrating the supply chain.
6. Apply the criteria and produce a software selection process report.
7. Design and deploy the pilot manufacturing execution system.
8. Benchmark the deployment and produce a performance assessment.
9. Perform a real-world analysis of what was learned and earned by the pilot project, and determine what changes are required to the pilot MOM model.
Once the enterprise team has worked through several iterations of pilot projects, it can begin to design an enterprisewide support system.
● Design a global support system (GSS) for manufacturing. In an integrated manufacturing enterprise, individual plants and departments cannot be allowed to do things differently than the rest of the enterprise. Global standards for work rules, training, operations, maintenance, and accounting must be designed, trained on, and adhered to, with audits and accountability, or the enterprise integration project will surely fail.
1. Draft a GSS design specification.
2. Draft a complete set of infrastructure requirements that will function globally throughout the enterprise.
3. Draft an implementation plan and deploy the support system within the enterprise.
● Finalize the development of data interchange requirements between the MES, ERP, SCM, CRM, and logistics and fulfillment applications by using the data acquired during the pilot testing of plantwide systems.
1. Develop an extended strategy for an enterprise application interface (EAI) layer that may incorporate the company’s suppliers and customers into a scalable XML data set and schema. Finalize the specifications for the integrated supply chain (ISC) data interchange design.
2. Figure out all the ways the project can fail, with a failure mode and effects analysis (FMEA). Now, in the period before the enterprisewide implementation, the enterprise team must go back to the pilot process. This time, however, they are focusing on a multiplant extended enterprise pilot project.
3. Extend the MEI, global support system (GSS), EAI layer, and the FMEA into a multiplant, multidomain pilot project.
● Perform a multiplant plant-level feasibility and readiness assessment.
● Identify detailed requirements for all the plants in the project, including business case and ROI for each requirement.
● Draft a multiplant design specification.
● Draft an integrated supply chain specification for all the plants in the project.
● Draft and deploy the multiplant MES.
● Draft and deploy the multiplant integrated supply chain (ISC).
● Benchmark and measure the performance of the system “as designed.”
Finally, it becomes possible for the enterprise team to develop the corporate deployment plan and to fully implement the systems they have piloted throughout the enterprise. To do this successfully, they will need to add some new items to the “checklist”:
1. Develop rapid application deployment (RAD) teams to respond as quickly as possible to the need for enterprisewide deployments of new required applications.
2. Develop and publish the corporate deployment plan. The company’s education and training requirements are critical to the success of an enterprisewide deployment of new integrated systems. A complete enterprise training plan must be created that includes end-user training, superuser training, MEI and GSS training, and the relevant documentation for each training program.
3. The multiplant pilot MEI and GSS must be scaled to corporatewide levels. This project phase must include determining enterprisewide data storage and load requirements, a complete scaling and reliability plan, and benchmark criteria for the scale-up.
4. Actually deploy the MEI and GSS into the enterprise and begin widespread use.
Next, after it has transformed the enterprise’s infrastructure and technology, the enterprise team can look outward to the supplier and customer base of the company.
1. Select preferred suppliers and customers that can be integrated into the enterprise system’s EAI/XML schema. Work with each supplier and customer to implement the data-interchange process and train personnel in using the system.
2. Negotiate the key process indicators (KPI) with supply chain partners by creating a stakeholders’ steering team that consists of members of the internal and external supply chain partners. Publish the extended ISC data interchange model and requirements through this team. Have this team identify detailed requirements on a plant-by-plant basis, including a business case and ROI for each requirement.
3. This steering team should develop the specifications for the final pilot test: a pilot extended enterprise (EE), including ISC pilot implementation specifications and a performance assessment of the system “as designed.”
4. Develop and track B2B performance metrics using the pilot EE as a model, and create benchmarks and tracking criteria for a full implementation of the extended enterprise.
5. Develop a relationship management department to administer and expand the B2B extended enterprise.
Suggested Reading Gifford, Charles, The Hitchhiker’s Guide to Manufacturing Operations Management, ISA Press, Research Triangle Park, NC, 2007.
Chapter 2
Basic Principles of Industrial Automation W. Boyes
2.1 Introduction There are many types of automation, broadly defined. Industrial automation, and to some extent its related discipline of building automation, carries some specific principles. The most important skills and principles are discussed in Chapter 1 of this book. It is critical to recognize that industrial automation differs from other automation strategies, especially in the enterprise or office automation disciplines. Industrial automation generally deals with the automation of complex processes, in costly infrastructure programs, and with design life cycles in excess of 30 years. Automation systems installed at automotive assembly plants in the late 1970s were still being used in 2008. Similarly, automation systems installed in continuous and batch process plants in the 1970s and 1980s continued to be used in 2008. Essentially, this means that it is not possible to easily perform rip-and-replace upgrades to automation systems in industrial controls, whereas it is simpler in many cases to do such rip-and-replace in enterprise automation systems, such as sales force automation or even enterprise resource planning (ERP) systems, when a new generation of computers is released or when Microsoft releases a new version of Windows. In the industrial automation environment, these kinds of upgrades are simply not practical.
2.2 Standards Over the past three decades, there has been a strong movement toward standards-based design, both of field instruments and controls themselves and the systems to which they belong. The use of and the insistence on recognized standards for sensor design, control system operation, and system design and integration have reduced costs, improved reliability, and enhanced productivity in industrial automation. There are several standards-making bodies that create standards for industrial automation. They include the
International Electrotechnical Commission (IEC) and the International Standards Organization (ISO). Other standards bodies include CENELEC, EEMUA, and the various national standards bodies, such as NIST, ANSI, the HART Communication Foundation, and NEC in the United States; BSI in the United Kingdom; CSA in Canada; DIN, VDE, and DKE in Germany; JIS in Japan; and several standards organizations belonging to the governments of China and India, among others. For process automation, one of the significant standards organizations is ISA, the International Society of Automation. ISA’s standards are in use globally for a variety of automation operations in the process industries, from process and instrumentation diagram symbology (ISA5 and ISA20) to alarm management (ISA18) to control valve design (ISA75), fieldbus (ISA50), industrial wireless (ISA100), and cyber security for industrial automation (ISA99). Three of the most important global standards developed by ISA are the ISA84 standard on safety instrumented systems, the ISA88 standard on batch manufacturing, and the ISA95 standard on manufacturing operations language. Other organizations that are similar to standards bodies but do not make actual standards include NAMUR, OMAC, WBF (formerly World Batch Forum), WIB (the Instrument Users’ Association), and others. In addition, with the interpenetration of COTS computing devices in industrial automation, IEEE standards, as well as standards for the design and manufacture of personal computers, have become of interest and importance to the automation professional. There are also de facto standards such as the Microsoft Windows operating system, OPC (originally a Microsoft “standard” called Object Linking and Embedding for Process Control, or OLE for Process Control, and now called simply OPC), and OPC UA (Unified Architecture). It is important for the process automation professional to keep current with standards that impinge on the automation system purview.
2.3 Sensor and system design, installation, and commissioning It is not generally in the purview of the automation professional to actually design sensors. This is most commonly done by automation and instrumentation vendors. What are in the purview of the automation professional are system design, installation, and commissioning. Failure to correctly install a sensor or final control element can lead to serious consequences, including damage to the sensor, the control element, or the process and infrastructure themselves.
2.3.1 The Basics The basics of sensor and system design are:
● Identification of the application
● Selection of the appropriate sensor/transmitter
● Selection of the final control element
● Selection of the controller and control methodology
● Design of the installation
● Installing, commissioning, and calibrating the system
2.3.2 Identification of the Application Most maintenance problems in automation appear to result from improper identification of the application parameters. This leads to incorrect selection of sensors and controls and improper design of the installation. For example, it is impossible to produce an operational flow control loop if the flowmeter is being made both inaccurate and nonlinear by having been installed in a location immediately downstream of a major flow perturbation producer such as a butterfly valve or two 90-degree elbows in series. The most common mistake automation professionals make is to start with their favorite sensors and controls and try to make them fit the application.
2.3.3 Selection of the Appropriate Sensor/Transmitter The selection of the most appropriate sensor and transmitter combination is another common error point. Once the application parameters are known, it is important to select the most correct sensor and transmitter for those parameters. There are 11 basic types of flow measurement devices and a similar number of level measurement principles being used in modern automation systems. This is because it is often necessary to use a “niche” instrument in a particular application. There are very few locations where a gamma nuclear-level gauge is the most correct device to measure level, but there are a number where the gamma nuclear principle is the only practical way to achieve the measurement. Part of the automation professional’s skill set is the applications knowledge and expertise to be able to make the proper selection of sensors and transmitters.
2.3.4 Selection of the Final Control Element Selection of the final control element is just as important as selection of the transmitter and sensor and is equally based on the application parameters. The final control element can be a control valve, an on/off valve, a temperature control device such as a heater, or a pump in a process automation application. It can be a relay, a PLC ladder circuit, or a stepper motor or other motion control device in a discrete automation application. Whatever the application, the selection of the final control element is critical to the success of the installation. Sometimes, too, the selection of the final control element is predicated on factors outside the strict control loop. For example, the use of a modulating control valve versus the use of a variable-speed drive-controlled pump can make the difference between high energy usage in that control loop and less energy usage. Sometimes this difference can represent a significant cost saving.
2.3.5 Selection of the Controller and Control Methodology Many automation professionals forget that the selection of the control methodology is as important as the selection of the rest of the control loop. Using an advanced process control system over the top of a PID control loop when a simple on/off deadband control will work is an example of the need to evaluate the controller and the control methodology based on the application parameters.
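To make the contrast concrete, the deadband logic described above fits in a few lines. The following is a minimal sketch, assuming arbitrary example values for the set point and deadband; it is illustrative, not a production controller:

```python
# Minimal on/off controller with deadband (hysteresis): the output only
# switches once the measurement leaves a band around the set point,
# which prevents rapid cycling of the final control element.
# The set point and deadband values are arbitrary examples.

class DeadbandController:
    def __init__(self, setpoint, deadband):
        self.setpoint = setpoint
        self.deadband = deadband
        self.output_on = False

    def update(self, pv):
        if pv < self.setpoint - self.deadband:
            self.output_on = True    # e.g., heater on
        elif pv > self.setpoint + self.deadband:
            self.output_on = False   # heater off
        # inside the band: hold the last state
        return self.output_on

ctrl = DeadbandController(setpoint=75.0, deadband=2.0)
for temperature in (70.0, 74.0, 76.5, 78.0, 74.0):
    print(temperature, ctrl.update(temperature))
```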
2.3.6 Design of the Installation As important as any other factor, properly designing the installation is critical to the implementation of a successful control loop. Proper design includes physical design within the process. Not locating the sensor at an appropriate point in the process is a common error point. Locating a pH sensor on the opposite side of a 1,000-gallon tank from the chemical injection point is an example. The pH sensor will have to wait until the chemical injection has changed the pH in the entire vessel as well as the inlet and outlet piping before it sees the change. This could take hours. A loop lag time that long will cause the loop to be dysfunctional. Another example of improper location is to locate the transmitter or final control element in a place where it is difficult or impossible for operations and maintenance personnel to reach it after startup. Installations must be designed with an eye to ease of maintenance and calibration. A sensor mounted 40 feet off the ground that requires a cherry-picker crane to reach isn’t a good installation. Another example of improper installation is to place a device, such as a flowmeter, where the process flow must
be stopped to remove the flowmeter for repair. Bypass lines should be installed around most sensors and final control elements.
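The severity of the pH example earlier in this section can be put in rough numbers with a back-of-the-envelope residence-time estimate; the throughput figure below is an assumed example, not a design rule:

```python
# Rough loop-lag estimate for the 1,000-gallon tank pH example: for a
# well-mixed vessel, the dominant loop dead time is on the order of the
# residence time V/Q. The throughput figure is an assumed example.
GALLONS_PER_M3 = 264.17

volume_m3 = 1000 / GALLONS_PER_M3     # 1,000-gallon tank, ~3.8 m3
flow_m3_per_h = 1.5                   # assumed throughput through the tank
residence_time_h = volume_m3 / flow_m3_per_h
print(f"Approximate loop dead time: {residence_time_h:.1f} h")  # ~2.5 h
```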
2.3.7 Installing, Commissioning, and Calibrating the System Installation of the system needs to be done in accordance with the manufacturers’ instructions, good trade craft practices, and any applicable codes. In hazardous areas, applicable national electrical codes as well as any plant-specific codes must be followed. Calibration should be done during commissioning and at regularly scheduled intervals over the life cycle of the installation.
2.4 Maintenance and operation 2.4.1 Introduction Automation professionals in the 21st century may find themselves working in maintenance or operations rather than in engineering, design, or instrumentation and controls. It is important for automation professionals to understand the issues and principles of maintenance of automation systems, in both continuous and batch process and discrete factory automation. These principles are similar to equipment maintenance principles and have changed from maintenance practices of 20 years ago. Then maintenance was done on a reactive basis—that is, if it broke down, it was fixed. In some cases, a proactive maintenance scheme was used. In this practice, critical automation assets would be replaced at specific intervals, regardless of whether they were working or not. This led to additional expense as systems and components were pulled out when they were still operational. Recent practice has become that of predictive maintenance. Predictive maintenance uses the recorded trends of physical measurements compared to defined engineering limits to determine how to analyze and correct a problem before failure occurs. This practice, where asset management software is used along with sensor and system diagnostics to determine the point at which the automation asset must be replaced, is called life-cycle maintenance or life-cycle optimization.
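As a rough illustration of the trend-versus-limit idea, the sketch below fits a straight line to recorded readings and projects when an engineering limit will be crossed. The measurement, limit, and linear trend model are illustrative assumptions, not a prescription from any particular asset management package:

```python
# Minimal sketch: project when a trended measurement will cross its
# engineering limit, so a work order can be raised before failure.
# Readings, interval, and limit are illustrative assumptions.

def hours_until_limit(readings, interval_h, limit):
    """Fit a straight line to equally spaced readings and estimate
    how many hours remain until the trend crosses the limit."""
    n = len(readings)
    xs = [i * interval_h for i in range(n)]
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings)) \
            / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward trend; no predicted crossing
    intercept = y_mean - slope * x_mean
    t_cross = (limit - intercept) / slope
    return max(0.0, t_cross - xs[-1])

# Pump bearing vibration (mm/s RMS) logged every 24 h against a 7.1 mm/s limit
vibration = [2.8, 3.0, 3.4, 3.9, 4.5]
remaining = hours_until_limit(vibration, 24.0, 7.1)
if remaining is not None and remaining < 14 * 24:
    print(f"Raise work order: limit predicted in {remaining:.0f} h")
```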
2.4.2 Life-cycle Optimization In any automation system, there is a recognized pattern to the life cycles of all the components. This pattern forms the well-known “bathtub curve.” There are significant numbers of “infant mortality” failures at the start of the curve; then, as each component ages, there are relatively few failures. Close to the end of the product’s life span, the curve rises, forming the other side of the “bathtub.” Using predictive maintenance techniques, it is possible to improve the
operational efficiency and availability of the entire system by monitoring physical parameters of selected components. For example, it is clear that the mean time between failures (MTBF) of most electronics is significantly longer than the design life of the automation system as a whole, after infant mortality. This means that it is possible to essentially eliminate the controller as a failure-prone point in the system and concentrate on what components have much shorter MTBF ratings, such as rotating machinery, control valves, and the like.
2.4.3 Reliability Engineering For the automation professional, reliability is defined as the probability that an automation device will perform its intended function over a specified time period under conditions that are clearly understood. Reliability engineering is the branch of engineering that designs to meet a specified probability of performance, with an expressed statistical confidence level. Reliability engineering is central to the maintenance of automation systems.
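Under the common constant-failure-rate assumption (the flat bottom of the bathtub curve), this definition reduces to R(t) = e^(−t/MTBF). A minimal sketch, with an assumed MTBF figure rather than a vendor rating:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability a device is still performing at time t, assuming a
    constant failure rate (the flat bottom of the bathtub curve)."""
    return math.exp(-t_hours / mtbf_hours)

# Assumed example: transmitter electronics with a 250,000 h MTBF,
# evaluated over a 10-year (87,600 h) design life.
print(f"R(10 y) = {reliability(87_600, 250_000):.3f}")  # ~0.704
```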
2.4.4 Asset Management, Asset Optimization, and Plant Optimization Asset management systems have grown into detailed, layered software systems that are fully integrated into the sensor networks, measure parameters such as vibration and software diagnostics from sensors and final control elements, and are even integrated into the maintenance systems of plants. A modern asset management system can start with a reading on a flow sensor that is out of range, be traced to a faulty control valve, and initiate a work order to have the control valve repaired, all without human intervention. This has made it possible to perform workable asset optimization on systems as large and complex as the automation and control system for a major refinery or chemical process plant. Using the techniques of reliability engineering and predictive maintenance, it is possible to maximize the amount of time that the automation system is working properly—the uptime of the system. Asset optimization is conjoined to another subdiscipline of the automation professional: plant optimization. Using the control system and the asset management system, it is possible to operate the entire plant at its maximum practical level of performance.
Suggested Reading Mather, Darryl, Lean Strategies for Asset Reliability: Asset Resource Planning, Industrial Press Inc., 2009. EAM Resource Center, The Business Impact of Enterprise Asset Management, EAM, 2008.
Chapter 3
Measurement Methods and Control Strategies W. Boyes
3.1 Introduction

Measurement methods for automation are somewhat different from those designed for use in laboratories and test centers. Specific to automation, measurement methods that work well in a laboratory might not work at all in a process or factory-floor environment. It might not be possible to know all the variables acting on a measurement to determine the degree of error (uncertainty) of that measurement. For example, a flowmeter may be calibrated at the factory with an accuracy of ±0.05% of actual flow. Yet in the field, once installed, an automation professional might be lucky to be able to calibrate the flowmeter to ±10% of actual flow because of the conditions of installation. Because control strategies are often continuous, it is often impossible to remove the sensor from the line for calibration.

3.2 Measurement and field calibration methodology

In many cases, then, field calibration methods are expedients designed to determine not the absolute accuracy of the measurement but the repeatability of the measurement. Especially in process applications, repeatability is far more critical to the control scheme than absolute accuracy. It is often not possible to do more than one or two calibration runs in situ in a process application. It often means that the calibration and statistical repeatability of the transmitter is what is checked in the field, rather than the accuracy of the entire sensor element.

3.3 Process control strategies

Basic process control strategies include on/off control; deadband control; proportional, integral, derivative (PID) control; and its derivatives. On/off control is simple and effective but may not be able to respond to rapid changes in the measured variable (known as PV, or process variable). The next iteration is a type of on/off control called deadband, or hysteresis control. In this method, either the “on” or the “off” action is delayed until a prescribed limit set point is reached, either ascending or descending. Often multiple limit set points are defined, such as a level application with “high” level, “high-high” level, and “high-overflow” level set points. Each of the set points is defined as requiring a specific action. Feedback control is used with a desired set point from which deviation is not desired. When the measured variable deviates from the set point, the controller output drives the measured variable back toward the set point. Most of the feedback control algorithms in use are some form of PID algorithm, of which there are three basic types: the standard, or ideal, form, sometimes called the ISA form; the interactive form, which was the predominant form for analog controllers; and the parallel form, which is rarely found in industrial process control. In the PID algorithm, the proportional term provides most of the control while the integral function and the derivative function provide additional correction. In practice, the proportional and integral terms do most of the control; the derivative term is often set to 0. PID loops contain one measured variable, one controller, and one final control element. This is the basic “control
loop” in automation. PID loops need to be “tuned”; there are several tuning algorithms, such as Ziegler-Nichols and others, that allow the loop to be tuned. Many vendors today provide automatic loop-tuning products in their control software offerings. PID feedback controllers work well when there are few process disturbances. When the process is upset or is regularly discontinuous, it is necessary to look at other types of controllers. Some of these include ratio, feed forward, and cascade control. In ratio control, which is most often found in blending of two process streams, the basic process stream provides the pacing for the process while the flow rates for the other streams are modulated to make sure that they are in a specific ratio to the basic process stream. Feed forward control, or open loop control, uses the rate of fall-off from the set point (a disturbance in the process) to manipulate the controlled variable. An example is the use of a flowmeter to control the injection of a chemical additive downstream of the flowmeter. There must be some model of the process so that the effect of the flow change can be used to induce the correct effect on the process downstream. Combining feed forward and feedback control in one integrated control loop is called cascade control. In this scheme, the major correction is done by feed forward control, and the minor correction (sometimes called trim) is done by the feedback loop. An example is the use of flow to control the feed of a chemical additive while using an analyzer downstream of the addition point to modulate the set point of the flow controller.
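As a concrete illustration, here is a minimal discrete-time sketch of the standard (ISA) form described above, with the derivative term set to zero as the text notes is common in practice. The gains, sample time, and toy process response are illustrative, untuned values:

```python
# Minimal discrete PID in the standard (ISA) form:
# output = Kc * (e + (1/Ti) * integral(e) + Td * de/dt).
# Gains and sample time are illustrative, untuned values.

class PID:
    def __init__(self, kc, ti, td, dt):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kc * (error
                          + self.integral / self.ti
                          + self.td * derivative)

# Proportional-plus-integral control only: Td = 0, as is common in practice.
loop = PID(kc=2.0, ti=30.0, td=0.0, dt=1.0)
pv = 40.0
for _ in range(5):
    out = loop.update(setpoint=50.0, pv=pv)
    pv += 0.05 * out        # crude first-order stand-in for the process
    print(f"output={out:6.2f}  pv={pv:6.2f}")
```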
3.4 Advanced control strategies

Since the 1960s, advances in modeling the behavior of processes have permitted a wholly new class of control strategies, called advanced process control, or APC. These control strategies are almost always layered over the basic PID algorithm and the standard control loop. These APC strategies include fuzzy logic, adaptive control, and model predictive control. Conceived in 1964 by University of California at Berkeley scientist Lotfi Zadeh, fuzzy logic is based on the concept of fuzzy sets, where membership in the set is based on probabilities or degrees of truth rather than “yes” or “no.”1 Because multiple fuzzy logic sets appear to be able to learn, they are often regarded as a crude form of artificial intelligence. In process automation, only four rules are required for a fuzzy logic controller:2

Rule 1: If the error is negative and the change in error is negative, the change in output is positive.
Rule 2: If the error is negative and the change in error is positive, the change in output is zero.
Rule 3: If the error is positive and the change in error is negative, the change in output is zero.
Rule 4: If the error is positive and the change in error is positive, the change in output is negative.
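Read literally, the four rules amount to an incremental controller keyed to the signs of the error and its change. The sketch below renders them crisply; a true fuzzy controller would blend the rules through membership functions, and the output step size here is an arbitrary illustrative value:

```python
# Literal, crisp rendering of the four rules as an incremental controller.
# Each rule fires on the sign of the error and of its change; zero is
# folded into the "positive" branches as a simplification of real
# membership functions. The output step size is an arbitrary example.

def rule_based_delta(error, delta_error, step=1.0):
    if error < 0 and delta_error < 0:
        return +step   # Rule 1
    if error < 0 and delta_error >= 0:
        return 0.0     # Rule 2
    if error >= 0 and delta_error < 0:
        return 0.0     # Rule 3
    return -step       # Rule 4

output = 50.0
prev_error = 0.0
for error in (-2.0, -1.5, -0.5, 0.4, 1.0):
    output += rule_based_delta(error, error - prev_error)
    prev_error = error
    print(f"error={error:+.1f}  output={output:.1f}")
```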
Adaptive control is somewhat loosely defined as any algorithm in which the controller’s tuning has been altered. Another term for adaptive controllers is self-tuning controllers. Model predictive control uses historicized incremental models of the process to be controlled where the change in a variable can be predicted. When the MPC controller is initialized, the model parameters are set to match the actual performance of the plant. According to Gregory McMillan, “MPC sees future trajectory based on past moves of manipulated variables and present changes in disturbance variables as inputs to a linear model. It provides an integral-only type of control.”3 These advanced control strategies can often improve loop performance but, beyond that, they are also useful in optimizing performance of whole groups of loops, entire processes, and even entire plants themselves.

Suggested Reading
Dieck, Ronald H., Measurement Uncertainty, Methods and Applications, 4th ed., ISA Press, Research Triangle Park, NC, 2007.
Trevathan, Vernon L., editor, A Guide to the Automation Body of Knowledge, 2nd ed., ISA Press, Research Triangle Park, NC, 2006.
Blevins, Terry, and McMillan, Gregory, et al., Advanced Control Unleashed: Plant Performance Management for Optimum Benefit, ISA Press, Research Triangle Park, NC, 2003.

1. Britannica Concise Encyclopedia, quoted in www.answers.com
2. McMillan, Gregory K., “Advanced Process Control,” in A Guide to the Automation Body of Knowledge, 2nd ed., ISA Press, Research Triangle Park, NC, 2006
3. Ibid
Chapter 4
Simulation and Design Software1 M. Berutti
4.1 Introduction Chemical engineers are accustomed to software for designing processes and simulation. Simulation systems such as Matlab and Aspen Plus are commonly referenced in chemical engineering curricula as required courseware and study tools. Automation professionals are also becoming used to applying simulation to operator training, system testing, and commissioning of plant process control systems. Plant design simulation programs are substantially different from systems used for training and commissioning. Many of the most common plant design simulation programs are steadystate, low-resolution simulations that are not usable for automation or plant life-cycle management.
4.2 Simulation Simulation is usually integrated into the plant life cycle at the front-end engineering and design stage and used to test application software using a simulated I/O system and process models in an offline environment. The same simulation system can then be used to train operations staff on the automation and control systems and the application software that will be running on the hardware platform. In the most advanced cases, integration with manufacturing execution systems (MES) and electronic batch records (EBR) systems can be tested while the operations staff is trained. Once installed, the simulation system can be used to test and validate upgrades to the control system before they are installed. The simulator then becomes an effective tool for testing control system modifications in a controlled, offline environment. In addition, plant operations staff and new operators can become qualified on new enhancements and
1. This material appeared in somewhat different form in a white paper, “Optimizing Results in Automation Projects with Simulation,” by Martin Berutti, Mynah Technologies, 2006, and published on www.controlglobal.com
certified on the existing operating system. The simulation system can be used as a test bed to try new control strategies, build new product recipes, and design new interlock strategies prior to proposing those changes as projects to production management. The simulator can also be an effective risk management tool by providing the ability to conduct failure testing in an offline environment rather than on the operating process. Simulation’s ROI has been proven to be effectual and substantial across all process industries, from batch to continuous processes. The savings come from identifying and correcting automation system errors prior to startup and commissioning and by identifying so-called “sleeping” errors, or inadequacies in the system’s application software. Additional savings come from accelerating operators’ learning curves and the ability to train operators on upset or emergency conditions they normally do not encounter in day-to-day operations. This reduces operator errors in responding to abnormal situations.
4.3 Best practices for simulation systems in automation Nonintrusive simulation interfaces allow the user to test the control system configuration without making any modifications to the configuration database. As far as the operator is concerned, the system is “live”; as far as the database is concerned, there are no changes. By nature, a nonintrusive simulation interface will provide a virtual I/O interface to the process controller application code that supports I/O and process simulation and modeling. It allows the application software to run in a normal mode without any modification so that the testing application software is identical to the production application software. In addition, the nonintrusive interface will not produce “dead code” that formerly was necessary to spoof the control system during testing. This type of simulation interface supports complete and thorough testing of the application software and provides
a benchmark of process controller performance, including CPU and memory loading, application software order execution, and timing.
4.4 Ground-up testing and training Ground-up testing and training is an incremental approach, integrated with the automation project life cycle. This approach to application software testing and training has several benefits. Ground-up testing allows identification and correction of issues early in the project, when they can be corrected before being propagated throughout the system. Additionally, ground-up testing allows the end-user and operations staff to gain acceptance and familiarity with the automation system throughout the entire project instead of at one final acceptance test. Best practices dictate that training and testing are inextricably linked throughout the automation project. Here are some general guidelines for following this best practice:
● Control modules are the base-level database elements of the process automation system. These include motors, discrete valves, analog loops, and monitoring points. Testing of these elements can be effectively accomplished with simple tieback simulations and automated test scripts. Operator training on these elements can also bolster buy-in of the automation system and provide a review of usability features.
● Equipment modules are the next level of an automation system and generally refer to simple continuous unit operations such as charging paths, valve manifolds, and package equipment. Testing these elements can be effectively accomplished with tieback simulations and limited process dynamics. Operator training on these elements is also valuable.
● Sequence, batch controls, and continuous advanced controls are the next layer of the automation system. Effective testing of these controls generally requires a mass balance simulation with effective temperature and pressure dynamics models. Operator training at this level is necessary due to the complexity of the controls and the user interface.
● MES applications and business system integration is the final layer of most automation systems. Mass and heat balance models are usually required for effective testing of this layer. Training at this level may be extended beyond the operations staff to include quality assurance, information technology, and other affected departments.
● Display elements for each layer of the automation should be tested with the database elements listed here. In other words, control module faceplates are tested with the control modules, and batch help screens are tested with the batch controls.
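As an illustration of the first guideline, a control-module tieback can be as simple as echoing the commanded output back to the module’s feedback inputs after a travel delay, with an automated test script asserting the expected states. The tag names, delay, and assertions below are invented for illustration; a real test would drive the controller’s virtual I/O rather than a local object:

```python
import time

# Minimal tieback simulation for a discrete valve control module: the
# commanded output is echoed back to the open/closed limit-switch
# inputs after a simulated travel delay. ZSO/ZSC tag names, the delay,
# and the test script are invented for illustration only.

class ValveTieback:
    def __init__(self, travel_time_s=0.1):
        self.travel_time_s = travel_time_s
        self.opened = False

    def command(self, open_valve):
        time.sleep(self.travel_time_s)  # simulated valve travel
        self.opened = open_valve
        return {"ZSO": self.opened, "ZSC": not self.opened}

def test_valve_module():
    valve = ValveTieback()
    feedback = valve.command(open_valve=True)
    assert feedback["ZSO"] and not feedback["ZSC"], "open confirm failed"
    feedback = valve.command(open_valve=False)
    assert feedback["ZSC"] and not feedback["ZSO"], "close confirm failed"
    print("valve control module tieback test passed")

test_valve_module()
```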
4.5 Simulation system selection The proven best practice is to use actual automation system controllers (or equivalent soft controllers) and application software with a simulation “companion” system. Simulation systems that use a “rehosted” automation system should be eliminated for system testing and avoided for operator training. Use of the actual automation system components with a simulation system allows effective testing and training on HMI use, display access familiarity, process and emergency procedures, response to process upsets, and control system dynamics. This approach builds automation system confidence in operations staff, resulting in more effective use of the automation system and greater benefits.
4.6 Simulation for automation in the validated industries Automation system users and integrators for the validated industries need to be concerned about the GAMP4 Guidelines when they are applying simulation systems to their automation projects. The GAMP4 Guidelines clearly state that simulation systems are allowable tools for automation system testing. The guidelines also make two requirements for the treatment of the automation system application software. First, they require that the application software be “frozen” prior to software integration and system acceptance testing. Second, they require the removal of “dead code” prior to testing. These two requirements dictate the use of nonintrusive simulation interfaces. The GAMP4 Guidelines also state several requirements for the supplier of simulation systems for testing of automation projects. The supplier must have a documented quality and software development program in line with industry best practices. The product used should be designed specifically for process control system testing and operator training. Finally, the product should be a commercially available, off-the-shelf (COTS) tool, delivered in validated, tested object code. Additionally, operator training management modules allow comprehensive development of structured operator training sessions with scripted scenarios and process events. Large or small automation projects can both use a simulation system with a scalable client/server architecture and a stable, repeatable simulation engine.
4.7 Conclusion The use of simulation systems for testing and training process automation projects has been proven to reduce time to market and increase business results. The same systems can be utilized in automation life-cycle management to reduce operational costs and improve product quality.
Chapter 5
Security for Industrial Automation1 W. Boyes and J. Weiss
One of the very largest problems facing the automation professional is that the control systems in plants and the SCADA systems that tie together decentralized facilities such as power, oil, and gas pipelines and water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure.
5.1 The security problem For example, in August 2008, Dr. Nate Kube and Bryan Singer of Wurldtech demonstrated at the ACS Cyber Security Conference that a properly designed safety instrumented system that had received TÜV certification could very easily be hacked. The unidentified system failed unsafely in less than 26 seconds after the attack commenced. Note that “properly designed” meant that the controller was designed to be robust and safe and to operate properly. Operating cyber-securely was not one of the design elements. For quite some time, Schweitzer Engineering Laboratories has had a utility on its website, www.selinc.com, that allowed SEL Internet-enabled relays to be programmed via a Telnet client by any authorized user. Recently, several security researchers found and acted on exploits against Telnet; SEL has now taken the utility down to protect the users. It isn’t the power industry alone that faces these issues, although the critical infrastructure in the power industry is certainly one of the largest targets. These cyber incidents have happened in many process industry verticals, whether they’ve been admitted to or not. History shows that it is much more likely to be an internal accident or error that produces the problem.
1. Some of this material originally appeared in somewhat different form in the November 2008 issue of Control magazine in an article coauthored by Walt Boyes and Joe Weiss and from the text of a speech given by Walt Boyes at the 2008 TÜV Safety Symposium
In 1999, an operator for the Olympic Pipeline Company in Bellingham, Washington, was installing a patch on his pipeline SCADA system. Unknown to him, the scan rate of the SCADA system slowed to the point where a leak alarm failed to reach the SCADA HMI until after the ignition of the leak and the deaths of three people as well as numerous injuries. This is a classic cyber accident. On January 26, 2000, the Hatch Nuclear Power Station experienced a cyber event. A Wonderware HMI workstation running on the plant local area network (LAN) was patched, experienced instability because of the patch, and rebooted. It was connected via a firewall directly into the OSI PI database that Hatch used as the plant historian. So was an Allen-Bradley PLC, which was caused to reboot. When it rebooted, it reinitialized all the valve positioners, and with all the valves closed the main feedwater pumps shut down, exactly as they were supposed to do, scramming the reactor. At Brown’s Ferry Nuclear Station in August 2006, a broadcast storm apparently caused by the plant’s IT department “pinging” the network in a standard network quality control procedure caused a similar PLC to fail, shutting off the feedwater pumps and … you guessed it, scramming the reactor. It is troubling that Hatch and Brown’s Ferry had similar incidents six years apart. Lest one conclude that this is all about the power industry and the oil and gas industry, there is the case of Maroochy Shire in Queensland, Australia. From the official MITRE report of the incident, coauthored by Joe Weiss, here is what happened: Vitek Boden worked for Hunter Watertech, an Australian firm that installed a SCADA system for the Maroochy Shire Council in Queensland, Australia. Boden applied for a job with the Maroochy Shire Council. The Council decided not to hire him. Consequently, Boden decided to get even with both the Council and his former employer. He packed his car with stolen radio equipment attached to a (possibly stolen) computer. He drove around the area on at least 46 occasions from February 28 to April 23, 2000, issuing radio commands to the sewage equipment he (probably) helped install. Boden caused 800,000 liters of raw sewage to
spill out into local parks, rivers, and even the grounds of a Hyatt Regency hotel. Boden coincidentally got caught when a policeman pulled him over for a traffic violation after one of his attacks. A judge sentenced him to two years in jail and ordered him to reimburse the Council for cleanup. There is evidence of more than 100 cyber incidents, whether intentional, malicious, or accidental, in the real-time ACS database maintained by Joe Weiss. These include the Northeast power outage and the Florida power outage in 2008. It is worth noting that neither event has been described as a cyber event by the owners of the power companies and transmission companies involved.
5.2 An analysis of the security needs of industrial automation Industrial automation systems (or industrial control systems, abbreviated ICS) are an integral part of the industrial infrastructure supporting the nation’s livelihood and economy. They aren’t going away, and starting over from scratch isn’t an option. ICSs are “systems of systems” and need to be operated in a safe, efficient, and secure manner. The sometimes competing goals of reliability and security are not just a North American issue; they are truly a global issue. A number of North American control system suppliers have development activities in countries with dubious credentials; for example, a major North American control system supplier has a major code-writing office in China, and a European RTU manufacturer uses code written in Iran. Though sharing basic constructs with enterprise IT business systems, ICSs are technically, administratively, and functionally different systems. Vulnerability disclosure philosophies are different and can have devastating consequences to critical infrastructure. A major concern is the dearth of an educated workforce; there are very few control system cyber security experts (probably fewer than 100) and currently no university curricula or ICS cyber security personnel certifications. Efforts to secure these critical systems are too diffuse and do not specifically target the unique ICS aspects. The lack of ICS security expertise extends into the government arena, which has focused on repackaging IT solutions. The successful convergence of IT and ICS systems and organizations is expected to enable the promised secure productivity benefits with technologies such as the smart grid. However, the convergence of mainstream IT and ICS systems requires both mainstream and control system expertise, acknowledging the operating differences and accepting the similarities. One can view current ICS cyber security as being where mainstream IT security was 15 years ago; it is in the formative stage and needs support to leapfrog the previous IT learning curve. Regulatory incentives and industry self-interest are necessary to create an atmosphere for adequately securing critical infrastructures. However, regulation will also be required.

5.3 Some recommendations for industrial automation security The following recommendations, taken from a report2 to the bipartisan commission producing position papers for the Obama administration, can provide steps to improve the security and reliability of these very critical systems, and most of them are adoptable by any process industry business unit:
● Develop a clear understanding of ICS cyber security.
● Develop a clear understanding of the associated impacts on system reliability and safety on the part of industry, government, and private citizens.
● Define cyber threats in the broadest possible terms, including intentional, unintentional, natural, and other electronic threats, such as electromagnetic pulse (EMP) and electronic warfare against wireless devices.
● Develop security technologies and best practices for the field devices based on actual and expected ICS cyber incidents.
● Develop academic curricula in ICS cyber security.
● Leverage appropriate IT technologies and best practices for securing workstations using commercial off-the-shelf (COTS) operating systems.
● Establish standard certification metrics for ICS processes, systems, personnel, and cyber security.
● Promote/mandate adoption of the NIST Risk Management Framework for all critical infrastructures, or at least the industrial infrastructure subset.
● Establish a global, nongovernmental Cyber Incident Response Team (CIRT) for control systems, staffed with control system expertise for vulnerability disclosure and information sharing.
● Establish a means for vetting ICS experts rather than using traditional security clearances.
● Provide regulation and incentives for cyber security of critical infrastructure industries.
● Establish, promote, and support an open demonstration facility dedicated to best practices for ICS systems.
● Include subject matter experts with control system experience at high-level cyber security planning sessions.
● Change the culture of manufacturing in critical industries so that security is considered as important as performance and safety.
● Develop guidelines similar to that of the Sarbanes-Oxley Act for adequately securing ICS environments.
Like process safety, process security is itself a process and must become part of a culture of inherent safety and security.
2. This report was authored by Joe Weiss, with assistance from several other cyber security experts. The full text of the report is available at www.controlglobal.com
Part II
Mechanical Measurements
Chapter 6
Measurement of Flow G. Fowles and W. H. Boyes
6.1 Introduction

Flow measurement is a technique used in any process requiring the transport of a material from one point to another (for example, bulk supply of oil from a road tanker to a garage holding tank). It can be used for quantifying a charge for material supplied or maintaining and controlling a specific rate of flow. In many processes, plant efficiency depends on being able to measure and control flow accurately. Properly designed flow measurement systems are compatible with the process or material they are measuring. They must also be capable of producing the accuracy and repeatability that are most appropriate for the application. It is often said that the “ideal flowmeter should be nonintrusive, inexpensive, have absolute accuracy, infinite repeatability, and run forever without maintenance.” Unfortunately, such a device does not yet exist, although some manufacturers might claim that it does. Over recent years, however, many improvements have been made to established systems, and new products utilizing novel techniques are continually being introduced onto the market. The “ideal” flowmeter might not in fact be so far away, and now more than ever, potential users must be fully aware of the systems at their disposal.

6.2 Basic principles of flow measurement

We need to spend a short time with the basics of flow measurement theory before looking at the operation of the various types of available measurement systems. Flow can be measured as either a volumetric quantity or an instantaneous velocity (this is normally translated into a flow rate). You can see the interdependence of these measurements in Figure 6.1.

flow rate = velocity × area = (m/s) · m² = m³/s

quantity = flow rate × time = (m³/s) · s = m³

If, as shown here, flow rate is recorded for a period of time, the quantity is equal to the area under the curve (shaded area in the figure). This can be established automatically by many instruments, and the process is called integration. The integrator of an instrument may carry it out either electrically or mechanically.

Figure 6.1 Flow-time graph.

6.2.1 Streamlined and Turbulent Flow

Streamlined flow in a liquid is a phenomenon best described by example. Reynolds did a considerable amount of work on this subject, and Figure 6.2 illustrates the principle of streamlined flow (also called laminar flow).

Figure 6.2 Reynolds’s experiment.
A thin filament of colored liquid is introduced into a quantity of water flowing through a smooth glass tube. The paths of all fluid particles will be parallel to the tube walls, and therefore the colored liquid travels in a straight line, almost as if it were a tube within a tube. However, this state is velocity- and viscosity-dependent, and as velocity is increased, a point is reached (critical velocity) when the colored liquid will appear to disperse and mix with the carrier liquid. At this point the motion of the particles of fluid is not all parallel to the tube walls but also has a transverse velocity. This form of flow pattern is called turbulent flow. Summarizing, therefore, for velocities below the critical velocity, flow is said to be streamlined or laminar, and for velocities above the critical value, flow is said to be turbulent, a situation that is most common in practice. Reynolds formulated his data in a dimensionless form:

Re = (D · v · ρ) / μ    (6.1)

where Re is the Reynolds number, D is the diameter of the throat of the installation, v is velocity, ρ is the density of the fluid, and μ is the absolute viscosity. Flow of fluid in pipes is expected to be laminar if the Reynolds number is less than 2000 and turbulent if it is greater than 4000. Between these values is the critical zone. If systems have the same Reynolds number and are geometrically similar, they are said to have dynamic similarity.
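Equation (6.1) translates directly into a flow-regime check using the thresholds just given. A minimal sketch; the fluid properties and pipe conditions are example figures:

```python
# Flow-regime check from Equation (6.1): Re = D * v * rho / mu.
# Fluid properties and pipe conditions are example figures.

def reynolds(diameter_m, velocity_m_s, density_kg_m3, viscosity_pa_s):
    return diameter_m * velocity_m_s * density_kg_m3 / viscosity_pa_s

def regime(re):
    if re < 2000:
        return "laminar"
    if re > 4000:
        return "turbulent"
    return "critical zone"

# Water at about 20 C in a 100 mm line at 2 m/s
re = reynolds(0.100, 2.0, 998.0, 1.0e-3)
print(f"Re = {re:,.0f} -> {regime(re)}")   # Re ~ 200,000 -> turbulent
```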
6.2.1.1 Flow Profile The velocity across the diameter of a pipe varies due to many influence quantities. The distribution is termed the velocity profile of the system. For laminar flow the profile is parabolic in nature. The velocity at the center of the pipe is approximately twice the mean velocity. For turbulent flow, after a sufficient straight pipe run, the flow profile becomes fully developed. The concept of “fully developed flow” is critical to good flow measurement system design. In a fully developed flow, the velocity at the center of the pipe is only about 1.2 times the mean velocity. This is the preferred flow measurement situation. It permits the most accurate, most repeatable, and most linear measurement of flow.
6.2.1.2 Energy of a Fluid in Motion Let’s look at the forms in which energy is represented in a fluid in motion. This will help us understand the use of the Reynolds number in universal flow formulas. The basic types of energy associated with a moving fluid are:
● Potential energy or potential head
● Kinetic energy
● Pressure energy
● Heat energy
6.2.1.3 Potential Energy The fluid has potential energy by virtue of its position or height above some fixed level. For example, 1 m³ of liquid of density ρ₁ kg/m³ will have a mass of ρ₁ kg and would require a force of 9.81ρ₁ N to support it at a point where the gravitational constant g is 9.81 m/s². Therefore, if it is at a height of z meters above a reference plane, it would have 9.81ρ₁z joules of energy by virtue of its height.
6.2.1.4 Kinetic Energy A fluid has kinetic energy by virtue of its motion. Therefore, 1 m³ of fluid of density ρ₁ kg/m³ with a velocity V₁ m/s would have a kinetic energy of ½ρ₁V₁² joules.
6.2.1.5 Pressure Energy A fluid has pressure energy by virtue of its pressure. For example, a fluid having a volume v₁ m³ and a pressure of p₁ N/m² would have a pressure energy of p₁v₁ joules.
6.2.1.6 Internal Energy The fluid will also have energy by virtue of its temperature (i.e., heat energy). If there is resistance to flow in the form of friction, other forms of internal energy will be converted into heat energy.
6.2.1.7 Total Energy The total energy E of a fluid is given by the equation:

total energy (E) = potential energy + kinetic energy + pressure energy + internal energy

E = P.E. + K.E. + PR.E. + I.E.    (6.2)
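As a worked illustration of the mechanical terms in Equation (6.2), the sketch below evaluates them for 1 m³ of water; the height, velocity, and pressure figures are assumed examples:

```python
# Worked example of the mechanical terms in Equation (6.2) for 1 m3 of
# water. The height, velocity, and pressure figures are assumed examples.
g = 9.81           # m/s2
rho = 1000.0       # kg/m3 (water)
volume = 1.0       # m3

z = 5.0            # m above the reference plane
v = 2.0            # m/s
p = 2.0e5          # N/m2 (2 bar)

potential = rho * volume * g * z          # 49,050 J
kinetic = 0.5 * rho * volume * v ** 2     # 2,000 J
pressure = p * volume                     # 200,000 J

print(f"P.E. = {potential:,.0f} J, K.E. = {kinetic:,.0f} J, "
      f"PR.E. = {pressure:,.0f} J")
```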
6.2.2 Viscosity Viscosity is the frictional resistance that exists in a flowing fluid. It is discussed in more detail in the next chapter. Briefly, the particles of fluid actually in contact with the walls of the channel are at rest while those at the center of the channel move at maximum velocity. Thus the layers of fluid near the center, which are moving at maximum velocity, will be slowed by the slower-moving layers, and the slower-moving layers will be sped up by the faster-moving layers. Dynamic viscosity of a fluid is expressed in units of Ns/m². Thus a fluid has a dynamic viscosity of 1 Ns/m² if a force of 1 N is required to move a plane of 1 m² in area at a speed of 1 m/s parallel to a fixed plane, the moving plane
Figure 6.3 Determination of dynamic viscosity.
Figure 6.4 Hydraulic conditions for pipe flow.
being 1 m away from the fixed plane and the space between the planes being completely filled with the fluid. This is illustrated diagrammatically in Figure 6.3. Thus for parallel flow lines:

dynamic viscosity μ = force (F) / [area (A) × velocity (v)]    (6.3)

or, if a velocity gradient exists,

μ = F / (A · dv/dx)    (6.4)

Kinematic viscosity is the ratio of the dynamic viscosity of a fluid to its density at the same temperature:

kinematic viscosity at T °C = (dynamic viscosity at T °C) / (density at T °C)    (6.5)

For liquids, the viscosity decreases with increase of temperature at constant pressure, whereas for gases, viscosity will increase with increasing temperature at a constant pressure. It is viscosity that is responsible for the damping out or suppression of flow disturbances caused by bends and valves in a pipe; the energy that existed in the swirling liquid is changed into heat energy. This is the reason manufacturers of flow instruments require stated distances ahead and behind the installation point of a flowmeter. What they are trying to achieve is to allow fluid viscosity to have time to work to suppress flow disturbances and permit accurate and repeatable readings.
6.2.3 Bernoulli’s Theorem All fluid flow formulas in a closed pipe are based on Bernoulli’s theorem. This states that in a steady flow, without friction, the sum of potential energy, kinetic energy, and pressure energy is a constant along any streamline. If we have a closed pipe or channel (Figure 6.4) in which there are two sections due to the placement of a restriction, orifice, or hydraulic gradient, there is a pressure or head loss in the transition from the first section to the second. If 1 kg of fluid enters the pipe at the first section, 1 kg of fluid must leave at the second.

The energy of the fluid at Section 1
= potential energy + kinetic energy + pressure energy + internal energy
= 1 · Z₁ · g + ½ · 1 · V₁² + p₁ · v₁ + I₁    (6.6)

The energy of the fluid at Section 2
= 1 · Z₂ · g + ½ · 1 · V₂² + p₂ · v₂ + I₂    (6.7)

and since energy cannot leave the channel nor be created or destroyed,

Total energy at Section 1 = Total energy at Section 2

so that

Z₁ · g + V₁²/2 + p₁ · v₁ + I₁ = Z₂ · g + V₂²/2 + p₂ · v₂ + I₂    (6.8)

Now, if the temperature of the fluid remains the same, the internal energy remains the same and

I₁ = I₂    (6.9)

and Equation (6.8) reduces to

Z₁ · g + V₁²/2 + p₁ · v₁ = Z₂ · g + V₂²/2 + p₂ · v₂    (6.10)

This equation applies to liquids and ideal gases. Now consider liquids only. These can be regarded as being incompressible and their density and specific volume will remain constant along the channel and

v₁ = v₂ = 1/ρ₁ = 1/ρ₂ = 1/ρ    (6.11)

and Equation (6.10) may be rewritten as

Z₁ · g + V₁²/2 + p₁/ρ = Z₂ · g + V₂²/2 + p₂/ρ    (6.12)
Dividing by g, this becomes

Z₁ + V₁²/2g + p₁/(ρ · g) = Z₂ + V₂²/2g + p₂/(ρ · g)    (6.13)
Referring back to Figure 6.4, it is obvious that there is a height differential between the upstream and downstream vertical connections representing Sections 1 and 2 of the fluid. Considering first the conditions at the upstream tapping, the fluid will rise in the tube to a height p₁/(ρ · g) above the tapping, or p₁/(ρ · g) + Z₁ above the horizontal level taken as the reference plane. Similarly, the fluid will rise to a height p₂/(ρ · g), or p₂/(ρ · g) + Z₂, in the vertical tube at the downstream tapping. The differential head will be given by

h = (p₁/(ρ · g) + Z₁) − (p₂/(ρ · g) + Z₂)    (6.14)

but from Equation (6.13) we have

(p₁/(ρ · g) + Z₁) + V₁²/2g = (p₂/(ρ · g) + Z₂) + V₂²/2g

or

(p₁/(ρ · g) + Z₁) − (p₂/(ρ · g) + Z₂) = V₂²/2g − V₁²/2g

Therefore

h = V₂²/2g − V₁²/2g    (6.15)

and

V₂² − V₁² = 2gh    (6.16)

Now the volume of liquid flowing along the channel per second will be given by Q m³, where

Q = A₁ · V₁ = A₂ · V₂    (6.17)

or

V₁ = (A₂ · V₂) / A₁    (6.18)

Now, substituting this value in Equation (6.16):

V₂² − V₂² · (A₂²/A₁²) = 2gh

or

V₂² · (1 − A₂²/A₁²) = 2gh

and dividing by (1 − A₂²/A₁²):

V₂² = 2gh / (1 − A₂²/A₁²)

and taking the square root of both sides

V₂ = √[2gh / (1 − A₂²/A₁²)]    (6.19)

Now A₂/A₁ is the ratio (area of Section 2)/(area of Section 1) and is often represented by the symbol m. Therefore

(1 − A₂²/A₁²) = 1 − m²

and 1/√(1 − A₂²/A₁²) may be written as 1/√(1 − m²). This is termed the velocity of approach factor, often represented by E. Equation (6.19) may be written

V₂ = E · √(2gh)    (6.20)

and

Q = A₂ · V₂ = A₂ · E · √(2gh) m³/s    (6.21)

Mass of liquid flowing per second = W = ρ · Q = A₂ · ρ · E · √(2gh) kg/s. Also, since Δp = h · ρ,

Q = A₂ · E · √(2g · Δp/ρ) m³/s    (6.22)

and

W = A₂ · E · √(2g · ρ · Δp) kg/s    (6.23)

6.2.4 Practical Realization of Equations The foregoing equations apply only to streamlined (or laminar) flow. To determine actual flow, it is necessary to take into account various other parameters. In practice, flow is rarely streamlined but is turbulent. However, the velocities of particles across the stream will be entirely random and will not affect the rate of flow very much. In developing the equations, effects of viscosity have also been neglected. In an actual fluid the loss of head between sections will be greater than that which would take place in a fluid free from viscosity. To correct for these and other effects, another factor is introduced into the equations for flow. This factor is the discharge coefficient C and is given by this equation:

Discharge coefficient:
C = (actual mass rate of flow) / (theoretical mass rate of flow)
or, if the conditions of temperature, density, and the like are the same at both sections, it may be written in terms of volume:

    C = actual volume flowing / theoretical volume flowing

It is possible to determine C experimentally by actual tests. It is a function of pipe size, type of pressure tappings, and the Reynolds number. Equation (6.22) is modified and becomes
    Q = C·A2·E·√(2g·Δp/ρ)    (6.24)
This is true for flow systems where the Reynolds number is above a certain value (20,000 or above for orifice plates). For lower Reynolds numbers and for very small or rough pipes, the basic coefficient is multiplied by a correction factor Z whose value depends on the area ratio, the Reynolds number, and the size and roughness of the pipe. Values for both C and Z are listed with other relevant data in BS 1042 Part 1 1964.

We can use differential pressure to measure flow. Here's a practical example. Let the following be known:

    Internal diameter of upstream pipe: D mm
    Orifice or throat diameter: d mm
    Pressure differential produced: h mm water gauge
    Density of fluid at upstream tapping: ρ kg/m³
    Absolute pressure at upstream tapping: p bar

Then, introducing the discharge coefficient C, the correction factor Z, and the numerical constant, the equation for quantity rate of flow Q m³/h becomes

    Q = 0.01252·C·Z·E·d²·√(h/ρ)    (6.25)

and the weight or mass rate of the flow W kg/h is given by

    W = 0.01252·C·Z·E·d²·√(h·ρ)    (6.26)

6.2.5 Modification of Flow Equations to Apply to Gases

Gases are compressible; liquids, mostly, are not. If the gas under consideration can be regarded as an ideal gas (most gases are ideal when well away from their critical temperatures and pressures), the gas obeys several very important gas laws. These laws will now be stated.

6.2.5.1 Dry Gases

(a) Boyle's law. This law states that the volume of any given mass of gas will be inversely proportional to its absolute pressure, provided temperature remains constant. Thus, if a certain mass of gas occupies a volume v0 at an absolute pressure p0 and a volume v1 at an absolute pressure p, then

    p0·v0 = p·v1  or  v1 = v0·(p0/p)    (6.27)

(b) Charles's law. This law states that if a given mass of gas occupies a volume v1 at a temperature T0 Kelvin, its volume v at T Kelvin is given by

    v1/T0 = v/T  or  v = v1·(T/T0)    (6.28)

(c) The ideal gas law. In the general case, p, v, and T change. Suppose a mass of gas at pressure p0 and temperature T0 Kelvin has a volume v0, and the mass of gas at pressure p and temperature T has a volume v, and that the change from the first set of conditions to the second set of conditions takes place in two stages. (1) Change the pressure from p0 to p at a constant temperature. Let the new volume be v1. From Boyle's law:

    p0·v0 = p·v1  or  v1 = v0·(p0/p)

(2) Change the temperature from T0 to T at constant pressure. From Charles's law:

    v1/T0 = v/T

Hence, equating the two values of v1,

    v0·(p0/p) = v·(T0/T)

or

    p·v/T = p0·v0/T0 = constant    (6.29)

If the quantity of gas considered is 1 mole, i.e., the quantity of gas that contains as many molecules as there are atoms in 0.012 kg of carbon-12, this constant is represented by R0, the gas constant, and Equation (6.29) becomes:

    p·v = R0·T

where R0 = 8.314 J/mol K and p is in N/m² and v is in m³.

(d) Adiabatic expansion. When a gas is flowing through a primary element, the change in pressure takes place too rapidly for the gas to absorb heat from its surroundings. When it expands owing to the reduction in pressure, it does work, so that if it does not receive energy it must use its own heat energy, and its temperature will fall. Thus the expansion that takes place owing to the fall in pressure does not obey Boyle's law, which applies only to an expansion at constant temperature. Instead it obeys the law for adiabatic expansion of a gas:

    p1·v1^γ = p2·v2^γ  or  p·v^γ = constant    (6.30)

where γ is the ratio of the specific heats of the gas:

    γ = specific heat of the gas at constant pressure / specific heat of the gas at constant volume
and has a value of 1.40 for dry air and other diatomic gases, 1.66 for monatomic gases such as helium, and about 1.33 for triatomic gases such as carbon dioxide.

If a metered fluid is not incompressible, another factor is introduced into the flow equations. This factor is necessary to correct for the change in volume due to the expansion of the fluid while passing through the restriction. This factor is called the expansibility factor f and has a value of unity (1) for incompressible fluids. For ideal compressible fluids expanding without any change of state, the value can be calculated from the equation

    f = √{ [γ·r^(2/γ)/(γ − 1)] · [(1 − m²)/(1 − m²·r^(2/γ))] · [(1 − r^((γ−1)/γ))/(1 − r)] }    (6.31)

where r is the ratio of the absolute pressures at the downstream and upstream tappings (i.e., r = p2/p1) and γ is the ratio of the specific heat of the fluid at constant pressure to that at constant volume. This is detailed in BS 1042 Part 1 1964. To apply the working fluid-flow equations to both liquids and gases, the factor f is introduced and the equations become:

    Q = 0.01252·C·Z·f·E·d²·√(h/ρ) m³/h    (6.32)

    W = 0.01252·C·Z·f·E·d²·√(h·ρ) kg/h    (6.33)

with f = 1 for liquids.

6.2.5.2 Critical Flow of Compressible Fluids

For flow through a convergent tube such as a nozzle, the value of r at the throat cannot be less than a critical value rc. When the pressure at the throat is equal to this critical fraction of the upstream pressure, the rate of flow is a maximum and cannot be further increased except by raising the upstream pressure. The critical pressure ratio is given by the equation

    2·rc^((1−γ)/γ) + (γ − 1)·m²·rc^(2/γ) = γ + 1    (6.34)

The value of rc is about 0.5, but it increases slightly with increase of m and with decrease of specific heat ratio. Values of rc are tabulated in BS 1042 Part 1 1964.

The basic equation for critical flow is obtained by substituting (1 − rc)·p for Δp in Equation (6.23) and rc for r in Equation (6.31), whereupon the equation becomes

    W = 1.252·U·d²·√(p·ρ) kg/h    (6.35)

where

    U = C·√[(γ/2)·rc^((γ+1)/γ)]    (6.36)

The volume rate of flow (in m³/h) is obtained by dividing the weight rate of flow by the density (in kg/m³) of the fluid at the reference conditions.

6.2.5.3 Departure from Gas Laws

At room temperature and at absolute pressures less than 10 bar, most common gases except carbon dioxide behave sufficiently like an ideal gas that the error in flow calculations brought about by departure from the ideal gas laws is less than 1 percent. To correct for departure from the ideal gas laws, a deviation coefficient K (given in BS 1042 Part 1 1964) is used in the calculation of densities of gases where the departure is significant. For ideal gases, K = 1.

6.2.5.4 Wet Gases

The preceding modification applies to dry gases. In practice, many gases are wet, since they are a mixture of gas and water vapor. Partial pressure due to saturated water vapor does not obey Boyle's law. Gas humidity is discussed in Chapter 11. If the temperature and absolute pressure at the upstream tapping and the state of humidity of the gas are known, a correction factor can be worked out and applied to obtain the actual mass of gas flowing. Gas density is given by the equation
    ρ = 6.196·[ d·(p − pv)/(k·T) + 0.622·pv/T ] kg/m³    (6.37)
where d is the specific gravity of the dry gas relative to air, T is the temperature in Kelvin, p is the pressure in mbar at the upstream tapping, pv is the partial pressure in mbar of the water vapor, k is the gas law deviation at temperature T, and ρ is the gas density. For dry gas pv is zero and the equation becomes
    ρ = 6.196·d·p/(k·T) kg/m³    (6.38)
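Equations (6.37) and (6.38) are straightforward to automate. The sketch below is a direct transcription of them; the input values are invented, k is taken as 1 (ideal-gas behavior), and the constant 6.196 is used exactly as it appears in Equation (6.37). Check that constant, and the unit conventions, against BS 1042 before relying on the result.

# Wet- and dry-gas density per Eqs. (6.37) and (6.38), transcribed as
# printed.  Values below are invented for illustration only.

def wet_gas_density(d, T, p, pv, k=1.0):
    """Eq. (6.37): rho = 6.196*(d*(p - pv)/(k*T) + 0.622*pv/T) kg/m^3.
    d: specific gravity of the dry gas relative to air
    T: temperature, K;  p, pv: total and water-vapour pressure, mbar
    k: gas-law deviation coefficient (1 for an ideal gas)."""
    return 6.196 * (d * (p - pv) / (k * T) + 0.622 * pv / T)

def dry_gas_density(d, T, p, k=1.0):
    """Eq. (6.38), the pv = 0 special case of Eq. (6.37)."""
    return 6.196 * d * p / (k * T)

# Example call: d = 0.6 gas at 288.15 K and 1013 mbar, 10 mbar vapour.
print(wet_gas_density(0.6, 288.15, 1013.0, 10.0))
print(dry_gas_density(0.6, 288.15, 1013.0))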
6.3 Fluid Flow in Closed Pipes

6.3.1 Differential-Pressure Devices

Differential-pressure devices using a constriction in the pipeline have long been the most common technique for measuring fluid flow. Although other devices have made substantial inroads into the basic measurement of fluids in recent years, differential pressure remains a widely used technique, and new DP devices continue to be introduced. A recent estimate puts the use of differential-pressure devices to measure flow in the petrochemical industry at over 70 percent of all flow devices. As already shown in the derivation of Bernoulli's equation in the previous section, a constriction causes an increase in fluid velocity in the area of the constriction, which in turn results in a corresponding pressure drop across it. This differential pressure (DP) is a function of the flow velocity and the density of the fluid, with a square-root relationship between flow and DP; see Equation (6.24).
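As a worked illustration of that square-root relationship, the sketch below evaluates Equation (6.24) in head form for an orifice in a 100 mm line. The bore, head, and discharge coefficient are invented for the example; a real C comes from BS 1042 or from calibration.

import math

# Eq. (6.24) written in head form: Q = C * A2 * E * sqrt(2*g*h),
# where E = 1/sqrt(1 - m^2) is the velocity-of-approach factor and
# m = A2/A1 is the area ratio.  All numbers are illustrative only.

g = 9.81
D = 0.100                    # pipe bore, m (invented)
d = 0.060                    # orifice bore, m (invented)
C = 0.61                     # representative order of magnitude for an orifice
h = 0.50                     # differential head, m of flowing liquid (invented)

A1 = math.pi * D**2 / 4
A2 = math.pi * d**2 / 4
m = A2 / A1                  # area ratio
E = 1 / math.sqrt(1 - m**2)  # velocity of approach factor

Q = C * A2 * E * math.sqrt(2 * g * h)   # m^3/s
print(f"m = {m:.3f}, E = {E:.4f}, Q = {Q*3600:.2f} m^3/h")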
A flowmeter in this category would normally comprise a primary element to develop a differential pressure and a secondary element to measure it. The secondary element is effectively a pressure transducer, and operational techniques are discussed in Chapter 14, so no further coverage will be given here. However, there are various types of primary elements, and these deserve further consideration. The main types of interest are orifice plate, venturi, nozzle, Dall, rotameter, gate meter, Gilflo element, target meter, and V-Cone.
6.3.1.1 Orifice Plate

An orifice plate in its simplest form is a thin steel plate with a circular orifice of known dimensions located centrally in the plate. This is termed a concentric orifice plate; see Figure 6.5(a). The plate would normally be clamped between adjacent flange fittings in a pipeline, a vent hole and drain hole being provided to prevent solids building up and gas pockets developing in the system; see Figure 6.5(b). The differential pressure is measured by suitably located pressure tappings on the pipeline on either side of the orifice plate. These may be located in various positions, depending on the application (e.g., corner, D and D/2, or flange tappings), and reference should be made to BS 1042 Part 1 1964 for correct application. Flow rate is determined from Equation (6.24).

This type of orifice plate is inadequate to cope with difficult conditions experienced in metering dirty or viscous fluids and gives a poor disposal rate of condensate in flowing steam and vapors. Several design modifications can overcome these problems, in the form of segmental or eccentric orifice plates, as shown in Figure 6.5(a). The segmental orifice provides a method for measuring the flow of liquids with solids in suspension. It takes the form of a plate that covers the upper cross-section of the pipe, leaving the lower portion open for the passage of solids to prevent their buildup. The eccentric orifice is used on installations where condensed liquids are present in gas-flow measurement or where undissolved gases are present in the measurement of liquid flow. It is also useful where pipeline drainage is required.

To sum up the orifice plate:

Advantages:
● Inherently simple in operation
● No moving parts
● Long-term reliability
● Inexpensive

Disadvantages:
● Square-root relationship (see the sketch below)
● Poor turndown ratio
● Critical installation requirements
● High irrecoverable pressure loss
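The first two disadvantages are linked. Because Q ∝ √Δp, the differential pressure collapses with the square of the flow, as the short sketch below shows: a 10:1 flow turndown requires the transmitter to resolve a 100:1 span of differential pressure. The full-scale figure is invented for the example.

# The square-root law, Q proportional to sqrt(dp), means dp is
# proportional to Q^2: a modest flow turndown demands a much larger
# usable DP range.  Full-scale DP below is an illustrative value.
dp_full_scale = 25_000.0   # Pa at 100% flow

for flow_pct in (100, 50, 25, 10):
    dp = dp_full_scale * (flow_pct / 100) ** 2
    print(f"{flow_pct:3d}% flow -> {dp:8.0f} Pa ({dp/dp_full_scale:6.2%} of span)")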
6.3.1.2 Venturi Tube

The classical venturi tube is shown in Figure 6.6. It comprises a cylindrical inlet section followed by a convergent entrance into a cylindrical throat and a divergent outlet section. A complete specification may be found by reference to BS 1042 Part 1 1964; relevant details are repeated here:

a. Diameter of throat. The diameter d of the throat shall be not less than 0.224D and not greater than 0.742D, where D is the entrance diameter.
b. Length of throat. The throat shall have a length of 1.0d.
c. Cylindrical entrance section. This section shall have an internal diameter D and a length of not less than 1.0d.
d. Conical section. This shall have a taper of 10½°. Its length is therefore 2.70(D − d) within ±0.24(D − d).
e. Divergent outlet section. The outlet section shall have an inclined angle of not less than 5° and not greater than 15°. Its length shall be such that the exit diameter is not less than 1.5d.
Figure 6.5 (a) Orifice plate types. (b) Concentric orifice plate with D and D/2 tappings mounted between flange plates. Courtesy of British Standards Institution.
Figure 6.6 Venturi tube. Courtesy of British Standards Institution.
In operation the fluid passes through the convergent entrance, increasing velocity as it does so, resulting in a differential pressure between the inlet and throat. This differential pressure is monitored in the same way as for the orifice plate, the relationship between flow rate and differential being as defined in Equation (6.24). Location of Pressure Tappings The upstream pressure tapping is located in the cylindrical entrance section of the tube 0.5D upstream of the convergent section and the downstream pressure tapping is located in the throat at a distance 0.5D downstream of the convergent section. Pressure tappings should be sized so as to avoid accidental blockage. Generally the tappings are not in the form of a single hole but several equally spaced holes connected together in the form of an annular ring, sometimes called a piezometer ring. This has the advantage of giving a true mean value of pressure at the measuring section.
Figure 6.7 Venturi nozzle. Courtesy of British Standards Institution.
Application The venturi is used for applications in which there is a high solids content or in which high pressure recovery is desirable. The venturi is inherently a low head-loss device and can result in an appreciable saving of energy.

To sum up the venturi tube:

Advantages:
● Simple in operation
● Low head loss
● Tolerance of high solids content
● Long-term reliability
● No moving parts

Disadvantages:
● Expensive
● Square-root pressure-velocity relationship
● Poor turndown ratio
● Critical installation requirements
6.3.1.3 Nozzles The other most common use of the venturi effect is the venturi nozzle. Venturi Nozzle This is in effect a shortened venturi tube. The entrance cone is much shorter and has a curved profile. The inlet pressure tap is located at the mouth of the inlet cone and the low-pressure tap in the plane of minimum section, as shown in Figure 6.7. This reduction in size is taken a stage further in the flow nozzle. Flow Nozzle Overall length is again greatly reduced. The entrance cone is bell-shaped and there is no exit cone. This is illustrated in Figure 6.8. The flow nozzle is not suitable for viscous liquids, but for other applications it is considerably cheaper than the standard venturi tube. Also, due to the smooth entrance cone, there is less resistance to fluid flow
Figure 6.8 Flow nozzle. Courtesy of British Standards Institution.
through the nozzle, and a lower value of m may be used for a given rate of flow. Its main area of use therefore is in high-velocity mains, where it will produce a substantially smaller pressure drop than an orifice plate of similar m number.
6.3.1.4 Dall Tube A Dall tube is another variation of the venturi tube and gives a higher differential pressure but a lower head loss than the conventional venturi tube. Figure 6.9 shows a cross-section of a typical Dall flow tube. It consists of a short, straight inlet section, a convergent entrance section, a narrow throat annulus, and a short divergent recovery cone. The whole device is about 2 pipe-diameters long. A shortened version of the Dall tube, the Dall orifice or insert, is also available; it is only 0.3 pipe-diameter long. All the essential Dall tube features are retained in a truncated format, as shown in Figure 6.10. Venturi tubes, venturi nozzles, Dall tubes, and other modifications of the venturi effect are rarely used outside the municipal wastewater and mining industries. There is even a version of a venturi tube combined with a venturi flume called a DataGator that is useful for any pipe, full or not. In this device, the inlet fills up simultaneously with the throat, permitting measurement in subcritical flow as though the device were a venturi flume and above critical flow as though the device were a venturi
Figure 6.9 Dall tube. Courtesy of ABB.
Figure 6.11 Net pressure loss as a percentage of pressure difference. Courtesy of British Standards Institution.
Figure 6.10 Dall insert. Courtesy of British Standards Institution.
tube. In the “transition zone” between sub- and supercritical flow, the design of the unit permits a reasonably accurate measurement. This design won an R&D100 Award in 1993 as one of the 100 most important engineering innovations of the year. Pressure Loss All the differential pressure devices discussed so far cause an irrecoverable pressure loss of varying degrees. In operation it is advantageous to keep this loss as low as possible, which will often be a major factor in the selection criteria of a primary element. The pressure loss curves for nozzles, orifices, and venturi tubes are given in Figure 6.11. Installation Requirements As already indicated, installation requirements for differential-pressure devices are quite critical. It is advisable to install primary elements as far downstream as possible from flow disturbances, such as bends, valves, and reducers. These requirements are tabulated in considerable detail in BS 1042 Part 1 1964 and are reproduced in part in Appendix 6.1. It is critical for the instrument engineer to be aware that these requirements are rules of thumb, and even slavish adherence to them may not produce measurement free from hydraulics-induced error. From a practical point of view, the best measurement is the one with the longest upstream straight run and the longest downstream straight run.
6.3.1.5 Variable-Orifice Meters

So far the devices discussed have relied on a constriction in the flowstream causing a differential pressure varying with flow rate. Another category of differential-pressure device relies on maintaining a nominally constant differential pressure by allowing the effective area to increase with flow. The principal devices to be considered are the rotameter, gate meter, and Gilflo.

Rotameter This device is shown schematically in Figure 6.12(a). In a tapered tube the upward stream of fluid supports the float where the force on its mass due to gravity is balanced against the flow force determined by the annular area between the float and the tube and the velocity of the stream. The float's position in the tube is measured by a graduated scale, and its position is taken as an indication of flow rate. Many refinements are possible, including the use of magnetic coupling between the float and external devices to translate vertical movement into horizontal and develop either electrical transmission or alarm actuation. Tube materials can be either metal or glass, depending on application. Figure 6.12(b) shows an exploded view of a typical rotameter.

Gate Meter In this type of meter the area of the orifice may be varied by lowering a gate either manually or by an automatically controlled electric motor. The gate is moved so as to maintain a constant pressure drop across the orifice. The pressure drop is measured by pressure tappings located upstream and downstream of the gate, as shown
Appendix 6.1 Minimum lengths of straight pipeline upstream of device*

Minimum number of pipe diameters for the cases listed below. Column (a): minimum length of straight pipe immediately upstream of the device, for each of the eight diameter/area ratio columns. Column (b): minimum length between the first upstream fitting and the next upstream fitting.

Diameter ratio d/D less than: 0.22, 0.32, 0.45, 0.55, 0.63, 0.70, 0.77, 0.84
Area ratio m less than: 0.05†, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7

Fittings producing symmetrical disturbances:

Case A. Reducer (reducing not more than 0.5D over a length of 3D); enlarger (enlarging not more than 2D over a length of 1.5D); any pressure difference device having an area ratio m not less than 0.3
(a) 16, 16, 18, 20, 23, 26, 29, 33; (b) 13

Case B. Gate valve fully open (for ¾ closed, see Case H)
(a) 12, 12, 12, 13, 16, 20, 27, 38; (b) 10

Case C. Globe valve fully open (for ¾ closed, see Case J)
(a) 18, 18, 20, 23, 27, 32, 40, 49; (b) 16

Case D. Reducer (any reduction, including from a large space)
(a) 25, 25, 25, 25, 25, 26, 29, 33; (b) 13

Fittings producing asymmetrical disturbances in one plane:

Case E. Single bend up to 90°, elbow, Y-junction, T-junction (flow in either but not both branches)
(a) 10, 10, 13, 16, 22, 29, 41, 56; (b) 15

Case F. Two or more bends in the same plane, single bend of more than 90°, swan neck
(a) 14, 15, 18, 22, 28, 36, 46, 57; (b) 18

Fittings producing asymmetrical disturbances and swirling motion:

Case G‡. Two or more bends, elbows, loops, or Y-junctions in different planes, T-junction with flow in both branches
(a) 34, 35, 38, 44, 52, 63, 76, 89; (b) 32

Case H‡. Gate valve up to ¾ closed§ (for fully open, see Case B)
(a) 40, 40, 40, 41, 46, 52, 60, 70; (b) 26

Case J‡. Globe valve up to ¾ closed§ (for fully open, see Case C)
(a) 12, 14, 19, 26, 36, 60, 80, 100; (b) 30

Other fittings:

Case K. All other fittings (provided there is no swirling motion)
(a) 100, 100, 100, 100, 100, 100, 100, 100; (b) 50

*See Subclauses 47b and 47c.
†For area ratio less than 0.015 or diameter ratios less than 0.125, see Subclause 47b.
‡If swirling motion is eliminated by a flow straightener (Appendix F) installed downstream of these fittings, they may be treated as Cases F, B, and C, respectively.
§The valve is regarded as three-quarters closed when the area of the opening is one quarter of that when fully open.
nb: Extracts from British Standards are reproduced by permission of the British Standards Institution, 2 Park Street, London, W1A 2BS, from which complete copies can be obtained.
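Where the table has to be consulted repeatedly, it can be convenient to encode it. The sketch below holds part (a) as a simple lookup structure (transcribed from the extract above, so it should be verified against the full standard before use) and selects the column for the next larger tabulated area ratio.

import bisect

# Part (a) of Appendix 6.1 as a lookup: minimum upstream straight
# lengths (in pipe diameters) for area ratios m below the tabulated
# break points.  Transcribed from the extract above; verify against
# BS 1042 before use.
M_BREAKS = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
UPSTREAM = {
    "A": [16, 16, 18, 20, 23, 26, 29, 33],
    "B": [12, 12, 12, 13, 16, 20, 27, 38],
    "C": [18, 18, 20, 23, 27, 32, 40, 49],
    "D": [25, 25, 25, 25, 25, 26, 29, 33],
    "E": [10, 10, 13, 16, 22, 29, 41, 56],
    "F": [14, 15, 18, 22, 28, 36, 46, 57],
    "G": [34, 35, 38, 44, 52, 63, 76, 89],
    "H": [40, 40, 40, 41, 46, 52, 60, 70],
    "J": [12, 14, 19, 26, 36, 60, 80, 100],
    "K": [100] * 8,
}

def min_upstream_diameters(case: str, m: float) -> int:
    """Minimum straight length (pipe diameters) upstream of the device."""
    i = bisect.bisect_right(M_BREAKS, m)   # first column whose break exceeds m
    if i == len(M_BREAKS):
        raise ValueError("area ratio m outside the tabulated range")
    return UPSTREAM[case][i]

print(min_upstream_diameters("E", 0.35))   # single 90-degree bend, m = 0.35 -> 22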
Figure 6.12 (a) Rotameter principle of operation. Courtesy of ABB Instrument Group. (b) Rotameter, exploded view. Courtesy of ABB Instrument Group.
in Figure 6.13(a). The position of the gate is indicated by a scale. As the rate of flow through the orifice increases, the area of the orifice is increased. If all other factors in Equation (6.21) except area A2 are kept constant, the flow through the orifice will depend on the product A2·E, or A2/√[1 − (A2/A1)²]. As A2 increases, (A2/A1)² increases and [1 − (A2/A1)²] decreases; therefore 1/√[1 − (A2/A1)²] increases. The relationship between A2 and flow is therefore not linear. If the vertical movement of the gate is to be directly proportional to the rate of flow, the width of the opening A2 must decrease toward the top, as shown in Figure 6.13(a).

The flow through the meter can be made to depend directly on the area of the orifice A2 if, instead of the normal static pressure being measured at the upstream tapping, the impact pressure is measured. To do this the upstream tap is made in the form of a tube with its open end facing directly into the flow, as shown in Figure 6.13(b). It is in effect a pitot tube (see the section on point-velocity measurement).
The differential pressure is given by Equation (6.15), where h is the amount by which the pressure at the upstream tap exceeds that at the downstream tap:

    h = V2²/2g − V1²/2g    (6.39)

Now, at the impact port the fluid is brought to rest, so that

    h1 = V1²/2g

where h1 is the amount by which the impact pressure exceeds the normal upstream static pressure. Thus the difference h2 between the impact pressure and the pressure measured at the downstream tap will be

    h2 = h + h1 = V2²/2g − V1²/2g + V1²/2g = V2²/2g    (6.40)
Therefore, the velocity V2 through the section A2 is given by V2 = √(2g·h2). The normal flow equations for the type of installation shown in Figure 6.13(b) will be the same as for other orifices, but the velocity of approach factor is 1 and flow is directly proportional to A2. The opening of the gate may therefore be made rectangular, and the vertical movement will be directly proportional to flow.

The hinged-gate meter is another version of this type of device. Here a weighted gate is placed in the flowstream, its deflection being proportional to flow. A mechanical linkage between the gate and a recorder head provides flow indication. It is primarily used for applications in water mains where the user is interested in step changes rather than absolute flow accuracy. The essential features of this device are shown in Figure 6.13(c).

The "Gilflo" Primary Sensor The Gilflo metering principle was developed in the mid-1960s to overcome the limitations of the square-law fixed orifice plate. Its construction takes two forms: A and B. The Gilflo A, Figure 6.14(a), sizes 10 to 40 mm, has an orifice mounted to a strong linear bellows fixed at one end and with a shaped cone positioned concentrically in it. Under flow conditions, the orifice moves axially along the cone, creating a variable annulus across which the differential pressure varies. Such is the relationship of change that the differential pressure is directly proportional to flow rate, enabling a rangeability of up to 100:1. The Gilflo B, Figure 6.14(b), sizes 40 to 300 mm standard, has a fixed orifice with a shaped cone moving axially against the resistance of a spring, again producing a linear differential pressure and a range of up to 100:1. The Gilflo A has a water equivalent range of 0–5 to 0–350 liters/minute; the Gilflo B range is 0–100 to 0–17,500 liters/minute. The main application for Gilflo-based systems is on saturated and superheated steam, with pressures up to 200 bar and temperatures up to 500°C.
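Returning to the impact-tapping arrangement of Figure 6.13(b): its attraction is the linearity just derived. With h2 held constant, V2 is fixed, and flow is directly proportional to the gate opening A2. A minimal sketch with invented numbers:

import math

# With the upstream tap sensing impact pressure, h2 = V2^2/(2g), so
# V2 = sqrt(2*g*h2) is set by the (constant) differential, and
# Q = A2 * V2 is directly proportional to the gate opening A2.
g = 9.81
h2 = 0.30                      # maintained differential, m of liquid (invented)
V2 = math.sqrt(2 * g * h2)     # velocity through the opening, m/s

for A2_cm2 in (5, 10, 20, 40):             # gate openings, cm^2 (invented)
    Q = (A2_cm2 * 1e-4) * V2               # m^3/s
    print(f"A2 = {A2_cm2:3d} cm^2 -> Q = {Q*1000:6.2f} l/s")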
6.3.1.6 Target Flowmeter
Figure 6.13 (a) Gate-type area meter. Courtesy of American Society of Mechanical Engineers. (b) Gate-type area meter corrected for velocity of approach. Courtesy of American Society of Mechanical Engineers. (c) Weight-controlled hinged-gate meter.
Although not strictly a differential-pressure device, this is generally categorized under that general heading. The primary and secondary elements form an integral unit, and differential pressure tappings are not required. It is particularly suited for measuring the flow of high-viscosity liquids: hot asphalt, tars, oils, and slurries at pressures up to 100 bar and Reynolds numbers as low as 2000. Figure 6.15 shows the meter and working principles. The liquid impinging on the target will be brought to rest, so that the pressure increases by V²/2g in terms of head of liquid, and the force F on the target will be

    F = K·ρl·V1²·At/2 N    (6.41)
where ρl is the mass per unit volume in kg/m³, At is the area of the target in m², K is a constant, and V1 is the velocity in m/s of the liquid through the annular ring between target and pipe. If the pipe diameter is D m and the target diameter d m, the area A of the annular space equals π(D² − d²)/4 m². Therefore, the volume flow rate is:

    Q = A·V1 = [π(D² − d²)/4]·√[8F/(K·ρl·π·d²)]
             = [C·(D² − d²)/d]·√(F/ρl) m³/s    (6.42)

where C is a new constant including the numerical factors. The mass flow rate is:

    W = Q·ρl = [C·(D² − d²)/d]·√(F·ρl) kg/s    (6.43)

Figure 6.14 (a) The essentials of Gilflo A. As flow increases the measuring orifice moves along the control cone against the spring bellows. Courtesy of Gervase Instruments Ltd. (b) Gilflo B extends the principle to higher flow. Now the orifice is fixed and the control cone moves against the spring. Courtesy of Gervase Instruments Ltd.
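Equations (6.42) and (6.43) reduce to a few lines of arithmetic once K is known. In the sketch below every value, including K, is invented; in practice K (and hence the combined constant C) is empirical and comes from the manufacturer's calibration, as the text notes.

import math

# Sketch of Eqs. (6.42)/(6.43) for a target flowmeter.  All values,
# including the drag constant K, are invented for illustration.
K = 1.3                 # empirical constant; use the calibrated value in practice
D = 0.050               # pipe diameter, m
d = 0.030               # target diameter, m
rho_l = 900.0           # liquid density, kg/m^3 (e.g., a light oil)
F = 2.5                 # measured force on the target, N

# C follows from combining Eq. (6.41) with the annular area:
# C = sqrt(pi/(2*K)), absorbing the numerical factors into one constant.
C = math.sqrt(math.pi / (2 * K))
Q = C * (D**2 - d**2) / d * math.sqrt(F / rho_l)   # m^3/s, Eq. (6.42)
W = Q * rho_l                                      # kg/s, Eq. (6.43)
print(f"Q = {Q*3600:.2f} m^3/h, W = {W:.3f} kg/s")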
The force F is balanced through the force bar and measured by a balanced strain-gauge bridge whose output signal is proportional to the square root of flow. Available flow ranges vary from 0–52.7 to 0–123 liters/minute for the 19 mm size at temperatures up to 400°C, to a range of 0–682 to 0–2273 liters/minute for the 100 mm size at temperatures up to 260°F. Meters are also available for gas flow. The overall accuracy of the meter is ±0.5 percent, with repeatability of ±0.1 percent. Target flowmeters are in use in applications as diverse as supersaturated two-phase steam and municipal water distribution. Wet chlorine gas and liquefied chlorine gas are also applications for this type of device. The shape of the target, which produces the repeatability of the device, is empirical and highly proprietary among manufacturers.
6.3.2 Rotating Mechanical Meters for Liquids Rotating mechanical flowmeters derive a signal from a moving rotor that is rotated at a speed proportional to the fluid flow velocity. Most of these meters are velocitymeasuring devices except for positive displacement meters, which are quantity or volumetric in operation. The principal types are positive displacement, rotating vane, angled propeller, bypass, helix, and turbine meters.
6.3.2.1 Positive Displacement Meters
Figure 6.15 A Target flowmeter with an electronic transmitter. Courtesy of the Venture Measurement Division of Alliant Inc.
Positive displacement meters are widely used on applications where high accuracy and good repeatability are required. Accuracy is not affected by pulsating flow, and accurate measurement is possible at higher liquid viscosities than with many other flowmeters. Positive displacement meters are frequently used in oil and water undertakings for accounting purposes.
The principle of the measurement is that as the liquid flows through the meter, it moves a measuring element that seals off the measuring chamber into a series of measuring compartments, which are successively filled and emptied. Thus for each complete cycle of the measuring element a fixed quantity of liquid is permitted to pass from the inlet to the outlet of the meter. The seal between the measuring element and the measuring chamber is provided by a film of the measured liquid. The number of cycles of the measuring element is indicated by several possible means, including a pointer moving over a dial driven from the measuring element by suitable gearing and a magnetically coupled sensor connected to an electronic indicator or "flow computer." The extent of error, defined as the difference between the indicated quantity and the true quantity and expressed as a percentage of the true quantity, is dependent on many factors, among them:

a. The amount of clearance between the rotor and the measuring chamber through which liquid can pass unmetered.
b. The amount of torque required to drive the register. The greater the torque, the greater the pressure drop across the measuring element, which in turn determines the leakage rate past the rotor. This is one reason that electronic readout devices have become much more common in recent years, since they eliminate this error factor.
c. The viscosity of the liquid to be measured. Increase in viscosity will also result in increased pressure drop across the measuring element, but this is compensated for by the reduction in flow through the rotor clearances for a given pressure drop.

The accuracy of measurement attained with a positive displacement meter varies very considerably from one design to another, with the nature and condition of the liquid measured, and with the rate of flow. Great care should be taken to choose the correct meter for an application. The most common forms of positive displacement meters are rotary piston, reciprocating piston, nutating disc, fluted spiral rotor, sliding vane, rotating vane, and oval gear.

Rotary Piston The rotary-piston flowmeter is most common in the water industry, where it is used for metering domestic supplies. It consists of a cylindrical working chamber that houses a hollow cylindrical piston of equal length. The central hub of the piston is guided in a circular motion by two short inner cylinders. The piston and cylinder are alternately filled and emptied by the fluid passing through the meter. A slot in the sidewall of the piston is removed so that a partition extending inward from the bore of the working chamber can be inserted. This has the effect of restricting the movement of the piston to a sliding motion along the partition. The rotary movement of the piston is transmitted via a permanent-magnet coupling from the drive shaft to a
Figure 6.16 Rotary-piston positive displacement meter. Courtesy of ABB Instrument Group. 1. Lid. 2. Hinge pin. 3. Counter housing complete with lid and hinge pin. 4. Counter with worm reduction gear and washer. 5. Counter washer. 6. Ramp assembly. 7. Top plate assembly comprising top plate only; driving spindle; driving dog; dog retaining clip. 8. Piston. 9. Shutter. 10. Working chamber only. 11. Locating pin. 12. Strainer-plastic. Strainer-copper. 13. Strainer cap. 14. Circlip. 15. Nonreturn valve. 16. O ring. 17. Chamber housing. 18. Protective caps for end threads.
mechanical register or electronic readout device. The basic design and principle of operation of this meter is shown diagrammatically in Figure 6.16.

Reciprocating Piston A reciprocating meter can be either of single- or multi-piston type, this being dependent on the application. This type of meter exhibits a wide turndown ratio (e.g., 300:1), with extreme accuracy of ±0.1 percent, and can be used for a wide range of liquids. Figure 6.17 illustrates the operating principle of this type of meter. Suppose the piston is at the bottom of its stroke. The valve is so arranged that inlet liquid is admitted below the piston, causing it to travel upward and the liquid above the piston to be discharged to the outlet pipe. When the piston has reached the limit of its travel, the top of the cylinder is cut off from
Figure 6.17 Reciprocating-piston meter.
the outlet side and opened to the inlet liquid supply. At the same time the bottom of the cylinder is opened to the outlet side but cut off from the inlet liquid. The pressure of the incoming liquid will therefore drive the piston downward, discharging the liquid from below the piston to the outlet pipe. The process repeats. As the piston reciprocates, a ratchet attached to the piston rod provides an actuating force for an incremental counter, each count representing a predetermined quantity of liquid. Newer devices use magnetically coupled sensors—Hall-effect or Wiegand-effect types being quite common—or optical encoders to produce the count rate.
Figure 6.18 Nutating-disc meter.
Figure 6.19 Fluted-spiral-rotor type of meter.
This type of meter is used mainly for measuring crude and refined petroleum products covering a range of flows up to 3000 m³/h at pressures up to 80 bar.

Sliding-Vane Type The principle of this type is illustrated in Figure 6.20. It consists of an accurately machined body containing a rotor revolving on ball bearings. The rotor has four evenly spaced slots, forming guides for four vanes. The vanes are in contact with a fixed cam. The four cam followers follow the contour of the cam, causing the vanes to move radially. This ensures that during transit through the measuring chamber the vanes are in contact with the chamber wall. The liquid impact on the blades causes the rotor to revolve, allowing a quantity of liquid to be discharged. The number of revolutions of the rotor is a measure of the volume of liquid passed through the meter.
Figure 6.21 Oval-gear meter.
Oval-gear meters are available in a wide range of materials, in sizes from 10 to 400 mm, and are suitable for pressures up to 60 bar and flows up to 1200 m³/h. Accuracy of ±0.25 percent of rate of flow can be achieved.
6.3.2.2 Rotating Vane Meters The rotating vane type of meter operates on the principle that the incoming liquid is directed to impinge tangentially on the periphery of a free-spinning rotor. The rotation is monitored by magnetic or photoelectric pickup, the frequency of the output being proportional to flow rate, or alternatively by a mechanical register connected through gearing to the rotor assembly, as shown in Figure 6.22. Accuracy is dependent on calibration, and turndown ratios up to 20:1 can be achieved. This device is particularly suited to low flow rates. Figure 6.20 Sliding-vane type meter. Courtesy of Wayne Tank & Pump Co.
Rotating-Vane Type This meter is similar in principle to the sliding-vane meter, but the measuring chambers are formed by four half-moon-shaped vanes spaced equidistant on the rotor circumference. As the rotor is revolved, the vanes turn to form sealed chambers between the rotor and the meter body. Accuracy of ±0.1 percent is possible down to 20 percent of the rated capacity of the meter.

Oval-Gear Type This type of meter consists of two intermeshing oval gearwheels that are rotated by the fluid passing through it. This means that for each revolution of the pair of wheels, a specific quantity of liquid is carried through the meter. This is shown diagrammatically in Figure 6.21. The number of revolutions is a precise measurement of the quantity of liquid passed. A spindle extended from one of the gears can be used to determine the number of revolutions and convert them to engineering units by suitable gearing.
6.3.2.3 Angled-Propeller Meters The propeller flowmeter comprises a Y-type body, with all components apart from the propeller being out of the liquid stream. The construction of this type of meter is shown in Figure 6.23. The propeller has three blades and is designed to give maximum clearance in the measuring chamber, thereby allowing maximum tolerance of suspended particles. The propeller body is angled at 45° to the main flowstream, and liquid passing through the meter rotates it at a speed proportional to flow rate. As the propeller goes through each revolution, encapsulated magnets generate pulses through a pickup device, with the number of pulses proportional to flow rate.
6.3.2.4 Bypass Meters In a bypass meter (also known as a shunt meter), a proportion of the liquid is diverted from the main flowstream by an orifice plate into a bypass configuration. The liquid is concentrated through nozzles to impinge on the rotors of
Figure 6.22 Rotating-vane type meter.
Figure 6.23 Angled-propeller meter.
a small turbine located in the bypass, the rotation of the turbine being proportional to flow rate. This type of device can give moderate accuracy over a 5:1 turndown ratio and is suitable for liquids, gases, and steam. Bypass meters have been used with other shunt-meter devices, including Coanda-effect oscillatory flowmeters, rotameters, ultrasonic meters, and positive displacement meters and multijets.
6.3.2.5 Helix Meters In a helix meter, the measuring element takes the form of a helical vane mounted centrally in the measuring chamber with its axis along the direction of flow, as shown in Figure 6.24. The vane consists of a hollow cylinder with accurately formed wings. Owing to the effect of the buoyancy of the liquid on the cylinder, friction between its spindle and the sleeve bearings is small. The water is directed evenly onto the vanes by means of guides.
Figure 6.24 Helix meter, exploded view. 1. Body. 2. Top cover with regulator plug and regulator sealing ring. 3. Top cover plate. 4. Joint plate. 5. Joint plate gasket. 6. Joint plate screws. 7. Top cover sealing ring. 8. Body bolt. 9. Body bolt unit. 10. Body bolt washer. 11. Regulator plug. 12. Regulator plug sealing ring. 13. Joint breaking screw. 14. Counter box screw. 15. Measuring element. 16. Element securing screw. 17. Element securing screw washer. 18. Back bearing cap assembly. 19. Back vane support. 20. Tubular dowel pin. 21. Vane. 22. Worm wheel. 23. Vertical worm shaft. 24. First pinion. 25. Drive clip. 26. Regulator assembly. 27. Regulator assembly screw. 28. Undergear. 29. Undergear securing screw. 30. Register.
Transmission of the rotation from the undergear to the meter register is by means of a ceramic magnetic coupling. The body of the meter is cast iron, and the mechanism and body cover are of thermoplastic injection molding. The meter causes only a small head loss in operation and is suited for use in water-distribution mains. It is available in sizes from 40 mm up to 300 mm, respective maximum flow rates being 24 m³/h and 1540 m³/h, with accuracy of ±2 percent over a 20:1 turndown ratio.
6.3.2.6 Turbine Meters A turbine meter consists of a practically friction-free rotor pivoted along the axis of the meter tube and designed in such a way that the rate of rotation of the rotor is proportional to the rate of flow of fluid through the meter. This rotational speed is sensed by means of an electric pick-off
In many similar product designs, the rotor is designed so that the pressure distribution of the process liquid helps suspend the rotor in an "axial" floating position, thereby eliminating end-thrust and wear, improving repeatability, and extending the linear flow range. This is illustrated in Figure 6.25(b). As the liquid flows through the meter, there is a small, gradual pressure loss up to Point A caused by the rotor hangers and housing. At this point the area through which flow can take place reduces and velocity increases, resulting in a pressure minimum at Point B. By the time the liquid reaches the downstream edge of the rotor (C), the flow pattern has reestablished itself and a small pressure recovery occurs, causing the rotor to move hard upstream in opposition to the downstream forces. To counteract this upstream force, the rotor hub is designed to be slightly larger in diameter than the outside diameter of the deflector cone to provide an additional downstream force. A hydraulic balance point is reached, with the rotor floating completely clear of any end stops. The turbine meter is available in a range of sizes up to 500 mm, with linearity better than ±0.25 percent and repeatability better than ±0.02 percent, and can be bidirectional in operation. To ensure optimum operation of the meter, it is necessary to provide a straight pipe section of 10 pipe-diameters upstream and 5 pipe-diameters downstream of the meter. The addition of flow straighteners is sometimes necessary.
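In practice the pick-off signal is handled as a pulse train characterized by a K-factor (pulses per unit volume) established at calibration: flow is the pulse frequency divided by the K-factor, and totalized volume is the accumulated count divided by the K-factor. A minimal sketch with an invented K-factor:

# Turbine meter readout: rotor speed, hence pulse frequency f, is
# proportional to flow, so Q = f / K where K is the calibration
# K-factor in pulses per unit volume.  Values below are invented.
K_FACTOR = 12_500.0        # pulses per m^3, from a calibration certificate

def flow_from_frequency(f_hz: float) -> float:
    """Volumetric flow in m^3/h from pulse frequency in Hz."""
    return f_hz / K_FACTOR * 3600.0

def volume_from_counts(pulses: int) -> float:
    """Totalized volume in m^3 from an accumulated pulse count."""
    return pulses / K_FACTOR

print(flow_from_frequency(250.0))      # 72.0 m^3/h
print(volume_from_counts(1_000_000))   # 80.0 m^3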
6.3.3 Rotating Mechanical Meters for Gases The principal types to be discussed are positive displacement, deflecting vane, rotating vane, and turbine.
6.3.3.1 Positive Displacement Meters
Figure 6.25 (a) Principle of operation of turbine meter. (b) Pressure distribution through turbine meter.
coil fitted to the outside of the meter housing, as shown in Figure 6.25(a). The only moving component in the meter is the rotor, and the only component subject to wear is the rotor bearing assembly. However, with careful choice of materials (e.g., tungsten carbide for bearings) the meter should be capable of operating for up to five years without failure.
Three main types of meter come under the heading of positive displacement. They are diaphragm meters, wet gas meters (liquid sealed drum), and rotary displacement meters. Diaphragm meters (bellows type) This type of meter has remained fundamentally the same for over 100 years and is probably the most common kind of meter in existence. It is used in the United Kingdom for metering the supply of gas to domestic and commercial users. The meter comprises a metal case with an upper and a lower section. The lower section consists of four chambers, two of which are enclosed by flexible diaphragms that expand and contract as they are charged and discharged with the gas being metered. Figure 6.26 illustrates the meter at four stages of its operating cycle. Mechanical readout is obtained by linking the diaphragms to suitable gearing, since each cycle of the diaphragms discharges a known quantity of gas. This type of meter is of necessity highly accurate and trouble-free,
Figure 6.26 Diaphragm meter: stages of operation.
and the performance is governed by the regulations of the Department of Trade and Industry. Liquid Sealed Drum This type of meter differs from the bellows type of meter in that the sealing medium for the measuring chambers is not solid but is water or some other suitable liquid. The instrument is shown in section in Figure 6.27. It consists of an outer chamber of tinned brass plate or Staybrite steel sheeting containing a rotary portion. This rotating part consists of shaped partitions forming four measuring chambers made of light-gauge tinplate or Staybrite steel, balanced about a center spindle so that it can rotate freely. Gas enters by the gas inlet near the center and leaves by the outlet pipe at the top of the outer casing. The measuring chambers are sealed off by water or other suitable liquid, which fills the outer chamber to just above the center line. The level of the water is so arranged that when one chamber becomes unsealed to the outlet side, the partition between it and the next chamber seals it off from the inlet side. Thus each measuring chamber will, during the course of a rotation, deliver a definite volume of gas from the inlet side to the outlet side of the instrument. The actual volume delivered will depend on the size of the chamber and the level of the water in the instrument. The level of the water is therefore critical and is maintained at the correct value by means of a hook type of level indicator in a side chamber, which is connected to the main chamber of the instrument. If the level becomes very low, the measuring chambers will become unsealed and gas can pass freely through the instrument without being measured; if the level is too high, the volume delivered at each rotation will be too small, and water may pass back down
Figure 6.27 Liquid sealed drum type gas meter.
the inlet pipe. The correct calibration is obtained by adjusting the water level. When a partition reaches a position where a small sealed chamber is formed connected to the inlet side, there is a greater pressure on the inlet side than on the outlet side. There will therefore be a force that moves the partition in an anticlockwise direction, thus increasing the volume of the chamber. This movement continues until the chamber is sealed off from the inlet pipe but opened up to the outlet side; at the same time the chamber has become open to the inlet gas but sealed off from the outlet side. This produces continuous rotation. The rotation operates a counter that indicates complete rotations and fractions of rotation and can be calibrated in actual volume units. The spindle between the rotor and the counter is usually made of brass and passes through a grease-packed gland. The friction of this gland, together with the friction in the counter gearing, will determine the pressure drop across the meter, which is found to be almost independent of the speed of rotation. This friction must be kept as low as possible, for if there is a large pressure
difference between inlet and outlet sides of the meter, the level of the water in the measuring chambers will be forced down, causing errors in the volume delivered; and at low rates of flow the meter will rotate in a jerky manner. It is very difficult to produce partitions of such a shape that the meter delivers accurate amounts for fractions of a rotation; consequently the meter is only approximately correct when fractions of a rotation are involved. The mass of gas delivered will depend on the temperature and pressure of the gas passing through the meter. The volume of gas is measured at the inlet pressure of the meter, so if the temperature and the density of the gas at STP are known, it is not difficult to calculate the mass of gas measured. The gas will, of course, be saturated with water vapor, and this must be taken into account in finding the partial pressure of the gas. Rotating-Impeller Type This type of meter is similar in principle to the rotating-impeller type meter for liquids and could be described as a two-toothed gear pump. It is shown schematically in Figure 6.28. Although the meter is usually manufactured almost entirely from cast iron, other materials can be used. The meter basically consists of two impellers housed in a casing and supported on rolling element bearings. A clearance of a few thousandths of an inch between the impellers and the casing prevents wear, with the result that the calibration of the meter remains constant throughout its life. The leakage rate is only a small fraction of 1 percent, and this is compensated for in the gearing counter ratio. Each lobe of the impellers has a scraper tip machined onto its periphery to prevent deposits forming in the measuring chamber. The impellers are timed relative to each other by gears fitted to one or both ends of the impeller shafts. The impellers are caused to rotate by the decrease in pressure, which is created at the meter outlet following a consumer’s use of gas. Each time an impeller passes through the vertical position, a pocket of gas is momentarily trapped between the impeller and the casing. Four
Figure 6.28 Rotary displacement meter.
pockets of gas are therefore trapped and expelled during each complete revolution of the index shaft. The rotation of the impellers is transmitted to the meter counter by suitable gearing so that the counter reads directly in cubic feet. As the meter records the quantity of gas passing through it at the conditions prevailing at the inlet, it is necessary to correct the volume indicated by the meter index for various factors. These are normally pressure, temperature, and compressibility. Corrections can be carried out manually if the conditions within the meter are constant. Alternatively, the correction can be made continuously and automatically by small mechanical or electronic computers if conditions within the meter vary continuously and by relatively large amounts. Meters can also drive, through external gearing, various types of pressure- or temperature-recording devices as required. Meters of this type are usually available for pressures up to 60 bar and will measure flow rates from approximately 12 m³/h up to 10,000 m³/h. Within these flow rates the meters will have a guaranteed accuracy of ±1.0 percent over a range of from 5 to 100 percent of maximum capacity. The pressure drop across the meter at maximum capacity is always less than 50 mm wg. These capacities and the pressure-loss figures are for meters operating at low pressure; the values would be subject to the effects of gas density at high pressure.
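The pressure/temperature/compressibility correction described above is exactly what a flow computer applies. A minimal sketch, assuming ideal-gas behavior except for user-supplied compressibility terms; the base conditions of 1.01325 bar and 15°C are assumptions chosen for the example:

# Convert a volume registered at meter (line) conditions to reference
# (base) conditions:
#   V_base = V_line * (p_line/p_base) * (T_base/T_line) * (Z_base/Z_line)
# Pressures absolute, temperatures in kelvin; Z terms = 1 for an ideal gas.

def to_base_volume(v_line, p_line, t_line, p_base=1.01325, t_base=288.15,
                   z_line=1.0, z_base=1.0):
    return v_line * (p_line / p_base) * (t_base / t_line) * (z_base / z_line)

# Example: 100 m^3 registered at 4 bar(a) and 30 C.
print(to_base_volume(100.0, 4.0, 303.15))   # roughly 375 m^3 at base conditions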
6.3.3.2 Deflecting-Vane Type: Velometers The principle of this type of instrument is similar to that of the same instrument for liquids. The construction, however, has to be different because the density of a gas is usually considerably less than that of a liquid. As the force per unit area acting on the vane depends on the rate of change of momentum and momentum is mass multiplied by velocity, the force will depend on the density and on the velocity of the impinging gas. The velocity of gas flow in a main is usually very much greater (6 to 10 times) than that of liquid flow, but this is not sufficient to compensate for the greatly reduced density. (Density of dry air at 0°C and 760 mm is 0.0013 g/ml; density of water is 1 g/ml.) The vane must therefore be considerably larger when used for gases or be considerably reduced in weight. The restoring force must also be made small if an appreciable deflection is to be obtained. The simple velometer consists of a light vane that travels in a shaped channel. Gas flowing through the channel deflects the vane according to the velocity and density of the gas, the shape of the channel, and the restoring torque of the hairspring attached to the pivot of the vane. The velometer is usually attached to a “duct jet,” which consists of two tubes placed so that the open end of one faces upstream while the open end of the other points downstream. The velometer then measures the rate of flow through the pair of tubes, and because this depends on the
lengths and sizes of connecting pipes and the resistance and location of the pressure holes, each assembly needs individual calibration. The main disadvantages of this simple velometer are the effects of hot or corrosive gases on the vane and channel. This disadvantage may be overcome by measuring the flow of air through the velometer produced by a differential air pressure equal to that produced by the “duct jet.” In this way the hot gases do not pass through the instrument, and so it is not damaged.
is positioned by pillars (15), which are secured to the top flange of the internal tubular body. The meter casing is made of cast iron; the anemometer is made from aluminum. The larger sizes have a separate internal tubular body made from cast iron, with a brass or mild steel skirt that forms part of the overall measuring element. Its area of application is in the measurement of gas flow in industrial and commercial installations at pressures up to 1.5 bar and flows up to 200 m³/h, giving accuracy of ±2 percent over a flow range of 10:1.
6.3.3.3 Rotating-Vane Type
6.3.3.4 Turbine Meters
Anemometers As in the case of the deflecting-vane type, the force available from gases to produce the rotation of a vane is considerably less than that available in the measurement of liquids. The vanes must therefore be made light or have a large surface area. The rotor as a whole must be accurately balanced, the bearings must be as friction-free as possible, and the rotor may take the form of a multi-cup or multiple-fan-blade design, the speed of rotation being proportional to air speed.
The gas turbine meter operates on the same principle as the liquid turbine meter, although the design is somewhat different since the densities of gases are much lower than those of liquids; high gas velocities are required to turn the rotor blades.
Rotary Gas Meter The rotary meter is a development of the air meter type of anemometer and is shown in Figure 6.29. It consists of three main assemblies: the body, the measuring element, and the multipointer index driven through the intergearing. The lower casing (1) has integral inline flanges (2) and is completed by the bonnet (3) with index glass (4) and bezel (5). The measuring element is made up of an internal tubular body (6), which directs the flow of gas through a series of circular ports (7) onto a vaned anemometer (8). The anemometer is carried by a pivot (9) that runs in a sapphire–agate bearing assembly (10), the upper end being steadied by a bronze bush (11). The multipointer index (12) is driven by an intergear (13) supported between index plates (14). The index assembly
6.3.4 Electronic Flowmeters Flowmeters in this category either operate on an electronically based principle or rely on an electronic device for their primary sensing. Most of the flowmeters discussed in this section have undergone considerable development in the last five years, and the techniques outlined are a growth area in flowmetering applications. They include electromagnetic flowmeters, ultrasonic flowmeters, oscillatory flowmeters, and cross-correlation techniques. It is important to note, however, that there has been very limited development of fundamentally new techniques in flowmetering since the early 1980s, due in part to concentration of effort on the design of other sensors and control systems.
6.3.4.1 Electromagnetic Flowmeters The principle of operation of this type of flowmeter is based on Faraday’s law of electromagnetic induction, which states that if an electric conductor moves in a magnetic field, an electromotive force (EMF) is induced, the amplitude of which is dependent on the strength of the magnetic field, the velocity of the movement, and the length of the conductor, such that
Figure 6.29 Diagrammatic section of a rotary gas meter. Courtesy of Parkinson & Cowan Computers.
E ∝ BlV    (6.44)
where E is EMF, B is magnetic field density, l is length of conductor, and V is the rate at which the conductor is cutting the magnetic field. The direction of the EMF with respect to the movement and the magnetic field is given by Fleming’s right-hand generator rule. If the conductor now takes the form of a conductive liquid, an EMF is generated in accordance with Faraday’s law. It is useful at this time to refer to BS 5792 1980, which states: “If the magnetic field is perpendicular to an electrically
insulating tube through which a conductive liquid is flowing, a maximum potential difference may be measured between two electrodes positioned on the wall of the tube such that the diameter joining the electrodes is orthogonal to the magnetic field. The potential difference is proportional to the magnetic field strength, the axial velocity, and the distance between the electrodes.” Hence the axial velocity and rate of flow can be determined. This principle is illustrated in Figure 6.30(a). Figure 6.30(b) shows the basic construction of an electromagnetic flowmeter. It consists of a primary device, which contains the pipe through which the liquid passes, the measurement electrodes, and the magnetic field coils and a secondary device, which provides the field-coil excitation and amplifies the output of the primary device and converts it to a form suitable for display, transmission, and totalization. The flow tube, which is effectively a pipe section, is lined with some suitable insulating material (dependent on liquid
type) to prevent short-circuiting of the electrodes, which are normally button type and mounted flush with the liner. The field coils wound around the outside of the flow tube are usually epoxy-resin encapsulated to prevent damage by damp or liquid submersion. Field-Coil Excitation To develop a suitable magnetic field across the pipeline, it is necessary to drive the field coil with some form of electrical excitation. It is not possible to use pure DC excitation, due to the resulting polarization effect on the electrodes and subsequent electrochemical action, so some form of AC excitation is employed. The most common techniques are sinusoidal and nonsinusoidal (square wave, pulsed DC, or trapezoidal). Sinusoidal AC Excitation Most early electromagnetic flowmeters used standard 50 Hz mains voltage as an excitation source for the field coils, and in fact most systems in use
Figure 6.30 (a) Principle of operation: electromagnetic flowmeter. (b) Electromagnetic flowmeter detector head: exploded view.
today operate on this principle. The signal voltage will also be AC and is normally capacitively coupled to the secondary electronics to avoid any DC interfering potentials. This type of system has several disadvantages. Due to the AC excitation, the transformer effect produces interfering voltages, caused by stray pickup by the signal cables from the varying magnetic field. It has a high power consumption and suffers from zero drift caused by the previously mentioned interfering voltages and by electrode contamination; this necessitates manual zero-control adjustment. These problems have now been largely overcome by the use of nonsinusoidal excitation. Nonsinusoidal Excitation Here it is possible to arrange that the rate of change of flux density dB/dt = 0 for part of the excitation cycle; therefore, there is no transformer action during this period. The flow signal is sampled during these periods and is effectively free from induced error voltages. Square-wave, pulsed, and trapezoidal excitations have all been employed, initially at frequencies around 50 Hz, but most manufacturers have now opted for low-frequency systems (2–7 Hz) offering the benefits of minimum power consumption (i.e., only 20 percent of the power used by a comparable 50 Hz system), automatic compensation for interfering voltages, automatic zero adjustment, and tolerance of light buildup of material on electrode surfaces. An example of this type of technique is illustrated in Figure 6.31, where square-wave excitation is used. The DC supply to the coils is switched on and off at approximately 2.6 Hz, with polarity reversal every cycle. Figure 6.31(a)
Figure 6.31 Electromagnetic flowmeter: pulsed DC excitation. Courtesy of Flowmetering Instruments Ltd.
shows the ideal current waveform for pulsed DC excitation but, because of the inductance of the coils, this waveform cannot be entirely achieved. The solution, as shown in Figure 6.31(b), is to power the field coils from a constant-current source, giving a near-square-wave excitation. The signal produced at the measuring electrodes is shown in Figure 6.31(c). The signal is sampled at five points during each measurement cycle, as shown, and microprocessor techniques are utilized to evaluate and separate the true flow signal from the combined flow and zero signals, as shown in the equation in Figure 6.31(c). Area of Application Electromagnetic flowmeters are suitable for measuring a wide variety of liquids, such as dirty liquids, pastes, acids, slurries, and alkalis; accuracy is largely unaffected by changes in temperature, pressure, viscosity, density, or conductivity. In the case of conductivity, however, the value must be greater than 1 micromho/cm.
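The zero-rejection benefit of polarity-reversing excitation can be seen in a minimal numerical sketch (all names and values below are assumed for illustration; real converters sample the waveform at several points per cycle, as described above):

# Illustrative model of square-wave (pulsed DC) magmeter excitation.
# The electrode signal is +k*Q while the coil current is positive and
# -k*Q while it is negative, plus a slowly varying zero offset caused
# by electrochemical effects. Differencing the two samples cancels it.
k = 0.05          # sensitivity, volts per (m3/h) -- assumed value
offset = 0.012    # electrode zero offset, volts -- assumed value
flow = 40.0       # true flow rate, m3/h

s_plus = k * flow + offset     # sample taken during the +I half-cycle
s_minus = -k * flow + offset   # sample taken during the -I half-cycle

flow_estimate = (s_plus - s_minus) / (2 * k)
print(round(flow_estimate, 6))  # 40.0 -- the offset has cancelled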
Figure 6.32 Encapsulated coil magmeter. Courtesy of ISCO Inc.
Installation The primary element can be mounted in any attitude in the pipework, although care should be taken to ensure that when the flowmeter is mounted horizontally, the axis of the electrodes lies in the horizontal plane. Where buildup of deposits on the electrodes is a recurring problem, there are three alternatives for consideration:
A. Ultrasonic cleaning of the electrodes.
B. Capacitive electrodes that do not come into contact with the flowstream, so that insulating coatings have no effect.
C. Removable electrodes, inserted through a hot-tap valve assembly, enabling the electrodes to be withdrawn from the primary and physically examined and cleaned, then reinserted under pressure and without stopping the flow.
It should be noted that on insulated pipelines, earthing rings will normally be required to ensure that the flowmeter body is at the same potential as that of the flowing liquid, to prevent circulating currents and interfering voltages. Recently, a magnetic flowmeter design was introduced that relies on a self-contained coil-and-electrode package, mounted at 180° to a similar package, across the centerline of the flow tube. This design does not require a fully lined flow tube and appears to have some advantages in terms of cost in medium- and larger-sized applications. The accuracy of the flowmeter can be affected by flow profile, and the user should allow at least 10 straight-pipe diameters upstream and 5 straight-pipe diameters downstream of the primary element to ensure optimum conditions. In addition, to ensure system accuracy, it is essential that the primary element remain filled with the liquid being metered at all times; entrained gases will cause similar inaccuracy. For further information on installation requirements, the reader is referred to the relevant sections of BS 5792 1980. Flowmeters are available in sizes from 32 mm to 1200 mm nominal bore to handle flow velocities from 0–0.5 m/s to 0–10 m/s, with accuracy of ±1 percent over a 10:1 turndown ratio.
6.3.4.2 Ultrasonic Flowmeters Ultrasonic flowmeters measure the velocity of a flowing medium by monitoring interaction between the flowstream and an ultrasonic sound wave transmitted into or through it. Many techniques exist; the two most commonly applied are Doppler and transmissive (time of flight). These will now be dealt with separately. Doppler Flowmeters These devices use the well-known Doppler effect, which states that the frequency of sound changes if its source or reflector moves relative to the listener or monitor. The magnitude of the frequency change is an indication of the speed of the sound source or sound reflector.
Figure 6.33 Principle of operation: Doppler meter.
In practice the Doppler flowmeter comprises a housing in which two piezoelectric crystals are potted, one a transmitter and the other a receiver, with the whole assembly located on the pipe wall, as shown in Figure 6.33. The transmitter transmits ultrasonic waves of frequency F1 at an angle θ to the flowstream. If the flowstream contains particles, entrained gas, or other discontinuities, some of the transmitted energy will be reflected back to the receiver. If the fluid is travelling at velocity V, the frequency of the reflected sound as monitored by the receiver can be shown to be F2 such that

F2 = F1 ± 2V · cos θ · (F1/C)

where C is the velocity of sound in the fluid. Rearranging:

V = C(F2 − F1)/(2 · F1 · cos θ)

which shows that velocity is proportional to the frequency change.
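A worked sketch of the rearranged equation (all values are assumed for illustration, not taken from any particular instrument):

import math

# Doppler flowmeter: recover velocity from the measured frequency shift.
C = 1480.0                 # velocity of sound in water, m/s -- assumed
F1 = 1.0e6                 # transmitted frequency, Hz -- assumed
theta = math.radians(45)   # beam angle to the flow axis -- assumed

F2 = 1_000_950.0           # received (reflected) frequency, Hz

V = C * (F2 - F1) / (2 * F1 * math.cos(theta))
print(round(V, 3))         # ~0.994 m/s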
The Doppler meter is normally used as an inexpensive clamp-on flowmeter, the only operational constraints being that the flowstream must contain discontinuities of some kind (the device will not monitor clear liquids) and that the pipeline must be acoustically transmissive. Accuracy and repeatability of the Doppler meter are somewhat suspect and difficult to quantify, because its operation is dependent on flow profile, particle size, and suspended-solids concentration. However, under ideal conditions, and given the facility to calibrate in situ, accuracies of ±5 percent should be attainable. This type of flowmeter is most suitable for use as a flow switch or for flow indication where absolute accuracy is not required. Transmissive Flowmeters Transmissive devices differ from Doppler flowmeters in that they rely on transmission of an ultrasonic pulse through the flowstream and therefore do not depend on discontinuities or entrained particles in the flowstream for operation. The principle of operation is based on the transmission of an ultrasonic sound wave between two points, first in the
direction of flow and then against the flow. In each case the time of flight of the sound wave between the two points will have been modified by the velocity of the flowing medium, and the difference between the flight times can be shown to be directly proportional to flow velocity. In practice, the sound waves are not generated in the direction of flow but at an angle across it, as shown in Figure 6.34. Pulse transit times downstream T1 and upstream T2 along a path length D can be expressed as T1 = D/(C + V) and T2 = D/(C − V), where C is the velocity of sound in the fluid and V is the fluid velocity. Now:

ΔT = T2 − T1 = 2DV/(C² − V²)    (6.45)
Since V² is very small compared with C², it can be ignored. It is convenient to develop the expression in terms of frequency and remove the dependence on the velocity of sound (C). Since F1 = 1/T1 and F2 = 1/T2, and the average fluid velocity V̄ = V/cos θ, Equation (6.45) develops to:

F1 − F2 = (2V̄ cos θ)/D
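A short numerical check of these relationships, with assumed values; note that the recovered velocity does not depend on C:

import math

# Time-of-flight ultrasonic meter: the reciprocal transit times measured
# along the two directions give velocity independent of C.
D = 0.3                    # acoustic path length, m -- assumed
theta = math.radians(60)   # path angle to the pipe axis -- assumed

C, V_axial = 1480.0, 2.0          # used only to synthesize the data
V_path = V_axial * math.cos(theta)
T1 = D / (C + V_path)      # downstream transit time
T2 = D / (C - V_path)      # upstream transit time
F1, F2 = 1 / T1, 1 / T2    # corresponding repetition frequencies

V_mean = D * (F1 - F2) / (2 * math.cos(theta))
print(round(V_mean, 4))    # ~2.0 m/s, recovering the axial velocity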
The frequency difference is calculated by an electronic converter, which gives an analog output proportional to average fluid velocity. A practical realization of this technique operates in the following manner. A voltage-controlled oscillator generates electronic pulses from which two consecutive pulses are selected. The first of these is used to operate a piezoelectric ceramic crystal transducer, which projects an ultrasonic beam across the liquid flowing in a pipe. This ultrasonic pulse is then received on the other side of the pipe, where it is converted back to an electronic pulse. The latter is then received by the “first-arrival” electronics, comparing its arrival time with the second pulse received directly. If the two pulses are received at the same time, the period of time between them equates to the time taken for the first pulse to travel to its transducer and be converted to ultrasound, to travel across the flowstream, to be reconverted back to an electronic pulse, and to travel back to the first-arrival position.
Should the second pulse arrive before the first one, the time between pulses is too short. Then the first-arrival electronics will step down the voltage to the voltage-controlled oscillator (VCO), reducing the resulting frequency. The electronics will continue to reduce voltage to the VCO in steps, until the first and second pulses are received at the first-arrival electronics at the same time. At this point, the periodic time of the frequency will be the same as the ultrasonic flight time plus the electronic delay time. If a similar electronic circuit is now used to project an ultrasonic pulse in the opposite direction to that shown, another frequency will be obtained that, when subtracted from the first, will give a direct measure of the velocity of the fluid in the pipe, since the electronic delays will cancel out. In practice, the piezoelectric ceramic transducers used act as both transmitters and receivers of the ultrasonic signals and thus only one is required on each side of the pipe. Typically the flowmeter will consist of a flowtube containing a pair of externally mounted, ultrasonic transducers and a separate electronic converter/transmitter, as shown in Figure 6.35(a). Transducers may be wetted or nonwetted and consist of a piezoelectric crystal sized to give the desired frequency (typically 1–5 MHz for liquids and 0.2–0.5 MHz for gases). Figure 6.35(b) shows a typical transducer assembly.
Figure 6.34 Principle of operation: time-of-flight ultrasonic flowmeter.
Figure 6.35 (a) Ultrasonic flowmeter. Courtesy of Sparling Inc. (b) Transducer assembly.
Because the flowmeter measures velocity across the center of the pipe, it is susceptible to flow-profile effects, and care should be taken to ensure a sufficient length of straight pipe upstream and downstream of the flowtube to minimize such effects. To overcome this problem, some manufacturers use multiple-beam techniques in which several chordal velocities are measured and the average computed. However, it is still good practice to allow approximately 10 upstream and 5 downstream diameters of straight pipe. Furthermore, since this type of flowmeter relies on transmission through the flowing medium, fluids with a high solids or gas-bubble content cannot be metered. This type of flowmeter can be obtained for use on liquids or gases for pipe sizes from 75 mm nominal bore up to 1500 mm or more for special applications, and it is bidirectional in operation. Accuracy of better than ±1 percent of flow rate can be achieved over a flow range of 0.2 to 12 meters per second. This technique has also been successfully applied to open-channel and river flow and is now also readily available as a clamp-on flowmeter for closed pipes, but accuracy is dependent on knowledge of each installation, and in situ calibration is desirable.
6.3.4.3 Oscillatory “Fluidic” Flowmeters The operating principle of flowmeters in this category is based on the fact that if an obstruction of known geometry is placed in the flowstream, the fluid will start to oscillate in a predictable manner. The degree of oscillation is related to fluid flow rate. The three main types of flowmeter in this category are vortex-shedding flowmeters, swirl flowmeters, and the several Coanda-effect meters that are now available. The Vortex Flowmeter This type of flowmeter operates on the principle that if a bluff (i.e., nonstreamlined) body is placed in a flowstream, vortices will be detached or shed from the body. The principle is illustrated in Figure 6.36. The vortices are shed alternately to each side of the bluff body, the rate of shedding being directly proportional to flow velocity. If this body is fitted centrally into a pipeline, the vortex-shedding frequency is a measure of the flow rate. Any bluff body can be used to generate vortices in a flowstream, but for these vortices to be regular and well defined requires careful design. Essentially, the body must be nonstreamlined, symmetrical, and capable of generating vortices over a wide Reynolds-number range. The most commonly adopted bluff-body designs are shown in Figure 6.37. These designs all attempt to enhance the vortex-shedding effect, either to ensure regularity or to simplify the detection technique. If design (d) is considered, it will be noted that a second nonstreamlined body is placed just downstream of the vortex-shedding body. Its effect is to reinforce and stabilize the shedding. The width of the bluff body is determined by pipe size, and a rule-of-thumb guide is that the ratio of body width to pipe diameter should not be less than 0.2.
Figure 6.36 Vortex shedding.
Figure 6.37 (a)–(d) Bluff body shapes. (e) Thermal sensor. Courtesy of Actaris Neptune Ltd. (f) Shuttle ball sensor. Courtesy of Actaris Neptune Ltd.
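The shedding frequency of a well-designed bluff body follows, to a good approximation, the standard Strouhal relationship f ≈ St · V/d, with the Strouhal number St roughly constant over the meter's working Reynolds-number range; this is general fluid mechanics rather than a detail of any particular meter. A minimal sketch, with assumed values, of how frequency maps to the fixed calibration factor discussed below:

import math

# Vortex shedding: frequency f = St * V / d; the calibration factor
# (pulses per m3) then follows from geometry alone. Values assumed.
St = 0.25          # representative Strouhal number -- assumed
d = 0.02           # bluff body width, m (0.2 x pipe diameter)
D_pipe = 0.10      # pipe diameter, m -- assumed
V = 3.0            # flow velocity, m/s -- assumed

f = St * V / d                    # shedding frequency, Hz
Q = V * math.pi * D_pipe**2 / 4   # volumetric flow rate, m3/s
K = f / Q                         # pulses per m3, fixed by geometry
print(round(f, 1), round(K))      # 37.5 Hz, ~1592 pulses/m3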
Sensing Methods Once the bluff-body type has been selected, a technique must be adopted to detect the vortices. Various methods exist, the more popular techniques being as follows:
A. Ultrasonic. Where the vortices pass through an ultrasonic beam and cause refraction of this beam, resulting in modulation of the beam amplitude.
B. Thermal (Figure 6.37(e)). Where a thermistor-type sensor is located in a through passage across the bluff body and behind its face. The heated thermistor will sense the alternating vortices due to the cooling effect caused by their passage, and an electrical pulse output is obtained.
C. Oscillating disc. Sensing ports on both sides of the flow element cause a small disc to oscillate. A variable-reluctance pickup detects the disc’s oscillation. This type is particularly suited to steam or wet-gas flow.
D. Capacitance. Metal diaphragms are welded on opposite sides of the bluff body, the small gaps between the diaphragms and the body being filled with oil. Interconnecting ports allow transfer of oil between the two sides. An electrode is placed close to each plate and the oil used as a dielectric. The vortices alternately deform the diaphragm plates, causing a capacitance change between the diaphragm and the electrode. The frequency of changes in capacitance is equal to the shedding frequency.
E. Strain. Here the bluff body is designed such that the alternating pressures associated with vortex shedding are applied to a cantilevered section to the rear of the body. The alternating vortices create a cyclic strain on the rear of the body, which is monitored by an internal strain gauge.
F. Shuttle ball (Figure 6.37(f)). The shuttle technique uses the alternating pressures caused by vortex shedding to drive a magnetic shuttle up and down the axis of a flow element. The motion of the shuttle is detected by a magnetic pickup.
The output derived from the primary sensor is a low-frequency signal dependent on flow; this is then applied to conditioning electronics to provide either an analog or digital output for display and transmission. The calibration factor (pulses per m³) for the vortex meter is determined by the dimensions and geometry of the bluff body and will not change. Installation parameters for vortex flowmeters are quite critical. Pipe-flange gaskets upstream and at the transmitter should not protrude into the flow, and to ensure a uniform velocity profile there should be 20 diameters of straight pipe upstream and 5 diameters downstream. Flow straighteners can be used to reduce this requirement if necessary. The vortex flowmeter has wide-ranging applications in both gas and liquid measurement, provided the Reynolds number lies between 2 × 10³ and 1 × 10⁵ for gases and
Figure 6.38 Cutaway view of the swirlmeter. Courtesy of ABB Instrument Group.
4 × 10³ and 1.4 × 10⁵ for liquids. The output of the meter is independent of the density, temperature, and pressure of the flowing fluid and represents the flow rate to better than ±1 percent of full scale, giving turndown ratios in excess of 20:1. The Swirlmeter Another meter that depends on the oscillatory nature of fluids is the swirlmeter, shown in Figure 6.38. A swirl is imparted to the body of flowing fluid by the curved inlet blades, which give a tangential component to the fluid flow. Initially the axis of the fluid rotation is the centerline of the meter, but a change in the direction of the rotational axis (precession) takes place when the rotating liquid enters the enlargement, causing the region of highest velocity to rotate about the meter axis. This produces an oscillation or precession, the frequency of which is proportional to the volumetric flow rate. The sensor, which is a bead thermistor heated by a constant-current source, converts the instantaneous velocity changes into a proportional electrical pulse output. The number of pulses generated is directly proportional to the volumetric flow. The operating range of the swirlmeter depends on the specific application, but typical ranges for liquids are 3.5 to 4.0 liters per minute for the 25 mm size, up to 1700 to 13,000 liters per minute for the 300 mm size. Typical gas flow ranges are 3 to 35 m³/h for the 25 mm size, up to 300 to 9000 m³/h for the 300 mm size. Accuracy of ±1 percent of rate is possible, with repeatability of ±0.25 percent of rate. The Coanda Effect Meters The Coanda effect produces a fluidic oscillator whose frequency is linear with the volumetric flow rate of fluid. A Coanda-effect meter uses a hydraulic feedback circuit: a chamber is designed with a left-hand and a right-hand feedback channel. A jet of water flows through the chamber, and because of the feedback channels, some of the water will impact the jet from the side.
Figure 6.39 Coanda-effect fluidic meter. Courtesy of Fluidic Flowmeters LLC.
Figure 6.40 Cross-correlation meter.
This causes a pressure differential between one side of the jet and the other, and the jet “flips” back and forth in the chamber. The frequency of this flipping is proportional to the flow through the chamber. Several means exist to measure this oscillation, including electromagnetic sensors and piezoresistive pressure transducers. The Coanda effect is extremely linear and accurate across at least a 300:1 range. It is also reasonably viscosity-independent, and meters can be made simply and inexpensively. Small fluidic meters can be made so inexpensively, in fact, that fluidic flowmeters are being promoted as a replacement for the inexpensive positive-displacement meters currently used as domestic water meters. Several companies have developed fluidic flowmeters as extremely inexpensive replacements for AGA-approved diaphragm-type gas meters for household metering. Coanda-effect meters are also insensitive to temperature change. A fluidic flowmeter is being marketed as an inexpensive BTU (heat) meter for district heating applications. Coanda-effect meters become more expensive as their physical size increases. Above 50 mm diameter, they are in general more expensive than positive-displacement meters. Currently, the only designs available above 50 mm are “bypass designs” that use a small-diameter Coanda-effect meter as a bypass around a flow restriction in a larger pipeline. Meters up to 250 mm diameter have been designed in this fashion. These meters exhibit rangeability of over 100:1, with accuracies (when corrected electronically for linearity shift) of ±0.5 percent of indicated flow rate. See Figure 6.39.
6.3.4.4 Cross-Correlation In most flowing fluids there exist naturally occurring random fluctuations such as density, turbulence, and temperature, which can be detected by suitably located transducers. If two such transducers are installed in a pipeline separated by a distance L, as shown in Figure 6.40, the upstream transducer will pick up a random fluctuation t seconds before the downstream transducer, and the distance between the transducers divided by the transit time t will yield flow velocity. In practice the random fluctuations will not be stable and
are compared in a cross-correlator that has a peak response at transit time Tp, the correlation velocity being V = L/Tp meters per second. This is effectively a nonintrusive measurement and could in principle be developed to measure the flow of most fluids. Very few commercial cross-correlation systems are in use for flow measurement, because of the slow response time of such systems. However, with the use of microprocessor techniques, processing speed has been increased significantly, and several manufacturers are now producing commercial systems for industrial use. Techniques for effecting the cross-correlation operation are discussed in Part 4 of this book.
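A minimal sketch of the correlation computation itself (the signals here are synthetic, and all names and values are assumed for illustration):

import numpy as np

# Cross-correlation flow measurement: find the transit time Tp at which
# the correlation between upstream and downstream signals peaks, then
# V = L / Tp.
rng = np.random.default_rng(0)
fs = 1000.0                 # sample rate, Hz -- assumed
L = 0.5                     # transducer spacing, m -- assumed
true_delay = 0.25           # transit time, s (i.e., V = 2 m/s)

upstream = rng.standard_normal(4000)
shift = int(true_delay * fs)
downstream = np.roll(upstream, shift)   # downstream sees the pattern later

corr = np.correlate(downstream, upstream, mode="full")
lag = np.argmax(corr) - (len(upstream) - 1)   # delay in samples
Tp = lag / fs
print(L / Tp)               # ~2.0 m/s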
6.3.5 Mass Flowmeters The measurement of mass flow rate can have certain advantages over volume flow rate, i.e., pressure, temperature, and specific gravity do not have to be considered. The main interfering parameter to be avoided is that of two-phase flow, in which gas/liquid, gas/solid, or liquid/solid mixtures are flowing together in the same pipe. The two phases may be travelling at different velocities and even in different directions. This problem is beyond the scope of this book, but the user should be aware of the problem and ensure where possible that the flow is as near homogeneous as possible (by pipe-sizing or meter-positioning) or that the two phases are separately metered. Methods of measurement can be categorized under two main headings: true mass-flow measurement, in which the measured parameter is directly related to mass flow rate, and inferential mass-flow measurement, in which volume flow rate and fluid density are measured and combined to give mass flow rate. Since volume flow rate and density measurement are discussed elsewhere, only true mass-flow measurement is dealt with here.
6.3.5.1 True Mass-Flow Measurement Methods Fluid-momentum Methods An angular momentum type of device consists of two turbines on separate axial shafts in
the meter body. The upstream turbine is rotated at constant speed and imparts a swirling motion to the fluid passing through it. On reaching the downstream turbine, the swirling fluid attempts to impart motion onto it; however, this turbine is constrained from rotating by a calibrated spring. The meter is designed such that on leaving the downstream turbine, all angular velocity will have been removed from the fluid, and the torque produced on it is proportional to mass flow. This type of device can be used for both gases and liquids with accuracies of !1 percent. Mass flowmeters in the category of gyroscopic/Coriolis mass flowmeters use the measurement of torque developed when subjecting the fluid stream to a Coriolis acceleration,* as a measure of mass flow rate. An early application of this technique is illustrated in Figure 6.41. The fluid enters a T-shaped tube, with flow equally divided down each side of the T, and then recombines into a main flowstream at the outlet from the meter. The whole assembly is rotated at constant speed, causing an angular
Figure 6.41 Early form of Coriolis mass flowmeter.
displacement of the T-tube, which is attached to the meter casing through a torque tube. The torque produced is proportional to mass flow rate. This design suffered from various problems, mainly due to poor sealing of rotating joints or inadequate speed control. However, recent developments have overcome these problems, as shown in Figure 6.42. The mass flowmeter consists of a U-tube and a T-shaped leaf spring as opposite legs of a tuning fork. An electromagnet is used to excite the tuning fork, thereby subjecting each particle within the pipe to a Coriolis-type acceleration. The resulting forces cause an angular deflection in the U-tube inversely proportional to the stiffness of the pipe and proportional to the mass flow rate. This movement is picked up by optical transducers mounted on opposite sides of the U-tube, the output being a pulse that is width-modulated proportional to mass flow rate. An oscillator/counter digitizes the pulse width and provides an output suitable for display purposes. This system can be used to measure the flow of liquids or gases, and accuracies better than ±0.5 percent of full scale are possible. Even more recent developments include “straight-through” designs (see Figure 6.43) that have produced similar performance to the U-tube designs. Several manufacturers now offer these designs. In addition, with better signal-processing technologies, Coriolis mass meters have now begun to be used to measure gas flows, with apparently excellent results. In liquid flow measurement, even in slurries, Coriolis mass flowmeters have nearly completely replaced other types of mass flow measurement such as dual-turbine or volumetric/density combinations.
Figure 6.42 Gyroscopic/Coriolis mass flowmeter.
*On a rotating surface there is an inertial force acting on a body at right angles to its direction of motion, in addition to the ordinary effects of motion of the body. This force is known as the Coriolis force.
Figure 6.43 Straight Tube Coriolis Mass Flowmeter. Courtesy of Krohne America Inc.
Thermal Mass Flowmeters This version of a mass flowmeter consists of a flowtube, an upstream and a downstream temperature sensor, and a heat source, as illustrated in Figure 6.44. The temperature sensors are effectively active arms of a Wheatstone bridge. They are mounted equidistant from the constant-temperature heat source such that for no-flow conditions, heat received by each sensor is the same and the bridge remains in balance. However, with increasing flow, the downstream sensor receives progressively more heat than the upstream sensor, causing an imbalance to occur in the bridge circuit. The temperature difference is proportional to mass flow rate, and an electrical output representing this is developed by the bridge circuit. This type of mass flowmeter is most commonly applied to the measurement of gas flows within the range 2.5 × 10⁻¹⁰ to 5 × 10⁻³ kg/s, and accuracy of ±1 percent of full scale is attainable. Some thermal flowmeters are also used for liquid flow measurements, including very low flow rates.
Figure 6.44 Thermal mass flowmeter. Courtesy of Emerson Process Management.
6.4 Flow in open channels
Flow measurement in open channels is a requirement normally associated with the water and wastewater industry. Flow in rivers, sewers (part-filled pipes), and regular-shaped channels may be measured by the following methods:
A. Head/area method. Where a structure is built into the flowstream to develop a unique head/flow relationship, as in:
1. The weir, which is merely a dam over which liquid is allowed to flow, the depth of liquid over the sill of the weir being a measure of the rate of flow.
2. The hydraulic flume, an example being the venturi flume, in which the channel is given the same form in the horizontal plane as a section of a venturi tube, while the bottom of the channel is given a gentle slope up to the throat.
B. Velocity/area method. Where measurement of both variables, that is, head and velocity, is combined with the known geometry of a structure to determine flow.
C. Dilution gauging.
6.4.1 Head/Area Method
6.4.1.1 Weirs
Figure 6.45 Rectangular notch, showing top and bottom contraction.
Weirs may have a variety of forms and are classified according to the shape of the notch or opening. The simplest is the rectangular notch, or in certain cases the square notch. The V or triangular notch is a V-shaped notch with the apex downward. It is used to measure rates of flow that may become very small. Owing to the shape of the notch, the head is greater at small rates of flow with this type than it would be for the rectangular notch. Notches of other forms, which may be trapezoidal or parabolic, are designed so that they have a constant discharge coefficient or a head that is directly proportional to the rate of flow. The velocity of the liquid increases as it passes over the weir because the center of gravity of the liquid falls. Liquid that was originally at the level of the surface above the weir can be regarded as having fallen to the level of the center of pressure of the issuing stream. The head of liquid producing the flow is therefore equal to the vertical distance from the center of pressure of the issuing stream to the level of the surface of the liquid upstream. If the height of the center of pressure above the sill can be regarded as being a constant fraction of the height of the surface of the liquid above the sill of the weir, then the height of the surface above the sill will give a measure of the differential pressure producing the flow. If single particles are considered, some will have fallen a distance greater than the average, but this is compensated for by the fact that others have fallen a smaller distance. The term head of a weir is usually taken to mean the same as the depth of the weir and is measured by the height of the liquid above the level of the sill of the weir, just upstream of where it begins to curve over the weir. It is denoted by H and usually expressed in units of length such as meters. Rectangular Notch Consider the flow over the weir in exactly the same way as the flow through other primary differential-pressure elements. If the cross-section of the stream approaching the weir is large in comparison with the area of the stream over the weir, the velocity V1 at Section 1 upstream can be neglected in comparison with the velocity V2 over the weir, so in Equation (6.17) V1 = 0 and the equation becomes:

V2² = 2gh, or V2 = √(2gh)
The quantity of liquid flowing over the weir will be given by: Q = A2V2
But the area of the stream is BH, where H is the depth over the weir and B the breadth of the weir, and h is a definite fraction of H. By calculus it can be shown that for a rectangular notch
Q = (2/3) BH √(2gH)    (6.46)
  = (2/3) B √(2gH³) m³/s    (6.47)
The actual flow over the weir is less than that given by Equation (6.47), for the following reasons: A. The area of the stream is not BH but something less, since the stream contracts at both the top and bottom as it flows over the weir, as shown in Figure 6.45, making the effective depth at the weir less than H. B. Owing to friction between the liquid and the sides of the channel, the velocity at the sides of the channel will be less than that at the middle. This effect may be reduced by making the notch narrower than the width of the stream, as shown in Figure 6.46. This, however, produces side-contraction of the stream. Therefore, B1 − B should be at least equal to 4H, when the side contraction is equal to 0.1H on each side, so that the effective width becomes B − 0.2H. When it is required to suppress side contraction and make the measurement more reliable, plates may be fitted as shown in Figure 6.47 so as to make the stream move parallel to the plates as it approaches the weir. To allow for the difference between the actual rate of flow and the theoretical rate of flow, the discharge coefficient
Figure 6.46 Rectangular notch, showing side-contraction.
Figure 6.47 Rectangular notch, showing side plates.
C, defined as before, is introduced, and Equation (6.47) becomes:

Q = (2/3) CB √(2gH³) m³/s    (6.48)
The value of C will vary with H and will be influenced by the following factors, which must remain constant in any installation if its accuracy is to be maintained: (a) the relative sharpness of the upstream edge of the weir crest, and (b) the width of the weir sill. Both of these factors influence the bottom contraction and hence C, so the weir sill should be inspected from time to time to see that it is free from damage. In developing the preceding equations, we assumed that the velocity of the liquid upstream of the weir could be neglected. As the rate of flow increases, this is no longer possible, and a velocity-of-approach factor must be introduced. This will influence the value of C, and as the velocity of approach increases it will cause the observed head to become less than the true or total head, so that a correcting factor must be introduced. Triangular Notch If the angle of the triangular notch is θ, as shown in Figure 6.48, B = 2H tan(θ/2). The position of the center of pressure of the issuing stream will now be at a different height above the bottom of the notch from what it was for the rectangular notch. It can be shown by calculus that the numerical factor involved in the equation is now 4/15. Substituting this factor and the new value of A2 in Equation (6.48):

Q = (4/15) CB √(2gH³)
  = (4/15) C · 2H tan(θ/2) √(2gH³)
  = (8/15) C tan(θ/2) √(2gH⁵) m³/s    (6.49)
Experiments have shown that θ should have a value between 35° and 120° for satisfactory operation of this type of installation. Although the cross-section of the stream from a triangular weir remains geometrically similar for all values of H, the value of C is influenced by H. The variation of C is from 0.57 to 0.64 and takes into account the contraction of the stream. If the velocity of approach is not negligible, the value of H must be suitably corrected, as in the case of the rectangular weir.
Figure 6.48 Triangular notch (V-notch).
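A worked sketch of Equations (6.48) and (6.49); the discharge coefficients used are assumed, representative values only:

import math

# Flow over rectangular and V-notch weirs from the equations above.
g = 9.81

def q_rectangular(C, B, H):
    # Q = (2/3) C B sqrt(2 g H^3)   -- Equation (6.48)
    return (2.0 / 3.0) * C * B * math.sqrt(2 * g * H**3)

def q_vee(C, theta_deg, H):
    # Q = (8/15) C tan(theta/2) sqrt(2 g H^5)   -- Equation (6.49)
    return (8.0 / 15.0) * C * math.tan(math.radians(theta_deg) / 2) \
           * math.sqrt(2 * g * H**5)

print(round(q_rectangular(C=0.62, B=0.5, H=0.10), 4))  # ~0.0289 m3/s
print(round(q_vee(C=0.60, theta_deg=90, H=0.10), 5))   # ~0.00448 m3/s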
Installation and Operation of Weirs The following points are relevant to installing and operating weirs:
A. Upstream of a weir there should be a wide, deep, and straight channel of uniform cross-section, long enough to ensure that the velocity distribution in the stream is uniform. This approach channel may be made shorter if baffle plates are placed across it at the inlet end to break up currents in the stream.
B. Where debris is likely to be brought down by the stream, a screen should be placed across the approach channel to prevent the debris reaching the weir. This screen should be cleaned as often as necessary.
C. The upstream edge of the notch should be maintained square or sharp-edged, according to the type of installation.
D. The weir crest should be level from end to end.
E. The channel end wall on which the notch plate is mounted should be cut away so that the stream may fall freely and not adhere to the wall. To ensure that this happens, a vent may be arranged in the side wall of the channel so that the space under the falling water is open to the atmosphere.
F. Neither the bed nor the sides of the channel downstream from the weir should be nearer the weir than 150 mm, and the water level downstream should be at least 75 mm below the weir sill.
G. The head H may be measured by measuring the height of the level of the stream above the level of the weir sill, sufficiently far back from the weir to ensure that the surface is unaffected by the flow. This measurement is usually made at a distance of at least 6H upstream of the weir. It may be made by any appropriate method for liquids, as described in the section on level measurement: for example, the hook gauge, float-operated mechanisms, air-purge systems (“bubblers”), or ultrasonic techniques. It is often more convenient to measure the level of the liquid in a “stilling well” alongside the channel at the appropriate distance above the notch. This well is connected to the weir chamber by a small pipe or opening near the bottom. Liquid will rise in the well to the same height as in the weir chamber and will be practically undisturbed by currents in the stream.
6.4.1.2 Hydraulic Flumes Where the rate of fall of a stream is so slight that there is very little head available for operating a measuring device or where the stream carries a large quantity of silt or debris, a flume is often much more satisfactory than a weir. Several flumes have been designed, but the only one we shall consider here is the venturi flume. This may have more than one form, but where it is flat-bottomed and of the form shown in Figure 6.49 the volume rate of flow is given by the equation:
Figure 6.49 Hydraulic flume (venturi type).
Q = CBh2 √[2g(h1 − h2) / (1 − (Bh2/B1h1)²)] m³/s    (6.50)
where B1 is the width of the channel, B is the width of the throat, h1 is the depth of water measured immediately upstream of the entrance to the converging section, and h2 is the minimum depth of water in the throat. C is the discharge coefficient, for which the value will depend on the particular outline of the channel and the pattern of the flow. Tests on a model of the flume may be used to determine the coefficient, provided that the flow in the model and in the full-sized flume are dynamically similar. The depths of water h1 and h2 are measured, as in the case of the weir, by measuring the level in wells at the side of the main channel. These wells are connected to the channel by small pipes opening into the channel near or at the bottom. As in the case of the closed venturi tube, a certain minimum uninterrupted length of channel is required before the venturi is reached, in order that the stream may be free from waves and vortices. By carefully designing the flume, it is possible to simplify the actual instrument required to indicate the flow. If the channel is designed in such a manner that the depth in the exit channel at all rates of flow is less than a certain percentage of the depth in the entrance channel, the flume will function as a free-discharge outlet. Under these conditions, the upstream depth is independent of the downstream conditions, and the depth of water in the throat will maintain itself at a certain critical value, at which the energy of the water is at a minimum, whatever the rate of flow. When this is so, the quantity of water flowing through the channel is a function of the upstream depth h1 only and may be expressed by the equation:

Q = kh1^(3/2)

where k is a constant for a particular installation and can be determined. It is now necessary to measure h1 only, which can be done by means of a float in a well connected to the upstream portion of the channel. This float operates an indicating, recording, and integrating instrument.
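A worked sketch of Equation (6.50) for a flat-bottomed venturi flume; the discharge coefficient and dimensions are assumed for illustration:

import math

# Venturi flume flow from Equation (6.50).
g = 9.81

def q_flume(C, B1, B, h1, h2):
    ratio = (B * h2) / (B1 * h1)
    return C * B * h2 * math.sqrt(2 * g * (h1 - h2) / (1 - ratio**2))

print(round(q_flume(C=0.98, B1=1.2, B=0.6, h1=0.50, h2=0.40), 3))
# ~0.359 m3/s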
Other means of sensing the height in a flume or weir include up-looking ultrasonic sensors mounted in the bottom of the channel. More often used are down-looking ultrasonic sensors mounted above the flume. Direct pressure transducers mounted at the bottom of the channel or in a standpipe can also be used. Other methods, such as RF admittance or capacitance slides, are used as well. The channel is usually constructed of concrete, and the surface on the inside of the channel is made smooth to reduce the friction between water and channel. Flumes of this kind are used largely for measuring flow of water or sewage and may be made in a very large variety of sizes to measure anything from the flow of a small stream to that of a large river.
6.4.1.3 The DataGator Flowmeter In the early 1990s, experimentation showed that a combination venturi flume and venturi tube could be constructed such that the signals from three pressure transducers could be used to measure the flow through the tube in any flow regime: subcritical flow, supercritical flow, and surcharge. By making the flow tube symmetrical, it was shown to be possible to measure flow in either direction with the same accuracy. This patented device, called the DataGator flowmeter (see Figure 6.50), can be used to monitor flow in manholes. It has the advantage over any other portable sewer flow-monitoring device of being traceable to the U.S. National Institute of Standards and Technology (NIST), since it is a primary device like a flume or flow tube.
Figure 6.50 DataGator FlowTube. Courtesy of Renaissance Instruments.
6.4.2 Velocity/Area Methods
In velocity/area methods, volume flow rate is determined by measurement of the two variables concerned (mean velocity and head), since the rate of flow is given by the equation

Q = V̄ · A m³/s

where V̄ is the mean velocity and the area A is proportional to head or level. The head/level measurement can be made by many of the conventional level devices described in Chapter 10 and therefore is not dealt with here. Three general techniques are used for velocity measurement: turbine current meter, electromagnetic, and ultrasonic. The techniques have already been discussed in the section on closed-pipe flow, so only their application is described here.
6.4.2.1 Turbine Current Meter
In current-meter gauging, the meter is used to give point velocity. The meter is sited at a predetermined cross-section in the flowstream and the velocity obtained. Since the meter measures only point velocity, it is necessary to sample throughout the cross-section to obtain the mean velocity. The velocities that can be measured in this way range from 0.03 to 3.0 m/s for a turbine meter with a propeller of 50 mm diameter. The disadvantage of current-meter gauging is that it is a point, not a continuous, measurement of discharge.
6.4.2.2 Electromagnetic Method
In this technique, Faraday’s law of electromagnetic induction is utilized in the same way as for closed-pipe flow measurement (Section 6.3.4.1). That is, E ∝ BlV, where E is the EMF generated, B is the magnetic field strength, l is the width of the river or channel in meters, and V is the average velocity of the flowstream. This equation applies only if the bed of the channel is insulated, similar to the requirement for pipe flowmeters. In practice it is costly to insulate a riverbed, and where this cannot be done, the riverbed conductivity has to be measured to compensate for the resultant signal attenuation. In an operational system, a large coil buried under the channel is used to produce a vertical magnetic field. The flow of water through the magnetic field causes an EMF to be set up between the banks of the river. This potential is sensed by a pickup electrode at each bank. This concept is shown diagrammatically in Figure 6.51.
Figure 6.51 Principle of electromagnetic gauge. Courtesy of Plessey Electronic Systems Ltd.
6.4.2.3 Ultrasonic Method As for closed-pipe flow, two techniques are available: single-path and multipath, both relying on time-of-flight techniques, as described in Section 6.3.4.2. Transducers capable of transmitting and receiving acoustic pulses are staggered along either bank of the river or channel. In practice the acoustic path is approximately 60° to the direction of flow, but angles between 30° and 60° can be utilized. The smaller the angle, the longer the acoustic path; path lengths up to 400 meters can be achieved. New spool-piece designs have included corner targets and other devices to improve the accuracy of the signal. Recently, clamp-on transit-time flow sensors have been adapted to work directly on the high-purity tubing used in semiconductor manufacturing and in the pharmaceutical industry. Correlation flowmeters have also been constructed using these new techniques.
6.4.3 Dilution Gauging This technique is covered in detail in section 6.6 on flow calibration, but basically the principle involves injecting a tracer element such as brine, salt, or radioactive solution and estimating the degree of dilution caused by the flowing liquid.
6.5 Point velocity measurement In flow studies and survey work, it is often desirable to be able to measure the velocity of liquids at points within the flow pattern inside both pipes and open channels to determine either mean velocity or flow profile. The following techniques are most common: laser Doppler anemometer, hotwire anemometer, pitot tube, insertion electromagnetic, insertion turbine, propeller-type current meter, insertion vortex, and Doppler velocity probe.
6.5.1 Laser Doppler Anemometer This device uses the Doppler shift of light scattered by moving particles in the flowstream to determine particle velocity and hence fluid flow velocity. It can be used for both gas and liquid flow studies and is used in both research and industrial applications. Laser Doppler is a noncontact technique and is particularly suited to velocity studies in systems that would not allow the installation of a more conventional system—for example, around propellers and in turbines.
6.5.2 Hotwire Anemometer
The hotwire anemometer is widely used for flow studies in both gas and liquid systems. Its principle of operation is that a small, electrically heated element is placed within the flowstream; the wire sensor is typically 5 μm in diameter and approximately 5 mm long. As flow velocity increases, it tends to cool the heated element. This change in temperature causes a change in resistance of the element proportional to flow velocity.
6.5.3 Pitot Tube
The pitot tube is a device for measuring the total pressure in a flowstream (i.e., impact/velocity pressure plus static pressure); the principle of its operation is as follows. If a tube is placed with its open end facing into the flowstream (Figure 6.52), the fluid impinging on the open end will be brought to rest, and its kinetic energy converted into pressure energy. The pressure buildup in the tube will be greater than that in the free stream by an amount termed the impact pressure. If the static pressure is also measured, the differential pressure between that measured by the pitot tube and the static pressure will be a measure of the impact pressure and therefore of the velocity of the stream. In Equation (6.15), h, the pressure differential or impact pressure developed, is given by h = (V2²/2g) − (V1²/2g), where V2 = 0. Therefore h = −V1²/2g; the negative sign indicates that it is an increase in pressure and not a decrease. Increase in head:

h = V1²/2g or V1² = 2gh, i.e., V1 = √(2gh)    (6.51)

However, since this is an intrusive device, not all of the flowstream will be brought to rest on the impact post; some will be deflected round it. A coefficient C is introduced to compensate for this, and Equation (6.51) becomes:

V1 = C√(2gh)    (6.52)
If the pitot tube is to be used as a permanent device for measuring the flow in a pipeline, the relationship between the velocity at the point of its location to the mean velocity must be determined. This is achieved by traversing the pipe and sampling velocity at several points in the pipe, thereby determining flow profile and mean velocity.
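A worked sketch of Equation (6.52), converting a measured differential pressure to an impact head and then to velocity; the coefficient and values are assumed for illustration:

import math

# Pitot tube velocity from V = C * sqrt(2 g h), where h is the impact
# head expressed as a column of the flowing fluid.
g = 9.81
C = 0.98           # tube coefficient -- assumed
dp = 250.0         # measured differential pressure, Pa -- assumed
rho = 1000.0       # water, kg/m3

h = dp / (rho * g)             # differential pressure as head, m
V = C * math.sqrt(2 * g * h)   # equivalently C * sqrt(2 * dp / rho)
print(round(V, 3))             # ~0.693 m/s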
Figure 6.52 Single-hole pitot tube.
Figure 6.53 The Annubar. Courtesy of Emerson Process Management.
For more permanent types of pitot-tube installation, a multiport pitot tube (such as an Annubar) may be used, as shown in Figure 6.53. The pressure holes are located in such a way that they measure the representative dynamic pressure of equal annuli. The dynamic pressure obtained at the four holes facing into the stream is then averaged by means of the “interpolating” inner tube (Figure 6.53(b)), which is connected to the high-pressure side of the manometer. The low-pressure side of the manometer is connected to the downstream element, which measures the static pressure less the suction pressure. In this way a differential pressure representing the mean velocity along the tube is obtained, enabling the flow to be obtained with an accuracy of ±1 percent of actual flow.
6.5.4 Electromagnetic Velocity Probe This type of device is basically an inside-out version of the electromagnetic pipeline flowmeter discussed earlier, the operating principle being the same. The velocity probe consists of either a cylindrical or an ellipsoidal sensor shape that houses the field coil and two diametrically opposed pickup electrodes. The field coil develops an electromagnetic field in the region of the sensor, and the electrodes pick up a generated voltage that is proportional to the point velocity. The probe system can be used for either open-channel or closed-pipe flow of conducting liquids. It should be noted, however, that the accuracy of a point-velocity magnetic flowmeter
is approximately the same as that of a paddlewheel or other point-velocity meter. Although it shares a measurement technology with a highly accurate flowmeter, it is not one. Recently, a combination of the multiple-port concept of an Annubar-type meter with the point-velocity magnetic flowmeter has been released, with excellent results. See Figure 6.54.
Figure 6.54 Multiple Sensor Averaging Insertion Magmeter. Courtesy of Marsh-McBirney Inc.
6.5.5 Insertion Turbine The operating principle for this device is the same as for a full-bore pipeline flowmeter. It is used normally for pipe-flow velocity measurement in liquids and consists of a small turbine housed in a protective rotor cage, as shown
in Figure 6.55. In normal application the turbine meter is inserted through a gate-valve assembly on the pipeline; hence it can be installed under pressure and can be precisely located for carrying out a flow traverse. Also, given suitable conditions, it can be used as a permanent flowmetering device in the same way as the pitot tube. The velocity of the turbine is proportional to liquid velocity, but a correction factor is introduced to compensate for errors caused by blockage of the flowstream by the turbine assembly.
Figure 6.55 Insertion turbine flowmeter.
6.5.6 Propeller-Type Current Meter Similar to the turbine in operation, this type of velocity probe typically consists of a five-bladed PVC rotor (Figure 6.56) mounted in a shrouded frame. This device is most commonly used for river or stream gauging and has the ability to measure flow velocities as low as 2.5 cm/s. Propeller meters are often used as mainline meters in water distribution systems and in irrigation and canal systems as inexpensive alternatives to turbine and magnetic flowmeters.
6.5.7 Insertion Vortex Operating on the same principle as the full-bore vortex meter, the insertion vortex meter consists of a short length of stainless-steel tube surrounding a centrally situated bluff body. Fluid flow through the tube causes vortex shedding. The device is normally inserted into a main pipeline via a flanged T-piece and is suitable for pipelines of 200 mm bore and above. It is capable of measuring flow velocities from 0.1 m/s up to 20 m/s for liquids and from 1 m/s to 40 m/s for gases.
Figure 6.56 Propeller-type current meter. Courtesy of Nixon Instrumentation Ltd.
6.5.8 Ultrasonic Doppler Velocity Probe This device is more commonly used for open-channel velocity measurement and consists of a streamlined housing for the Doppler meter.
6.6 Flowmeter calibration methods There are various methods available for the calibration of flowmeters, and the requirement can be split into two distinct categories: in situ and laboratory. Calibration of liquid flowmeters is generally somewhat more straightforward than that of gas flowmeters since liquids can be stored in open vessels and water can often be utilized as the calibrating liquid.
6.6.1 Flowmeter Calibration Methods for Liquids The main methods used for liquid flowmeter calibration are, in situ: insertion point-velocity and dilution gauging/tracer methods; and, in the laboratory: master meter, volumetric, gravimetric, and pipe prover.
6.6.1.1 In Situ Calibration Methods Insertion-Point Velocity One of the simpler methods of in situ flowmeter calibration utilizes point-velocity measuring devices (see Section 6.5), where the chosen calibration device is positioned in the flowstream adjacent to the flowmeter being calibrated, such that the mean flow velocity can be measured. In difficult situations a flow traverse
can be carried out to determine flow profile and mean flow velocity.
Dilution Gauging/Tracer Method This technique can be applied to closed-pipe and open-channel flowmeter calibration. A suitable tracer (chemical or radioactive) is injected at an accurately measured constant rate, and samples are taken from the flowstream at a point downstream of the injection point, where complete mixing of the injected tracer will have taken place. By measuring the tracer concentration in the samples, the tracer dilution can be established, and from this dilution and the injection rate the volumetric flow can be calculated. This principle is illustrated in Figure 6.57. Alternatively, a pulse of tracer material may be added to the flowstream, and the time taken for the tracer to travel a known distance and reach a maximum concentration is a measure of the flow velocity.
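The constant-rate mass balance behind dilution gauging can be made concrete with a short sketch (Python; the injection rate and concentrations below are illustrative assumptions, not values from this chapter):

    # Dilution gauging with constant-rate tracer injection.
    # Mass balance: q*c1 + Q*c0 = (Q + q)*c2, solved for the stream flow Q.
    q = 2.0e-5   # tracer injection rate, m^3/s (assumed)
    c1 = 50.0    # concentration of the injected tracer, g/m^3 (assumed)
    c0 = 0.0     # background concentration upstream, g/m^3 (assumed)
    c2 = 0.01    # fully mixed concentration downstream, g/m^3 (assumed)

    Q = q * (c1 - c2) / (c2 - c0)  # volumetric flow of the stream
    print(f"Stream flow = {Q:.3f} m^3/s")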
6.6.1.2 Laboratory Calibration Methods
Master Meter For this technique a meter of known accuracy is used as a calibration standard. The meter to be calibrated and the master meter are connected in series and are therefore subject to the same flow regime. It must be borne in mind that to ensure consistent accurate calibration, the master meter itself must be subject to periodic recalibration.
Volumetric Method In this technique, flow of liquid through the meter being calibrated is diverted into a tank of
known volume. When full, this known volume can be compared with the integrated quantity registered by the flowmeter being calibrated.
Gravimetric Method Here the flow of liquid through the meter being calibrated is diverted into a vessel that can be weighed either continuously or after a predetermined time; the weight of the liquid is compared with the registered reading of the flowmeter being calibrated (see Figure 6.58).
Pipe Prover This device, sometimes known as a meter prover, consists of a U-shaped length of pipe and a piston or elastic sphere. The flowmeter to be calibrated is installed on the inlet to the prover, and the sphere is forced to travel the length of the pipe by the flowing liquid. Switches are inserted near both ends of the pipe and operate when the sphere passes them. The swept volume of the pipe between the two switches is determined by initial calibration, and this known volume is compared with that registered by the flowmeter during calibration. A typical pipe-prover loop is shown in Figure 6.59.
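The pipe-prover comparison reduces to dividing the calibrated swept volume by the volume the meter registered over the same traverse; the sketch below (Python, with assumed pulse counts and K-factor, and one common meter-factor convention) illustrates the arithmetic:

    # Pipe prover: derive a meter factor from the known swept volume.
    swept_volume = 0.500     # m^3 between the two switches, from initial calibration (assumed)
    meter_pulses = 49_850    # pulses counted during the sphere's traverse (assumed)
    k_factor = 100_000.0     # nominal meter pulses per m^3 (assumed)

    registered_volume = meter_pulses / k_factor      # volume indicated by the meter
    meter_factor = swept_volume / registered_volume  # correction to apply to meter readings
    print(f"Registered {registered_volume:.4f} m^3; meter factor = {meter_factor:.4f}")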
6.6.2 Flowmeter Calibration Methods for Gases
Methods suitable for gas flowmeter calibration are, in situ: as for liquids; and laboratory: soap-film burette, water-displacement method, and gravimetric.
Figure 6.57 Dilution gauging by tracer injection.
Figure 6.59 Pipe prover.
Figure 6.58 Flowmeter calibration by weighing. Courtesy of British Standards Institution.
Figure 6.60 Gas flowmeter calibration: soap-film burette.
Figure 6.61 Water displacement method (bell prover).
6.6.2.1 Laboratory Calibration Methods
Soap-Film Burette This method is used to calibrate measurement systems with gas flows in the range of 10⁻⁷ to 10⁻⁴ m³/s. Gas flow from the meter on test is passed through a burette mounted in the vertical plane. As the gas enters the burette, a soap film is formed across the tube and travels up it at the same velocity as the gas. By measuring the time of transit of the soap film between graduations of the burette, it is possible to determine flow rate. A typical calibration system is illustrated in Figure 6.60.
Water-Displacement Method In this method a cylinder closed at one end is inverted over a water bath, as shown in Figure 6.61. As the cylinder is lowered into the bath, a trapped volume of gas is developed. This gas can escape via a pipe connected to the cylinder out through the flowmeter being calibrated. The time of the fall of the cylinder combined with the knowledge of the volume/length relationship leads to a determination of the amount of gas displaced, which can be compared with that measured by the flowmeter under calibration.
Gravimetric Method Here gas is diverted via the meter under test into a gas-collecting vessel over a measured period of time. When the collecting vessel is weighed before diversion and again after diversion, the difference will be due to the enclosed gas, and flow can be determined. This flow can then be compared with that measured by the flowmeter.
It should be noted that the cost of developing laboratory flow calibration systems as outlined can be quite prohibitive; it may be somewhat more cost-effective to have systems calibrated by the various national standards laboratories (such as NIST, NEL, and SIRA) or by manufacturers rather than committing capital to what may be an infrequently used system.
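For the soap-film burette the arithmetic is simply volume over transit time; a minimal sketch (Python, with assumed readings):

    # Soap-film burette: gas flow rate from the film's transit time
    # between two graduations of known volume.
    volume_between_marks = 25.0e-6  # m^3 between the graduations (assumed)
    transit_time = 180.0            # s for the film to travel between them (assumed)

    Q = volume_between_marks / transit_time
    print(f"Gas flow rate = {Q:.2e} m^3/s")  # falls within the 1e-7 to 1e-4 m^3/s range quoted above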
References

BS 1042, Methods for the Measurement of Fluid Flow in Pipes, Part 1: Orifice Plates, Nozzles & Venturi Tubes, Part 2a: Pitot Tubes (1964).
BS 3680, Methods of Measurement of Liquid Flow in Open Channels (1969–1983).
BS 5781, Specification for Measurement & Calibration Systems (1979).
BS 5792, Specification for Electromagnetic Flowmeters (1980).
BS 6199, Measurement of Liquid Flow in Closed Conduits Using Weighing and Volumetric Methods (1981).
Cheremisinoff, N. P., Applied Fluid Flow Measurement, Dekker (1979).
Durrani, T. S. and Greated, C. A., Laser Systems in Flow Measurement, Plenum (1977).
Hayward, A. T. J., Flowmeters: A Basic Guide and Sourcebook for Users, Macmillan (1979).
Henderson, F. M., Open Channel Flow, Macmillan (1966).
Holland, F. A., Fluid Flow for Chemical Engineers, Arnold (1973).
International Organization for Standardization, ISO 3354 (1975), Measurement of Clean Water Flow in Closed Conduits (Velocity Area Method Using Current Meters).
Linford, A., Flow Measurement and Meters, E. & F. N. Spon.
Miller, R. W., Flow Measurement Engineering Handbook, McGraw-Hill (1982).
Shercliff, J. A., The Theory of Electromagnetic Flow Measurement, Cambridge University Press (1962).
Watrasiewicz, B. M. and Rudd, M. J., Laser Doppler Measurements, Butterworth (1975).
Further Reading

Ackers, P. et al., Weirs and Flumes for Flow Measurement, Wiley (1978).
Baker, R. C., Introductory Guide to Flow Measurement, Mechanical Engineering Publications (1989).
Fowles, G., Flow, Level and Pressure Measurement in the Water Industry, Butterworth-Heinemann (1993).
Furness, R. A., Fluid Flow Measurement, Longman (1989).
Spitzer, D., Flow Measurement, Instrument Society of America (1991).
Spitzer, D., Industrial Flow Measurement, Instrument Society of America (1990).
Chapter 7
Measurement of Viscosity K. Walters and W. M. Jones
7.1 Introduction
In his Principia, published in 1687, Sir Isaac Newton postulated that "the resistance which arises from the lack of slipperiness of the parts of the liquid, other things being equal, is proportional to the velocity with which parts of the liquid are separated from one another" (see Figure 7.1). This "lack of slipperiness" is what we now call viscosity. The motion in Figure 7.1 is referred to as steady simple shear flow, and if τ is the relevant shear stress producing the motion and γ̇ is the velocity gradient (γ̇ = U/d), we have
τ = ηγ̇    (7.1)
η is sometimes called the coefficient of viscosity, but it is now more commonly referred to simply as the viscosity. An instrument designed to measure viscosity is called a viscometer. A viscometer is a special type of rheometer (an instrument for measuring rheological properties), which is limited to the measurement of viscosity. The SI unit of viscosity is the pascal second (Pa s) = 1 N s m⁻² = 1 kg m⁻¹ s⁻¹, sometimes called the poiseuille; the CGS unit is the poise (= 0.1 kg m⁻¹ s⁻¹). The units of kinematic viscosity ν (= η/ρ, where ρ is the density) are m² s⁻¹. The CGS unit is the stokes (St), and 1 cSt = 10⁻⁶ m² s⁻¹. For simple liquids such as water, the viscosity can depend on the pressure and temperature, but not on the velocity gradient (i.e., shear rate). If such materials satisfy certain further formal requirements (e.g., that they are inelastic), they are referred to as Newtonian viscous fluids. Most viscometers were originally designed to study these simple Newtonian
fluids. It is now common knowledge, however, that most fluid-like materials have a much more complex behavior, and this is characterized by the adjective non-Newtonian. The most common expression of non-Newtonian behavior is that the viscosity is now dependent on the shear rate γ̇, and it is usual to refer to the apparent viscosity η(γ̇) of such fluids, where, for the motion of Figure 7.1,
τ = η(γ̇)·γ̇    (7.2)
In the next section, we argue that the concept of viscosity is intimately related to the flow field under investigation (e.g., whether it is steady simple shear flow or not), and in many cases it is more appropriate and convenient to define an extensional viscosity ηE corresponding to a steady uniaxial extensional flow. Now, although there is a simple relation between the (extensional) viscosity ηE and the (shear) viscosity η in the case of Newtonian liquids (in fact, ηE = 3η for Newtonian liquids), such is not the case in general for non-Newtonian liquids, and this has been one of the motivations behind the emergence of a number of extensional viscometers in recent years (see Section 7.5). Most fluids of industrial importance can be classified as non-Newtonian: liquid detergents, multigrade oils, paints, printing inks, and molten plastics are obvious examples (see, for example, Walters, 1980), and no chapter on the measurement of viscosity would be complete without a full discussion of the application of viscometry to these complex fluids. This will necessitate an initial discussion of such important concepts as yield stress and thixotropy (which are intimately related to the concept of viscosity), as undertaken in the next section.
7.2 Newtonian and non-Newtonian behavior
Figure 7.1 Newton's postulate.
For Newtonian liquids, there is a linear relation between shear stress τ and shear rate γ̇. For most non-Newtonian materials, the shear-thinning behavior shown schematically in Figure 7.2 pertains. Such behavior can be represented by
Figure 7.4 Uniaxial extensional deformation.
Figure 7.2 Representative (τ, γ̇) rheograms.
Figure 7.3 Schematic diagram of typical shear-thinning behavior.
the viscosity/shear-rate rheogram of Figure 7.3, where we see that the viscosity falls from a "zero-shear" value η0 to a lower (second-Newtonian) value η2. The term pseudo-plasticity was once used extensively to describe such behavior, but this terminology is now less popular. In the lubrication literature, shear thinning is often referred to as temporary viscosity loss. Some non-Newtonian fluids—corn-flour suspensions, for example—show the opposite type of behavior, in which the viscosity increases with shear rate (Figure 7.2). This is called shear thickening. In old-fashioned texts, the term dilatancy was often used to describe this behavior. For many materials over a limited shear-rate range, a logarithmic plot of τ against γ̇ is linear, so that
τ = Kγ̇ⁿ    (7.3)
When n < 1, these so-called "power-law fluids" are shear-thinning, and when n > 1, they are shear-thickening. An important class of materials will not flow until a critical stress, called the yield stress, is exceeded. These "plastic" materials can exhibit various kinds of behavior above the yield stress, as shown in Figure 7.2. If the rheogram above the yield stress is a straight line, we have what is commonly referred to as a Bingham plastic material. In addition to the various possibilities shown in Figure 7.2, there are also important "time-dependent" effects exhibited by some materials; these can be grouped under the headings thixotropy and antithixotropy. The shearing of some materials at a constant rate can result in a substantial lowering of the viscosity with time, with a gradual return to the initial viscosity when the shearing is stopped. This is called thixotropy. Paints are the most obvious examples of thixotropic materials. As the name suggests, antithixotropy involves an increase in viscosity with time at a constant rate of shear.

Clearly, the measurement of the shear viscosity within an industrial context is important and requires an understanding of material behavior. Is the material Newtonian or non-Newtonian? Is thixotropy important? Other questions come to mind. Many industrial processes involve more extensional deformation than shear flow, and this has been the motivation behind the search for extensional viscometers, which are constructed to estimate a material's resistance to a stretching motion of the sort shown schematically in Figure 7.4. In this case, it is again necessary to define an appropriate stress T and rate of strain k and to define the extensional viscosity ηE by
T = ηE·k    (7.4)
For a Newtonian liquid, ηE is a constant (≡ 3η). The extensional viscosity of some non-Newtonian liquids can take very high values, and it is this exceptional resistance to stretching in some materials, together with the practical importance of extensional flow, that makes the study of extensional viscosity so important. The reader is referred to the book Elongational Flows by Petrie (1979) for a detailed treatise on the subject. Rheometers for Molten Plastics, by Dealy (1982), on polymer-melt rheometry is also recommended in this context. A detailed assessment of the importance of non-Newtonian effects is given in the text Rheometry: Industrial Applications (Walters, 1980), which contains a general discussion of basic principles in addition to an in-depth study of various industrial applications. The popular book on viscometry by Van Wazer et al. (1963) and that of Wilkinson (1960) on non-Newtonian flow are now out of date in some limited respects, but they have stood the test of time remarkably well and are recommended to readers, provided the dates of publication of the books are appreciated. More modern treatments, developed from different but complementary viewpoints, are given in the books by Lodge (1974), Walters (1975), and Whorlow (1980). Again, the text by Dealy (1982) is limited to polymer-melt rheometry, but much of the book is of general interest to those concerned with the measurement of viscosity.
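The power-law model (7.3) implies an apparent viscosity η(γ̇) = τ/γ̇ = K·γ̇^(n−1); the short sketch below (Python; the K and n values are illustrative assumptions) shows how the three classes of behavior follow from the index n:

    # Apparent viscosity of a power-law fluid: eta(gdot) = K * gdot**(n - 1).
    def apparent_viscosity(gdot, K, n):
        """Apparent viscosity in Pa.s at shear rate gdot in 1/s."""
        return K * gdot ** (n - 1.0)

    for n, label in [(0.5, "shear-thinning"), (1.0, "Newtonian"), (1.5, "shear-thickening")]:
        etas = [apparent_viscosity(g, K=1.0, n=n) for g in (0.1, 1.0, 10.0)]
        print(f"n = {n} ({label}):", [round(e, 3) for e in etas])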
7.3 Measurement of the shear viscosity
It is clearly impracticable to construct viscometers with the infinite planar geometry associated with Newton's postulate (Figure 7.1), especially in the case of mobile liquid systems, and this has led to the search for convenient geometries and flows that have the same basic steady simple shear flow structure. This problem has now been resolved, and a number of the so-called "viscometric flows" have been used as the basis for viscometer design. (The basic mathematics is nontrivial and may be found in the texts by Coleman et al., 1966; Lodge, 1974; and Walters, 1975.) Most popular have been (1) capillary (or Poiseuille) flow, (2) circular Couette flow, and (3) cone-and-plate flow. For convenience, we briefly describe each of these flows and give the simple operating formulae for Newtonian liquids, referring the reader to detailed texts for the extensions to non-Newtonian liquids. We also include, in Section 7.3.4, a discussion of the parallel-plate rheometer, which approximates closely the flow associated with Newton's postulate.
7.3.1 Capillary Viscometer
Consider a long capillary with a circular cross-section of radius a. Fluid is forced through the capillary by the application of an axial pressure drop. This pressure drop P is measured over a length L of the capillary, far enough away from both entrance and exit for the flow to be regarded as "fully developed" steady simple shear flow. The volume rate of flow Q through the capillary is measured for each pressure gradient P/L, and the viscosity η for a Newtonian liquid can then be determined from the so-called Hagen-Poiseuille law:
Q = πPa⁴/(8ηL)    (7.5)
The nontrivial extensions to (7.5) when the fluid is non-Newtonian may be found in Walters (1975), Whorlow (1980), and Coleman et al. (1966). For example, in the case of the power-law fluid (7.3), the formula is given by
Q = [πna³/(3n + 1)]·(aP/2KL)^(1/n)    (7.6)
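A hedged numerical sketch of (7.5) and (7.6) (Python; the capillary dimensions, pressure drop, and fluid constants are assumed for illustration):

    import math

    # Hagen-Poiseuille law (7.5), rearranged to give the Newtonian viscosity.
    a = 0.5e-3   # capillary radius, m (assumed)
    L = 0.200    # length over which P is measured, m (assumed)
    P = 2.0e4    # pressure drop, Pa (assumed)
    Q = 1.0e-7   # measured volume flow rate, m^3/s (assumed)
    eta = math.pi * P * a**4 / (8 * Q * L)
    print(f"Newtonian viscosity = {eta:.4f} Pa.s")

    # Power-law extension (7.6): predicted flow for assumed K and n.
    K, n = 2.0, 0.8
    Q_pl = (math.pi * n * a**3 / (3 * n + 1)) * (a * P / (2 * K * L)) ** (1 / n)
    print(f"Power-law flow = {Q_pl:.3e} m^3/s")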
One of the major advantages of the capillary viscometer is that relatively high shear rates can be attained. Often, it is not possible to determine the pressure gradient over a restricted section of the capillary; it is then necessary, especially in the case of non-Newtonian liquids, to carefully study the pressure losses in the entry and exit regions before the results can be interpreted correctly (see, for example, Dealy, 1982, and Whorlow, 1980). Other possible sources of error include viscous heating and flow instabilities. These and other potential problems are discussed in detail by Dealy (1982), Walters (1975), and Whorlow (1980).
Figure 7.5 Schematic diagram of an Ostwald viscometer.
The so-called “kinetic-energy correction” is important when it is not possible to limit the pressure drop measurement to the steady simple shear flow region and when this is taken over the complete length L of the capillary. For a Newtonian fluid, the kinetic energy correction is given (approximately) by
P = P0 − 1.1ρQ²/(π²a⁴)    (7.7)
where P is the pressure drop required in (7.5), P0 is the measured pressure drop, and ρ is the density of the fluid. Since a gas is highly compressible, it is more convenient to measure the mass rate of flow, ṁ. Equation (7.5) then has to be replaced by (see, for example, Massey, 1968)
η = πa⁴p̄MP/(8ṁRTL)    (7.8)
where p̄ is the mean pressure in the pipe, M is the molecular weight of the gas, R is the gas constant per mole, and T is the Kelvin temperature. The kinetic-energy correction (7.7) is still valid and must be borne in mind, but in the case of a gas, this correction is usually very small. A "slip correction" is also potentially important in the case of gases, but only at low pressures. In commercial capillary viscometers for nongaseous materials, the liquids usually flow through the capillaries under gravity. A good example is the Ostwald viscometer (Figure 7.5). In this, b, c, and d are fixed marks, and there are reservoirs at D and E. The amount of liquid must be such that at equilibrium one meniscus is at d. To operate, the liquid is sucked or blown so that the other meniscus is now a few millimeters above b. The time t for the level to fall from b to c is measured. The operating formula is of the form
ν = At − B/t    (7.9)
where ν is the kinematic viscosity (≡ η/ρ). The second term on the right-hand side of Equation (7.9) is a correction factor for end effects. For any particular viscometer, A and B are given as calibration constants. Viscometers with pipes of different radii are supplied according to British Standards specifications; a "recommended procedure" is also given in B.S. Publication 188:1957. Relying on gravity flow alone limits the range of measurable stress to between 1 and 15 N m⁻². The upper limit can be increased to 50 N m⁻² by applying a known steady pressure of inert gas over the left-hand side of the U-tube during operation.
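Applying (7.9) is a one-line calculation once the calibration constants are known; a minimal sketch (Python; A, B, and the efflux time are assumed values):

    # Ostwald viscometer (7.9): kinematic viscosity from the efflux time t,
    # using the viscometer's calibration constants A and B.
    A = 1.0e-8   # (m^2/s)/s, assumed calibration constant
    B = 1.0e-6   # m^2, assumed calibration constant
    t = 300.0    # measured efflux time, s (assumed)

    nu = A * t - B / t
    print(f"nu = {nu:.3e} m^2/s = {nu * 1e6:.2f} cSt")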
7.3.2 Couette Viscometer
The most popular rotational viscometer is the Couette concentric-cylinder viscometer. Fluid is placed in the annulus between two concentric cylinders (regarded as infinite in the interpretation of data), which are in relative rotation about their common axis. It is usual for the outer cylinder to rotate and for the torque required to keep the inner cylinder stationary to be measured, but there are variants, as in the Brookfield viscometer, for example, where a cylindrical bob (or sometimes a disc) is rotated in an expanse of test liquid and the torque on this same bob is recorded; see Section 7.4. If the outer cylinder of radius r0 rotates with angular velocity Ω0 and the inner cylinder of radius r1 is stationary, the torque C per unit length of cylinder on the inner cylinder for a Newtonian liquid is given by
C = 4πΩ0r1²r0²η/(r0² − r1²)
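Rearranged for η, the Couette expression gives the viscosity directly from a torque measurement; the sketch below (Python) uses assumed geometry and torque:

    import math

    # Couette viscometer: viscosity from the torque per unit length C
    # on the stationary inner cylinder.
    r0 = 0.025      # outer cylinder radius, m (assumed)
    r1 = 0.023      # inner cylinder radius, m (assumed)
    omega0 = 10.0   # angular velocity of the outer cylinder, rad/s (assumed)
    C = 0.05        # measured torque per unit length, N (assumed)

    eta = C * (r0**2 - r1**2) / (4 * math.pi * omega0 * r1**2 * r0**2)
    print(f"eta = {eta:.3f} Pa.s")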
Figure 7.6 Basic cone-and-plate geometry.
Ω0, and the torque C required to keep the plate stationary is measured. The gap angle θ0 is usually very small.

The output voltage from the bridge, and hence the bridge sensitivity S, depends on the ratios of the arm resistances together with the detector resistance Rd and the source resistance Rs. With an electronic detector, Rd can be made large, and if Rs is small, then S is given by

S = δ/[2 + (R3/R4) + (R4/R3)]
which has a maximum value of δ/4 when (R3/R4) = 1.

The unbalanced mode is shown in Figure 27.51(a) and is often used with strain gauges (Chapter 4). R1 is the active strain gauge and R2 is the dummy gauge subject to the same temperature changes as R1 but no strain. The output from the bridge is given by

Vout = (Vs/2)·{1 − 1/[1 + (δ/2)]} ≈ (Vs/4)·δ

where δ = ΔR/R.
Self-heating generally limits the bridge supply voltage and hence the output voltage. Amplification of the bridge output voltage has to be undertaken with an amplifier that has a high common-mode rejection ratio (CMRR), since the output from the bridge is in general small, and the common-mode signal applied to the amplifier is Vs/2. Further details of amplifiers suitable for use as bridge detectors can be found in Part 4.

The output from a strain-gauge bridge can be increased if four gauges are employed, with two in tension and two in compression, as shown in Figure 27.51(b). For such a bridge the output is given by

Vout = Vs·δ

Strain gauges and platinum resistance thermometers may be situated at a considerable distance from the bridge, and the long leads connecting the active element to the bridge will have a resistance that will vary with temperature. Figure 27.52 shows the use of the Wheatstone bridge in three-lead resistance measurement, where it can be seen that close to balance the effect of the lead resistance and its temperature variation is approximately self-canceling and that the canceling effect deteriorates the further the bridge condition departs from balance. Figure 27.53 shows the use of Smith and Muller bridges to eliminate the lead resistance of a four-lead platinum resistance thermometer. (See also Chapter 1.)
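The half-bridge and full-bridge expressions above can be checked numerically; a minimal sketch (Python; the supply voltage and gauge change are assumptions):

    # Unbalanced Wheatstone bridge with one active gauge, R1 = R(1 + delta),
    # and a dummy gauge R2 = R: exact output, small-delta approximation,
    # and the four-active-gauge (full bridge) output.
    Vs = 10.0      # bridge supply, V (assumed)
    delta = 0.002  # fractional resistance change of the active gauge (assumed)

    v_exact = (Vs / 2) * (1 - 1 / (1 + delta / 2))
    v_approx = (Vs / 4) * delta
    v_full = Vs * delta
    print(f"exact {v_exact * 1e3:.4f} mV; approx {v_approx * 1e3:.4f} mV; full bridge {v_full * 1e3:.2f} mV")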
27.7.1.1 Low-Resistance Measurement
Contact resistance causes errors in the measurement of low resistance, and therefore, to accurately define a resistance, it is necessary to employ the four-terminal technique shown in Figure 27.54. The outer two terminals are used to supply the current to the resistance, and the inner two, the potential terminals, determine the precise length of conductor over which the resistance is defined. Measurement of low resistance is undertaken using the Kelvin double bridge shown in Figure 27.55(a). R1 is the resistance to be measured and R2 is a standard resistance on the same order of magnitude as R1. The link between them, which is sometimes referred to as the yoke, has resistance r. The current through R1 and R2 is regulated by R. R3, R4, r3, and r4 are four resistances of which either R3 and r3 or R4 and r4 are variable and for which

R3/R4 = r3/r4

Figure 27.51 (a) Unbalanced Wheatstone bridge; (b) unbalanced Wheatstone bridge with increased sensitivity.
Figure 27.52 Three-lead measurements using a Wheatstone bridge.
The delta star transformation applied to the bridge as shown in Figure 27.55(b) apportions the yoke resistance between the two sides of the bridge. The balance condition is given by

(R1 + ra)/(R2 + rc) = R3/R4

where

ra = r3·r/(r3 + r4 + r);  rc = r4·r/(r3 + r4 + r)

and thus the unknown resistance R1 is given by

R1 = (R3/R4)·(R2 + rc) − ra = (R3/R4)·R2 + [r4·r/(r3 + r4 + r)]·[(R3/R4) − (r3/r4)]
The term involving the yoke resistance r can be made small by making r small and by making

R3/R4 = r3/r4

The bridge can be used to measure resistances typically from 0.1 μΩ to 1 Ω. For high precision the effect of thermally generated EMFs can be eliminated by reversing the current in R1 and R2 and rebalancing the bridge. The value of R1 is then taken as the average of the two measurements.
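Evaluating the Kelvin double-bridge balance equation shows how small the yoke term becomes when R3/R4 and r3/r4 are nearly matched (Python sketch; all component values are illustrative assumptions):

    # Kelvin double bridge: unknown R1 from the balance equation,
    # including the yoke-resistance correction term.
    R2 = 0.01              # standard resistance, ohm (assumed)
    R3, R4 = 100.0, 100.0  # ratio arms (assumed)
    r3, r4 = 100.0, 99.5   # slightly mismatched to make the yoke term visible
    r = 0.001              # yoke resistance, ohm (assumed)

    yoke_term = (r4 * r / (r3 + r4 + r)) * (R3 / R4 - r3 / r4)
    R1 = (R3 / R4) * R2 + yoke_term
    print(f"R1 = {R1:.8f} ohm (yoke contribution {yoke_term:.2e} ohm)")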
27.7.1.2 High-Resistance Measurement
Modified Wheatstone bridges can be used to measure high resistance up to 10¹⁵ Ω. The problems in such measurements arise from the difficulty of producing stable, high-value standard resistors and errors caused by shunt-leakage resistance. The problem of stable high-resistance values can be overcome by using the bridge with lower-value and therefore more stable resistances. This leads to bridges that have larger ratios and hence reduced sensitivity, the bridge being operated with R4 as the variable element, with R4 → 0 as R1 → ∞.

The shunt leakage is made up of leakage resistance across the leads and the terminals of the bridge and across the unknown resistor itself. High-value standard resistors are constructed with three terminals. In the bridge arrangement shown in Figure 27.56(a), Rsh1 shunts R3; thus if R1 >> R3, this method of connection decreases the effect of the leakage resistance. The only effect of Rsh2 is to reduce the sensitivity of the balance condition. Figure 27.56(b) shows a DC form of the Wagner grounding arrangement used to eliminate the effect of leakage resistance. The bridge balance then involves balancing the bridge with the detector across BC by adjusting R6 and then balancing the bridge with the detector across AB by adjusting R4. The procedure is then repeated until a balance is achieved under both conditions. The first balance condition ensures
Figure 27.53 (a) Smith bridge for four-lead platinum resistance thermometer measurement; (b) Muller bridge for four-lead platinum resistance thermometer measurement.
Figure 27.54 A four-terminal resistance.
that there is no potential drop across Rsh2 and thus no current flows through it.
27.7.2 AC Equivalent Circuits of Resistors, Capacitors, and Inductors
Resistors, capacitors, and inductors do not exist as pure components. They are in general made up of combinations
of all three impedance elements. For example, a resistor may have both capacitive and inductive parasitic elements. Figure 27.57 shows the complete equivalent circuits for physical realizations of the three components together with simplified equivalent circuits, which are commonly used. Further details of these equivalent circuits can be found in Oliver and Cage (1971). At any one frequency, any physical component can be represented by its complex impedance Z = R ± jX or its admittance Y = G ± jB. Since Y = 1/Z and Z = 1/Y, then

R = G/(G² + B²);  X = −B/(G² + B²)

and

G = R/(R² + X²);  B = −X/(R² + X²)
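These conversions are just the real and imaginary parts of Y = 1/Z, which a few lines of complex arithmetic confirm (Python; the impedance value is an arbitrary example):

    # Series <-> parallel conversion via Y = 1/Z.
    Z = complex(50.0, 30.0)  # R + jX at the measurement frequency (example value)
    Y = 1 / Z                # G + jB
    R, X = Z.real, Z.imag

    # Both lines should print matching pairs.
    print(f"G = {Y.real:.6f}, formula gives {R / (R**2 + X**2):.6f}")
    print(f"B = {Y.imag:.6f}, formula gives {-X / (R**2 + X**2):.6f}")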
Figure 27.55 (a) Kelvin double bridge; (b) equivalent circuit of Kelvin double bridge.
These two representations of the component correspond to series and parallel equivalent circuits. If at a given frequency the impedance is Z = R + jX, the equivalent circuit at that frequency in terms of ideal components is a resistor in either series or parallel with an inductor, as shown in Figure 27.58(a). This figure also gives the conversion formulae between the two representations. For components for which the impedance at any given frequency is given by Z = R − jX, the equivalent circuits are series or parallel combinations of a resistor and a capacitor, as in Figure 27.58(b).

The quality factor, Q, is a measure of the ability of a reactive element to act as a pure storage element. It is defined as

Q = 2π × (maximum energy stored in the cycle)/(energy dissipated per cycle)

The dissipation factor, D, is given by

D = 1/Q

The Q and D factors for the series and parallel inductive and capacitive circuits are given in Figure 27.58. From this figure it can be seen that Q is given by tan θ and D by tan δ, where δ is the loss angle. Generally, the quality of an inductance is measured by its Q factor and the quality of a capacitor by its D value or loss angle.

Figure 27.56 (a) Wheatstone bridge for use with three-terminal high resistances; (b) DC Wagner earthing arrangement.

27.7.3 Four-Arm AC Bridge Measurements
If the resistive elements of the Wheatstone bridge are replaced by impedances and the DC source and detector are replaced by their AC equivalents, as shown in Figure 27.59, then if Z1 is the unknown impedance, the balance condition is given by

Z1 = Z2·Z3/Z4;  R1 + jX1 = (R2 + jX2)(R3 + jX3)/(R4 + jX4)

or

|Z1| = |Z2||Z3|/|Z4| and ∠Z1 = ∠Z2 + ∠Z3 − ∠Z4
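Because the balance condition is just complex-number algebra, the unknown can be evaluated directly (Python sketch; the arm impedances are illustrative assumptions, roughly a capacitive standard against resistive ratio arms):

    import cmath

    # Four-arm AC bridge balance: Z1 = Z2 * Z3 / Z4.
    Z2 = complex(100.0, 0.0)    # resistive ratio arm, ohm (assumed)
    Z3 = complex(0.0, -1591.5)  # capacitive standard, ~0.1 uF at 1 kHz (assumed)
    Z4 = complex(1000.0, 0.0)   # resistive ratio arm, ohm (assumed)

    Z1 = Z2 * Z3 / Z4
    print(f"|Z1| = {abs(Z1):.2f} ohm, angle(Z1) = {cmath.phase(Z1):.4f} rad")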
Figure 27.57 Equivalent circuit for physical realizations of resistance, capacitance, and inductance.
There are, therefore, a very large number of possible bridge configurations. The most useful can be classified according to the following scheme due to Ferguson. Since the unknown impedance has only two parameters R1 and X1, it is therefore sufficient to adjust only two of the six available parameters on the right-hand side of the balance equation. If the adjustment for each parameter of the unknown impedance
is to be independent, the variables should be adjusted in the same branch. Adjusting the parameters R2, X2 is the same as adjusting parameters R3, X3, and thus four-arm bridges can be classified into one of two types: either ratio bridges or product bridges. In the ratio bridge, the adjustable elements in either Z2 or Z3 are adjacent to the unknown impedance and the
Figure 27.58 (a) Equivalent series/parallel resistor and inductor circuits; (b) equivalent series/parallel resistor and capacitor circuits.
Figure 27.59 AC four-arm bridge.
ratio, either Z3/Z4 or Z2/Z4, must be either real or imaginary but not complex if the two elements in the balance condition are to be independent. In product bridges the balance is achieved by adjusting the elements in Z4, which is opposite the unknown. For the adjustments to be independent requires Z2·Z3 to be real or imaginary but not complex. Figure 27.60 gives examples of a range of commonly used four-arm bridges for the measurement of C and L. For further details concerning the application of such bridges, the reader should consult Hague and Foord (1971).

27.7.3.1 Stray Impedances in AC Bridges
Associated with the branches, source, and detector of an AC bridge are distributed capacitances to ground. The use of shields around these elements enables the stray capacitances to be defined in terms of their location, magnitude, and effect. Figure 27.61(a) shows these capacitances, and Figure 27.61(b) shows the equivalent circuit with the stray capacitances transformed to admittances across the branches of the bridge and the source and detector. The stray admittances across the source and detector do not affect the balance condition. The balance condition of the bridge in terms of the admittances of the branches and the admittances of the stray capacitances across them is given by

(Y1 + YAB)(Y4 + YCD) = (Y3 + YAD)(Y2 + YCB)
where, for example,

YAB = YA·YB/(YA + YB + YC + YD) = YA·YB/D;  D = YA + YB + YC + YD
Figure 27.60 AC four-arm bridges for the measurement of capacitance and inductance. (Continued)
Figure 27.60 (Continued)
and thus the balance condition is given by

(Y1·Y4 − Y2·Y3) + (1/D)·(Y1·YC·YD + Y4·YA·YB − Y3·YC·YB − Y2·YA·YD) = 0
If the stray capacitances are to have no effect on the balance condition, this must be given by Y1·Y4 = Y2·Y3, and the second term of the balance condition must be zero. It can be easily shown that this can be achieved by either

YA/YC = Y1/Y2 = Y3/Y4

or

YB/YD = Y1/Y3 = Y2/Y4
Thus, the stray impedances to ground have no effect on the balance condition if the admittances at one opposite pair of branch points are in the same ratio as the admittances of the pairs of branches shunted by them. The Wagner earthing arrangement shown in Figure 27.62 ensures that Points D and B of the balanced bridge are at ground potential; thus the effect of stray impedances at these points is eliminated. This is achieved by means of an
auxiliary arm of the bridge, consisting of the elements Y5 and Y6. The bridge is first balanced with the detector between D and B by adjusting Y3. The detector is moved between B and E and the auxiliary bridge balanced by adjusting Y5 and Y6. This ensures that Point B is at earth potential. The two balancing processes are repeated until the bridge balances with the detector in both positions. The balance conditions for the main bridge and the auxiliary arm are then given by

Y1·Y4 = Y2·Y3

and

Y3·(Y6 + YC) = Y4·(Y5 + YA)
27.7.4 Transformer Ratio Bridges
Transformer ratio bridges, which are also called inductively coupled bridges, largely eliminate the problems associated with stray impedances. They also have the advantage that only a small number of standard resistors and capacitors is needed. Such bridges are therefore commonly used as universal bridges to measure the resistance, capacitance, and inductance of components having a wide range of values at frequencies up to 250 MHz. The element that is common to all transformer ratio bridges is the tapped transformer winding, shown in Figure 27.63. If the transformer is ideal, the windings have zero leakage flux, which implies that all the flux from one
winding links with the other, and zero winding resistance. The core material on which the ideal transformer is wound has zero eddy-current and hysteresis losses. Under these circumstances, the ratio of the voltages V1 to V2 is identical to the ratio of the turns n1 to n2, and this ratio is independent of the loading applied to either winding of the transformer. In practice the transformer is wound on a tape-wound toroidal core made from a material such as supermalloy or supermumetal, which has low eddy-current and hysteresis loss as well as high permeability. The coil is wound as a multistranded rope around the toroid, with individual strands in the rope joined in series, as shown in Figure 27.64. This configuration minimizes the leakage inductance of the windings. The windings are made of copper with the largest cross-sectional area to minimize their resistance. Figure 27.65 shows an equivalent circuit of such a transformer. L1 and L2 are
Figure 27.61 (a) Stray capacitances in a four-arm AC bridge; (b) equivalent circuit of an AC four-arm bridge with stray admittances.
Figure 27.64 Construction of a toroidal tapped transformer.
Figure 27.62 Wagner earthing arrangement.
Figure 27.63 Tapped transformer winding.
Figure 27.65 Equivalent circuit of a tapped transformer.
the leakage inductances of the windings; R1 and R2 are the winding resistances; M is the mutual inductance between the windings; and R represents hysteresis and eddy-current loss in the core. The ratio error from the ideal value of n1/n2 is given approximately by

{[n2·(R1 + jωL1) − n1·(R2 + jωL2)]/(n1 + n2)}·[1/R + 1/(jωM)] × 100%
and this error can be made to be less than 1 part in 10⁶. The effect of loading is also small. An impedance Z applied across the n2 winding gives a ratio error of

{(n1/n2)·(R2 + jωL2) + (n2/n1)·(R1 + jωL1)}/{[(n1 + n2)/n2]·Z} × 100%

For an equal bridge, with n1 = n2, this is

(R2 + jωL2)/Z × 100%

which is approximately the same error as if the transformer consisted of a voltage source with an output impedance given by its leakage inductance and the winding resistance. These can be made to be small, and thus the effective output impedance of the transformer is low; therefore, the loading effect is small. The input impedance of the winding seen by the AC source is determined by the mutual inductance of the windings (which is high) and the loss resistance (which is also high).

Multidecade ratio transformers, as shown in Figure 27.66, use windings either with separate cores for each decade or all wound on the same core. For the multicore transformer, the input for the next decade down the division chain is the output across a single tap of the immediately higher decade. For the windings on a single core, the number of decades that can be accommodated is limited by the need to maintain the volts/turn constant over all the decades, and therefore the number of turns per tap at the higher decade becomes large. Generally a compromise is made between the number of cores and the number of decades on a single core.

27.7.4.1 Bridge Configurations
There are three basic bridge configurations, as shown in Figure 27.67. In Figure 27.67(a) the detector indicates a null when

Z1/Z2 = V1/V2

and for practical purposes

V1/V2 = n1/n2 = n

Thus Z1 = n·Z2; |Z1| = n·|Z2| and ∠Z1 = ∠Z2
Figure 27.66 Multidecade ratio transformers.
Figure 27.68 Universal bridge.
Figure 27.67 (a) Autotransformer ratio bridge; (b) double-wound transformer ratio bridge; (c) double ratio bridge.
The bridge can therefore be used for comparing like impedances. The three-winding voltage transformer shown in Figure 27.67(b) has the same balance condition as the bridge in Figure 27.67(a). However, in the three-winding bridge the voltage ratio can be made more nearly equal to the turns ratio. The bridge has the disadvantage that the leakage inductance and winding resistance of each section is in series with Z1 and Z2; therefore the bridge is most suitable for the measurement of high impedances.

Figure 27.67(c) shows a double ratio transformer bridge in which the currents I1 and I2 are fed into a second double-wound transformer. The detector senses a null condition when there is zero flux in the core of the second transformer. Under these conditions, for an ideal transformer,

I1·n′1 = I2·n′2;  I1/I2 = n′2/n′1 = 1/n′

and the second transformer presents zero input impedance. Therefore, since

I1/I2 = (V1/V2)·(Z2/Z1) = n·(Z2/Z1)

then

Z1 = n·n′·Z2;  |Z1| = n·n′·|Z2| and ∠Z1 = ∠Z2
By using the two ratios, this bridge extends the range of measurement that can be covered by a small number of standards. Figure 27.68 shows a universal bridge for the measurement of R, C, and L. In the figure only two decades of the inductive divider that control the voltages applied to the bank of identical fixed capacitors and resistors are shown. The balance condition for the bridge, when connected to measure capacitance, is given by

Cu = (n′2/n′1)·(n2/10 + n4/100)·Cs

and

1/Ru = (n′2/n′1)·(n1/10 + n3/100)·(1/Rs)

When measuring inductance, the current through the capacitor and inductor are summed into the current transformer, and the value of capacitance determined is the value that resonates with the inductance. For an unknown inductance, its measured values in terms of its parallel equivalent circuit are given by

Lup = 1/(ω²·Cu);  Rup = Ru

where the values of Cu and Ru are given in these equations. The value of ω is chosen such that it is a multiple of 10, and therefore the values of Lup and Cu are reciprocal. The values of Lup and Rup can be converted to their series equivalent values using the equations in Section 27.7.2. The transformer ratio bridge can also be configured to measure low impedances, high impedances, and network and amplifier characteristics. The ampere-turn balance used in ratio bridges is also used in current comparators employed in the calibration of current transformers and for intercomparing
four-terminal impedances. Details of these applications can be found in Gregory (1973), Hague and Foord (1971), and Oliver and Cage (1971). The current comparator principle can also be extended to enable current comparison to be made at DC (Dix and Bailey, 1975). Transformer ratio bridges are often used with capacitive and inductive displacement transducers because they are immune to errors caused by earth-leakage impedances and since they offer an easily constructed, stable, and accurately variable current or voltage ratio (Hugill, 1983; Neubert, 1975).
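The universal-bridge relations above lend themselves to a short worked sketch (Python; the frequency and the balancing C and R values are assumed, and the parallel-to-series conversion is the standard one referred to in Section 27.7.2):

    import math

    # Universal transformer ratio bridge: parallel-equivalent inductance from
    # the balancing capacitance, Lup = 1/(w^2 * Cu), then series conversion.
    f = 1591.55   # excitation frequency, Hz (assumed; gives w close to 1e4 rad/s)
    w = 2 * math.pi * f
    Cu = 1.0e-6   # balancing capacitance, F (assumed)
    Ru = 5000.0   # balancing resistance, ohm (assumed)

    Lup = 1 / (w**2 * Cu)          # parallel-equivalent inductance, H
    Q = Ru / (w * Lup)             # quality factor of the parallel R-L circuit
    Ls = Lup * Q**2 / (1 + Q**2)   # series-equivalent inductance
    Rs = Ru / (1 + Q**2)           # series-equivalent resistance
    print(f"Lup = {Lup * 1e3:.3f} mH, Q = {Q:.1f}; series: {Ls * 1e3:.3f} mH, {Rs:.2f} ohm")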
27.7.4.2 The Effect of Stray Impedances on the Balance Condition of Inductively Coupled Bridges
Figure 27.69 shows the unknown impedance with its associated stray impedances Zsh1 and Zsh2. The balance condition of the bridge is unaffected by Zsh1 since the ratio of V1 to V2 is unaffected by shunt loading. At balance the core of the current transformer has zero net flux. There is no voltage drop across its windings; hence there is no current flow through Zsh2. Zsh2 has therefore no effect on the balance condition. Thus the bridge rejects both stray impedances. This enables the bridge to measure components in situ while still connected to other components in a circuit. In practice, if the output impedance of the voltage transformer has a value Zvt and the current transformer has an input impedance of Zct, the error on the measurement of Z1 is given approximately by

(Zvt/Zsh1 + Zct/Zsh2) × 100%

Figure 27.70 Unbalanced inductively coupled bridge.
27.7.4.3 The Use of Inductively Coupled Bridges in an Unbalanced Condition
The balance condition in inductively coupled bridges is detected as a null. The sensitivity of the bridge determines the output under unbalance conditions and therefore the precision with which the balance can be found. Figure 27.70 shows the two-winding voltage and current transformers and their equivalent circuits. Figure 27.71 shows the sensitivities of the two bridges when used with capacitive and inductive
Figure 27.69 Effect of stray impedances on balance condition.
Figure 27.71 Sensitivity of current and voltage transformer bridges.
elements. The capacitors form a resonant circuit with the current transformer, and for frequencies below the resonant frequency, the sensitivity of the bridge is dependent on both ω, the angular excitation frequency of the bridge, and Lc, the self-inductance of the winding, as shown in Figure 27.71. The dependence of the sensitivity on ω and Lc can be reduced at the cost of reduced sensitivity (Neubert, 1975).
The amplifier output and a signal 90° shifted from that output are then passed into two phase-sensitive detectors. These detectors employ reference voltages that enable the resistive and reactive components of the unknown to be displayed. Windings can be added to the bridge to enable the bridge to measure the difference between a standard and the unknown.
27.7.4.4 Autobalancing Ratio Bridges
By employing feedback as shown in Figure 27.72, the transformer ratio bridge can be made to be self-balancing. The high-gain amplifier ensures that at balance the current from the unknown admittance Yu is balanced by the current through the feedback resistor. Thus at balance

V1·Yu·n′1 = (Vout/R)·n′2

with

V1 = V̂1 sin ωt

and

Vout = V̂out sin(ωt + φ);  Yu = Gu + jBu

Gu = (n′2/n′1)·(1/R)·(V̂out/V̂1)·cos φ;  Bu = (n′2/n′1)·(1/R)·(V̂out/V̂1)·sin φ
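Recovering Gu and Bu from the two phase-sensitive detector outputs is then direct (Python sketch; the turns ratio, feedback resistor, and measured amplitudes are assumed values):

    import math

    # Autobalancing ratio bridge: unknown admittance from the balanced
    # output amplitude and phase.
    n_ratio = 1.0              # n2'/n1' (assumed)
    R = 1.0e4                  # feedback resistor, ohm (assumed)
    V1, Vout = 1.0, 0.25       # excitation and output amplitudes, V (assumed)
    phi = math.radians(30.0)   # measured phase shift (assumed)

    Gu = n_ratio * (1 / R) * (Vout / V1) * math.cos(phi)
    Bu = n_ratio * (1 / R) * (Vout / V1) * math.sin(phi)
    print(f"Gu = {Gu:.3e} S, Bu = {Bu:.3e} S")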
27.7.5 High-Frequency Impedance Measurement
As the frequency of measurement is increased, the parasitic elements associated with real components begin to dominate the measurement. Therefore, RF bridges employ variable capacitors (typically less than 1,000 pF) as the adjustable elements and fixed resistors whose physical dimensions are small. A bridge that can be constructed using these elements is the Schering bridge, shown in Figure 27.60. Great care has to be taken with shielding and wiring layout in RF bridges to avoid large coupling loops. The impedance range covered by such bridges decreases as the frequency is raised. At microwave frequencies all the wiring is coaxial, discrete components are no longer used, and impedance measurements can only be undertaken for impedances close to the characteristic impedance of the system. Further details of high-frequency measurements can be found in Oliver and Cage (1971) and Somlo and Hunter (1985).

The bridged T and parallel T circuits (shown in Figure 27.73 together with their balance conditions) can be used for measurements at RF frequencies. The parallel T measurement technique has the advantage that the balance can be achieved using two grounded variable capacitors.
Figure 27.72 Autobalancing ratio bridge.
Figure 27.73 Bridged T (a) and parallel T (b) circuits for the measurement of impedance at high frequencies.
Resonance methods can also be used for the measurement of components at high frequencies. One of the most important uses of resonance in component measurement is the Q meter, shown in Figure 27.74. In measuring inductance as shown in Figure 27.74(a), the variable capacitor C, which forms a series-resonant circuit with Lus, is adjusted until the detector detects resonance at the frequency f. The resonance is detected as a maximum voltage across C. At resonance, Q is given by

Q = Vc/Vin = VL/Vin

and Lus is given by

Lus = 1/(4π²f²C)

The value of Rus is given by

Rus = 1/(2πfCQ)

The self-capacitance of an inductor can be determined by measuring the value of C—say, C1—which resonates with it at a frequency f, together with the value of C—say, C2—which resonates with the inductance at 2f. Then C0, the self-capacitance of the coil, is given by

C0 = (C1 − 4C2)/3

In Figure 27.74(b), the use of the Q meter to measure the equivalent parallel capacitance and resistance of a capacitor is shown. Using a standard inductor at a frequency f, the capacitor C is adjusted to a value C1 at which resonance occurs. The unknown capacitor is connected across C, and the value of C is adjusted until resonance is found again. If this value is C2, the unknown capacitor Cup has a value given by

Cup = C1 − C2

Its dissipation factor, D, is given by

D = [(Q1 − Q2)/(Q1·Q2)]·[C1/(C1 − C2)]

where Q1 and Q2 are the measured Q values at the two resonances. Its parallel resistance, Rup, is given by

Rup = [Q1·Q2/(Q1 − Q2)]·[1/(2πfC1)]

The elements of the high-frequency equivalent circuit of a resistance in Figure 27.74(c) can also be measured. At a
Figure 27.74 Q meter. (a) Inductance measurement; (b) capacitance measurement; (c) resistance measurement.
given frequency, f, the capacitor C is adjusted to a value C1 such that it resonates with L. The resistor is then connected across the capacitor and the value of C adjusted until resonance is reestablished. Let this value of C be C2. If the values of Q at the resonances are Q1 and Q2, respectively, values of the unknown elements are given by

Rup = [Q1·Q2/(Q1 − Q2)]·[1/(2πfC1)]

Cup = C1 − C2

and

Lup = 1/[(2πf)²·Cup]
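The Q-meter relations combine naturally into one short calculation (Python; the frequency, capacitor settings, and indicated Q are assumed readings):

    import math

    # Q meter, inductance mode: series inductance, loss resistance, and
    # coil self-capacitance from resonances at f and 2f.
    f = 1.0e6        # test frequency, Hz (assumed)
    C1 = 253.3e-12   # capacitance resonating at f, F (assumed reading)
    C2 = 60.0e-12    # capacitance resonating at 2f, F (assumed reading)
    Q = 150.0        # indicated Q at the first resonance (assumed)

    Lus = 1 / (4 * math.pi**2 * f**2 * C1)  # series inductance
    Rus = 1 / (2 * math.pi * f * C1 * Q)    # series loss resistance
    C0 = (C1 - 4 * C2) / 3                  # self-capacitance of the coil
    print(f"L = {Lus * 1e6:.1f} uH, R = {Rus:.2f} ohm, C0 = {C0 * 1e12:.2f} pF")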
27.8 Digital frequency and period/time-interval measurement
These measurements, together with frequency ratio, phase difference, rise and fall time, and duty-factor measurements, employ digital counting techniques and are all fundamentally related to the measurement of time.
The SI unit of time is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the F = 4, mF = 0 and F = 3, mF = 0 hyperfine levels of the ground state of the cesium-133 atom. The unit is realized by means of the cesium-beam atomic clock, in which the cesium beam undergoes a resonance absorption, corresponding to the required transition, from a microwave source. A feedback mechanism maintains the frequency of the microwave source at the resonance frequency. The SI unit can be realized with an uncertainty of between 1 part in 10¹³ and 1 part in 10¹⁴. Secondary standards are provided by rubidium gas-cell resonator-controlled oscillators or quartz crystal oscillators. The rubidium oscillator uses an atomic resonance effect to maintain the frequency of a quartz oscillator by means of a frequency-lock loop. It provides a typical short-term stability (averaged over a 100-s period) of five parts in 10¹³ and a long-term stability of one part in 10¹¹/month. Quartz crystal oscillators provide inexpensive secondary standards with a typical short-term stability (averaged over a 1-s period) of five parts in 10¹² and a long-term stability of better than one part in 10⁸/month. Details of time and frequency standards can be found in Hewlett-Packard (1974).

Dissemination of time and frequency standards is also undertaken by radio broadcasts. Radio stations transmit waves for which the frequencies are known to an uncertainty of a part in 10¹¹ or 10¹². Time-signal broadcasting on a time scale known as Coordinated Universal Time (UTC) is coordinated by the Bureau International de l'Heure (BIH) in Paris. The BIH annual report details the national authorities responsible for time-signal broadcasts, the accuracies of the carrier frequencies of the standard frequency broadcasts, and the characteristics of national time-signal broadcasts. Table 27.9 provides details of time broadcast facilities in the United Kingdom.
27.8.1 Frequency Counters and Universal Timer/Counters
Frequency measurements are undertaken by frequency counters, the functions of which (in addition to frequency measurement) may also include frequency ratio, period measurement, and totalization. Universal timer/counters provide the functions of frequency counters with the addition of time-interval measurement. Figure 27.75 shows the elements of a microprocessor-controlled frequency counter. The input signal conditioning unit accepts a wide range of input signal levels, typically with a maximum sensitivity corresponding to a sinusoid having an RMS value of 20 mV and a dynamic range from 20 mV rms to 20 V rms. The trigger circuit has a trigger level that is either set automatically with respect to the input wave or can be continuously adjusted over some range. The trigger circuit generally employs hysteresis to reduce the effect of noise on the waveform, as shown in
TABLE 27.9 U.K. time broadcasts

GBR 16 kHz, radiated from Rugby (52° 22′ 13″ N, 01° 10′ 25″ W)
Power: ERP 65 kW
Transmission modes: A1, FSK (16.00 and 15.95 kHz), and MSK (future)
Time signals:
Schedule (UTC): 0255 to 0300, 0855 to 0900, 1455 to 1500, 2055 to 2100. There is an interruption for maintenance from 1000 to 1400 every Tuesday.
Form of the time signals: A1-type second pulses lasting 100 ms, lengthened to 500 ms at the minute. The reference point is the start of carrier rise. Uninterrupted carrier is transmitted for 24 s from 54 m 30 s and from 0 m 6 s. DUT1: CCIR code by double pulses.

MSF 60 kHz, radiated from Rugby
Power: ERP 27 kW
Schedule (UTC): Continuous, except for an interruption for maintenance from 1000 to 1400 on the first Tuesday in each month.
Form of the time signals: Interruptions of the carrier of 100 ms for the second pulses and of 500 ms for the minute pulses; the epoch is given by the beginning of the interruption. BCD NRZ code, 100 bits/s (month, day of month, hour, minute), during minute interruptions. BCD PWM code, 1 bit/s (year, month, day of month, day of week, hour, minute) from seconds 17 to 59 in each minute. DUT1: CCIR code by double pulses.

The MSF and GBR transmissions are controlled by a cesium-beam frequency standard. Accuracy ±2 × 10⁻¹².
Figure 27.76(a), although this can cause errors in time measurement, as shown in Figure 27.76(b).

The quartz crystal oscillator in a frequency counter or universal counter timer can be uncompensated, temperature compensated, or oven stabilized. The frequency stability of quartz oscillators is affected by aging, temperature, variations in supply voltage, and changes in power supply mode, that is, changing from line-frequency supply to battery supply. Table 27.10 gives comparative figures for the three types of quartz oscillator. The uncompensated oscillator gives sufficient accuracy for five- or six-digit measurement in most room-temperature applications. The temperature-compensated oscillator has a temperature-dependent compensating network for frequency correction and can give sufficient accuracy for a six- or seven-digit instrument. Oven-stabilized oscillators maintain the temperature of the crystal typically at 70 ± 0.01°C. They generally employ higher-mass crystals with lower resonant frequencies and operate at an overtone of their fundamental frequency. They have better aging performance than the other two types of crystal and are suitable for use in seven- to nine-digit instruments.

The microprocessor provides control of the counting operation and the display and post-measurement computation. Conventional frequency counters count the number of cycles, ni, of the input waveform of frequency, fi, in a gating period, tg, which corresponds to a number of counts, nosc, of the 10 MHz crystal oscillator. They have an uncertainty corresponding to ±1 count of the input waveform. The relative resolution is given by
Relative resolution = (smallest measurable change in measurement value)/(measurement value)

and for the measurement of frequency is thus

±1/(tg·fi) = ±1/(gating period × input frequency)
To achieve measurements with good relative resolution for low-frequency signals, long gating times are required. Reciprocal frequency counters synchronize the gating time to the input waveform, which then becomes an exact number of cycles of the input waveform. The frequency of the input waveform is thus calculated as

fi = (number of cycles of input waveform)/(gating period) = ni/(nosc × 10⁻⁷) Hz
The relative resolution of the reciprocal method is

±10⁻⁷/tg = ±10⁻⁷/(gating time) = ±1/nosc
independent of the input frequency, and thus it is possible to provide high-resolution measurements for low-frequency signals. Modern frequency counters often employ both methods, using the conventional method to obtain the high resolution at high frequencies.
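The gain of the reciprocal method at low frequencies is easy to quantify (Python sketch; the input frequency and gating period are assumed):

    # Relative resolution: conventional vs. reciprocal counting of a
    # low-frequency input with a 10 MHz reference clock.
    fi = 50.0         # input frequency, Hz (assumed)
    tg = 1.0          # gating period, s (assumed)
    T_clock = 1.0e-7  # period of the 10 MHz clock, s

    conventional = 1 / (tg * fi)  # +/-1 count of the input waveform
    reciprocal = T_clock / tg     # +/-1 count of the reference clock
    print(f"conventional: {conventional:.1e}; reciprocal: {reciprocal:.1e}")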
Figure 27.75 Digital frequency counter.
The period, Ti, of the input wave is calculated from

Ti = 1/fi = (gating period)/(number of cycles of input waveform) = (nosc × 10⁻⁷)/ni

with a relative resolution of ±1 in nosc.
The accuracy of frequency counters is limited by four factors: the system resolution and the trigger, systematic, and time-base errors. Trigger error (TE) is the absolute measurement error due to input noise causing triggering that is too early or too late. For a sinusoidal input waveform it is given by

TE = ±1/[πfi × (input signal-to-noise ratio)]
and for a nonsinusoidal wave

TE = ±(peak-to-peak noise voltage)/(signal slew rate)

Systematic error (SE) is caused by differential propagation delays in the start and stop sensors or amplifier channels of the counter or by errors in the trigger level settings of the start and stop channels. These errors can be removed by calibration. The time-base error (TBE) is caused by deviation of the frequency of the crystal oscillator from its calibrated value. The causes of this deviation have been considered previously.

The relative accuracy of frequency measurement is given by

±(resolution of fi)/fi ± TE/tg ± relative TBE

and the relative accuracy of period measurement is given by

±(resolution of Ti)/Ti ± TE/tg ± relative TBE
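Summing the terms gives a simple worst-case accuracy budget (Python; the individual error magnitudes are assumed values):

    # Worst-case relative accuracy of a frequency measurement.
    tg = 1.0                  # gating period, s (assumed)
    resolution = 1.0e-7 / tg  # relative resolution of a reciprocal counter
    TE = 3.0e-6               # trigger error, s (assumed)
    relative_TBE = 1.0e-8     # relative time-base error (assumed)

    relative_accuracy = resolution + TE / tg + relative_TBE
    print(f"worst-case relative accuracy = +/-{relative_accuracy:.2e}")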
Figure 27.77 shows the techniques employed in single-shot time interval measurement and frequency ratio measurement. Table 27.11 gives the characteristics of a series of 200 MHz universal timer/counters (Racal-Dana 9902/9904/9906).
27.8.2 Time-Interval Averaging
Single-shot time-interval measurements using a 10 MHz clock have a resolution of ±100 ns. However, by performing repeated measurements of the time interval, it is possible to significantly improve the resolution of the measurement (Hewlett-Packard, 1977b). As shown in Figure 27.78, the number of counts of the digital clock in the time interval, T, will be either n or n + 1. It can be shown that if the measurement clock and the repetition rate are asynchronous, the best estimate of the time interval, T, is given by

T = n̄·Tosc
Figure 27.76 (a) The use of hysteresis to reduce the effects of noise; (b) timing errors caused by hysteresis.
where n̄ is the average number of counts taken over N repetitions and Tosc is the period of the digital clock. The standard deviation, σT, which is a measure of the resolution in time-interval averaging (TIA), for large N is given by

σT = Tosc·√[F(1 − F)/N]
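The 1/√N improvement is easy to tabulate (Python sketch; the N values are chosen for illustration):

    import math

    # Time-interval averaging: maximum standard deviation Tosc/(2*sqrt(N)).
    Tosc = 1.0e-7  # period of the 10 MHz clock, s
    for N in (1, 100, 10_000, 1_000_000):
        sigma_max = Tosc / (2 * math.sqrt(N))
        print(f"N = {N:>9,}: max std deviation = {sigma_max * 1e12:,.0f} ps")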
TABLE 27.10 Quartz oscillator characteristics

Stability against | Uncompensated | Temperature compensated | Oven stabilized
Aging: /24 h | n.a. | n.a. | <5 × 10⁻¹⁰*
Aging: /month | <5 × 10⁻⁷ | <1 × 10⁻⁷ | <1 × 10⁻⁸
Aging: /year | <5 × 10⁻⁷ | <1 × 10⁻⁶ | <5 × 10⁻⁸
Temperature: 0–50°C ref. to +23°C | <1 × 10⁻⁵ | <1 × 10⁻⁶ | <5 × 10⁻⁹
Change in measuring and supply mode: line/int. battery/ext. DC 12–26 V | <3 × 10⁻⁷ | <5 × 10⁻⁸ | <3 × 10⁻⁹
Line voltage: ±10% | <1 × 10⁻⁸ | <1 × 10⁻⁹ | <5 × 10⁻¹⁰
Warm-up time to reach within 10⁻⁷ of final value | n.a. | n.a. | <15 min

*After 48 h of continuous operation.
Figure 27.77 (a) Single-shot time-interval measurement; (b) frequency ratio measurement.
where F lies between 0 and 1, depending on the time interval being measured. Thus the maximum standard deviation of the time estimate is \(T_{\text{osc}}/(2\sqrt{N})\). By employing repeated measurements using a 10 MHz clock, it is possible to obtain a resolution of 10 ps. Repeated measurements also reduce trigger errors caused by noise. The relative accuracy of TIA measurements is given by

\[ \pm\frac{\text{resolution of } T}{T} \pm \frac{\mathrm{SE}}{T} \pm \frac{\mathrm{TE}}{\sqrt{N}\,T} \pm \text{relative TBE} \]

With a high degree of confidence this can be expressed as

\[ \pm\frac{1}{\sqrt{N}\,\bar{n}} \pm \frac{\mathrm{SE}}{\bar{n}\,T_{\text{osc}}} \pm \frac{\mathrm{TE}}{\sqrt{N}\,\bar{n}\,T_{\text{osc}}} \pm \text{relative TBE} \]

(substituting \(T = \bar{n}\,T_{\text{osc}}\) and a resolution of \(T_{\text{osc}}/\sqrt{N}\)).
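A quick Monte Carlo check of this averaging behavior is sketched below; the 10 MHz clock matches the text, while the particular interval and the repetition counts are arbitrary choices for the demonstration.

```python
import random

# Minimal sketch: time-interval averaging. A single-shot count with a
# 10 MHz clock resolves only ~100 ns; averaging N asynchronous repetitions
# shrinks the spread roughly as T_osc * sqrt(F(1-F)/N).

T_OSC = 100e-9           # 10 MHz clock period
T_TRUE = 1.234567e-6     # interval being measured (illustrative)

def one_shot():
    phase = random.uniform(0.0, T_OSC)        # asynchronous clock phase
    return int((T_TRUE + phase) / T_OSC)      # yields n or n + 1 counts

def averaged(n_reps):
    return sum(one_shot() for _ in range(n_reps)) / n_reps * T_OSC

for n in (1, 100, 10000):
    runs = [averaged(n) for _ in range(200)]
    mean = sum(runs) / len(runs)
    sd = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
    print(f"N = {n:5d}: spread of estimate ~ {sd:.1e} s")
# The spread falls from ~5e-8 s single shot to ~5e-10 s at N = 10,000.
```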
TABLE 27.11 Universal timer/counter specifications

Measuring functions
Modes of operation: frequency; single and multiple period; single and multiple ratio; single and double-line time interval; single and double-line time interval averaging; single and multiple totalizing

Frequency measurement
Input: Channel A
Coupling: AC or DC
Frequency range: DC to 50 MHz (9902 and 9904); HF: DC to 30 MHz and VHF: 10 MHz to 200 MHz prescaled by 4 (9906)
Accuracy: ±1 count ± timebase accuracy
Gate times: (9900 and 9902) manual: 1 ms to 100 s; automatic: gate times up to 1 s are selected automatically to avoid overspill, and hysteresis avoids undesirable range changing for small frequency changes; (9904) 1 ms to 100 s in decade steps; (9906) HF: 1 ms to 100 s, VHF: 4 ms to 400 s

Single- and multiple-period measurement
Input: Channel A
Range: 1 ns to 1 s single period, 100 ns to 1 s multiple period (9902 and 9904); 1 ns to 100 s single period, 100 ns to 100 s multiple period (9906)
Clock unit: 1 ns
Coupling: AC or DC
Periods averaged: 1 to 10⁵ in decade steps
Resolution: 10 ps maximum
Accuracy: ±0.3%/(number of periods averaged) ± 1 count ± timebase accuracy
Bandwidth: automatically reduced to 10 MHz (3 dB) when period selected (measured at 50 mV rms input with 40 dB S/N ratio)

Time interval, single and double input
Input: single input: Channel B; double input: start Channel B, stop Channel A
Time range: 100 ns to 10⁴ s (approx. 2.8 h) (9902); 100 ns to 10⁵ s (approx. 28 h) (9904); 100 ns to 10⁶ s (approx. 280 h) (9906)
Accuracy: ±1 count ± trigger error ± timebase accuracy, where trigger error = 5/[signal slope at the trigger point (V/ns)] ns
Clock units: 100 ns to 10 ms in decade steps
Start/stop signals: electrical or contact
Manual start/stop: by single push button on front panel
Trigger slope selection: positive or negative slope can be selected on both start and stop
Manual start/stop (9900): by single push button on front panel; N.B. input socket automatically biased for contact operation (1 mA current sink)
Trigger slope selection (9900): electrical: positive or negative slopes can be selected on both start and stop signals; contact: opening or closure can be selected on both start and stop signals
Bounce protection (9900): a 10 ms dead time is automatically included when contact operation is selected

Time-interval averaging, single and double input
Input: single input: Channel B; double input: start Channel B, stop Channel A
TABLE 27.11 (Continued)

Time-interval averaging, single and double input (continued)
Time range: 150 ns to 100 ms (9902); 150 ns to 1 s (9904); 150 ns to 10 s (9906)
Dead time between intervals: 150 ns
Clock unit: 100 ns
Time intervals averaged: 1 to 10⁵ in decade steps
Resolution: 100 ns to 1 ps
Accuracy: ± timebase accuracy ± system error ± averaging error. System error: 10 ns per input channel; this is the difference in delays between start and stop signals and can be minimized by matching externally. Averaging error = (trigger error ± 100 ns)/(number of intervals averaged), where trigger error = 5/[signal slope at the trigger point (V/ns)] ns

Ratio
Higher-frequency input: Channel A
Higher-frequency range: 10 Hz to 30 MHz (9900); DC to 50 MHz (9902, 9904)
Lower-frequency input: Channel B
Lower-frequency range: DC to 10 MHz
Reads: (frequency A/frequency B) × n
Multiplier n: 1 to 10⁵ in decade steps (the number of gated periods)
Accuracy: ±1 count ± trigger error on Channel B, where trigger error = 5/[signal slope at the trigger point (V/ns)] ns

Totalizing
Input: Channel A (10 MHz max.)
Max. rate: 10⁷ events per second
Pulse width: 50 ns minimum at trigger points
Pre-scaling: events can be pre-scaled in decade multiples (n) from 1 to 10⁵
Reads: (number of input events)/n, +1/−0 count
Manual start/stop: by single push button on front panel
Electrical start/stop: by electrical signal applied to Channel B
27.8.3 Microwave-Frequency Measurement
Figure 27.78 Resolution of one-shot time-interval measurement.
By the use of pre-scaling, as shown in Figure 27.79, in which the input signal is frequency divided before it goes into the gate to be counted, it is possible to measure frequencies up
Figure 27.79 Frequency measurement range extension by input pre-scaling.
Figure 27.80 (a) Heterodyne converter counter; (b) transfer oscillator counter (from Hewlett-Packard, 1977a).
to approximately 1.5 GHz. Higher frequencies typically up to 20 GHz can be measured using the heterodyne converter counter shown in Figure 27.80(a), in which the input signal is down-mixed by a frequency generated from a harmonic generator derived from a crystal-controlled oscillator. In the transfer oscillator technique shown in Figure 27.80(b), a low-frequency signal is phase-locked to the microwave input signal. The frequency of the low-frequency signal is measured, together with its harmonic relationship to the microwave signal. This technique typically provides measurements up to 23 GHz. Hybrid techniques using both heterodyne down-conversion and transfer oscillator extend the measurement range, typically to 40 GHz. Further details of these techniques can be found in Hewlett-Packard (1977a).
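The arithmetic of pre-scaling is easy to sketch in code. The divide-by-8 ratio and the target resolution below are assumptions for illustration; the principle, that the gate must lengthen in proportion to the pre-scale factor to keep the same resolution, is general.

```python
# Minimal sketch: frequency range extension by input pre-scaling.
# Dividing the input by k brings it within the counter's direct range,
# but each count then represents k input cycles, so the gate time must
# be k times longer for the same relative resolution.

def max_input_frequency(f_counter_max, k):
    return f_counter_max * k

def gate_time_needed(rel_resolution, f_in, k):
    # +/-1 count of the prescaled signal during the gate
    return k / (rel_resolution * f_in)

print(max_input_frequency(200e6, 8) / 1e9, "GHz")   # 1.6 GHz with a /8 prescaler
print(gate_time_needed(1e-7, 1e9, 8), "s")          # 0.08 s gate at 1 GHz
```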
27.9 Frequency and phase measurement using an oscilloscope

Lissajous figures can be used to measure the frequency or phase of a signal with respect to a reference source, though with an accuracy much lower than for the other methods described. Figure 27.81(a) shows the technique for frequency measurement. One signal is applied to the X plates of the oscilloscope and the other to the Y plates. Figure 27.81(b) shows the resulting patterns for various ratios of the frequency \(f_x\) applied to the X plates to the frequency \(f_y\) applied to the Y plates. If \(f_x\) is the known frequency, it is adjusted until a stationary pattern is obtained; \(f_y\) is then given by

\[ f_y = \frac{f_x \, n_x}{n_y} \]
where \(n_x\) is the number of crossings of the horizontal line and \(n_y\) the number of crossings of the vertical line, as shown in Figure 27.81(b). If the two signals are of the same frequency, their relative phases can be determined from the figures shown in Figure 27.81(c). The phase angle between the two signals is given by

\[ \sin\theta = \frac{AB}{CD} \]
The accuracy of the method can be significantly increased by the use of a calibrated or calculable phase shift introduced to ensure zero phase shift between the two signals applied to the plates of the oscilloscope, as shown in Figure 27.81(d).
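These two relations translate directly into code. In the sketch below the crossing counts n_x, n_y and the screen deflections AB, CD are assumed to have been read off the display by the operator.

```python
import math

# Minimal sketch: frequency and phase from Lissajous-figure readings.

def unknown_frequency(f_x, n_x, n_y):
    # f_y = f_x * n_x / n_y, with n_x, n_y the crossings of a horizontal
    # and a vertical line through the stationary pattern
    return f_x * n_x / n_y

def phase_angle_deg(ab, cd):
    # sin(theta) = AB / CD for two equal-frequency signals
    return math.degrees(math.asin(ab / cd))

print(unknown_frequency(1000.0, 3, 2))  # stationary 3:2 pattern -> 1500.0 Hz
print(phase_angle_deg(0.5, 1.0))        # AB/CD = 0.5 -> ~30 degrees
```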
Figure 27.81 (a) Frequency measurement using Lissajous figures; (b) Lissajous figures for various ratios of fx to fy; (c) phase measurement using Lissajous figures; (d) improved phase measurement using Lissajous figures.

References
Analog Devices, Data Acquisition Databook, Analog Devices, Norwood, Mass., 10, 123-125 (1984).
Arbel, A. F., Analog Signal Processing and Instrumentation, Cambridge University Press, Cambridge (1980).
Bailey, A. E., “Units and standards of measurement,” J. Phys. E: Sci. Instrum., 15, 849-856 (1982).
Bishop, J. and E. Cohen, “Hall effect devices in power measurement,” Electronic Engineering (GB), 45, No. 548, 57-61 (1973).
British Standards Institution, BS 3938: 1973: Specification for Current Transformers, BSI, London (1973).
British Standards Institution, BS 3941: 1974: Specification for Voltage Transformers, BSI, London (1974).
British Standards Institution, BS 89: 1977: Specification for Direct Acting Electrical Measuring Instruments and their Accessories, BSI, London (1977).
Brodie, B., “A 160 ppm digital voltmeter for use in ac calibration,” Electronic Engineering (GB), 56, No. 693, 53-59 (1984).
Clarke, F. J. J. and Stockton, J. R., “Principles and theory of wattmeters operating on the basis of regularly spaced sample pairs,” J. Phys. E: Sci. Instrum., 15, 645-652 (1982).
Dix, C. H., “Calculated performance of a digital sampling wattmeter using systematic sampling,” Proc. I.E.E., 129, Part A, No. 3, 172-175 (1982).
Dix, C. H. and Bailey, A. E., “Electrical standards of measurement Part 1: DC and low-frequency standards,” Proc. I.E.E., 122, 1018-1036 (1975).
Fantom, A. E., Microwave Power Measurement, Peter Peregrinus for the IEE, Hitchin (1985).
Froelich, M., “The influence of the pH value of the electrolyte on the e.m.f. stability of the international Weston cell,” Metrologia, 10, 35-39 (1974).
Goldman, D. T. and R. J. Bell, SI: The International System of Units, HMSO, London (1982).
Golding, E. W. and Widdis, F. C., Electrical Measurements and Measuring Instruments, 5th ed., Pitman, London (1963).
Gregory, B. A., Electrical Instrumentation, Macmillan, London (1973).
Gumbrecht, A. J., Principles of Interference Rejection, Solartron DVM Monograph No. 3, Solartron, Farnborough, U.K. (1972).
Hague, B. and Foord, T. R., Alternating Current Bridge Methods, Pitman, London (1971).
Harris, F. K., Electrical Measurements, John Wiley, New York (1966).
Hewlett-Packard, Fundamentals of Time and Frequency Standards, Application Note 52-1, Hewlett-Packard, Palo Alto, Calif. (1974).
Hewlett-Packard, Fundamentals of Microwave Frequency Counters, Application Note 200-1, Hewlett-Packard, Palo Alto, Calif. (1977a).
Hewlett-Packard, Understanding Frequency Counter Specifications, Application Note 200-4, Hewlett-Packard, Palo Alto, Calif. (1977b).
Hewlett-Packard, Fundamentals of RF and Microwave Power Measurements, Application Note 64-1, Hewlett-Packard, Palo Alto, Calif. (1978).
Hugill, A. L., “Displacement transducers based on reactive sensors in transformer ratio bridge circuits,” in Instrument Science and Technology, Volume 2, ed. Jones, B. E., Adam Hilger, Bristol (1983).
Josephson, B. D., “Supercurrents through barriers,” Phys. Letters, 1, 251 (1962).
Kibble, B. P., Smith, R. C., and Robinson, I. A., “The NPL moving-coil ampere determination,” I.E.E.E. Trans., IM-32, 141-143 (1983).
Matouka, M. F., “A wide-range digital power/energy meter for systems with non-sinusoidal waveforms,” I.E.E.E. Trans., IE-29, 18-31 (1982).
Miljanic, P. N., Stojanovic, B. and Petrovic, V., “On the electronic three-phase active and reactive power measurement,” I.E.E.E. Trans., IM-27, 452-455 (1978).
NBS, NBS Monograph 84: Standard Cells—Their Construction, Maintenance, and Characteristics, NBS, Washington, D.C. (1965).
Neubert, H. P. K., Instrument Transducers, 2nd ed., Oxford University Press, London (1975).
Oliver, B. M. and Cage, J. M., Electronic Measurements and Instrumentation, McGraw-Hill, New York (1971).
Owens, A. R., “Digital signal conditioning and conversion,” in Instrument Science and Technology, Volume 2 (ed. B. E. Jones), Adam Hilger, Bristol (1983).
Pearce, J. R., “Scanning, A-to-D conversion and interference,” Solartron Technical Report Number 012183, Solartron Instruments, Farnborough, U.K. (1983).
Pitman, J. C., “Digital voltmeters: a new analog to digital conversion technique,” Electronic Technology (GB), 12, No. 6, 123-125 (1978).
Rathore, T. S., “Theorems on power, mean and RMS values of uniformly sampled periodic signals,” Proc. I.E.E., 131, Part A, No. 8, 598-600 (1984).
Rayner, G. H., “An absolute determination of resistance by Campbell’s method,” Metrologia, 3, 8-11 (1967).
Sheingold, D. H., Analog/digital Conversion Notes, Analog Devices, Norwood, Mass. (1977).
Simeon, A. O. and McKay, C. D., “Electronic wattmeter with differential inputs,” Electronic Engineering (GB), 53, No. 648, 75-85 (1981).
Somlo, P. I. and Hunter, J. D., Microwave Impedance Measurement, Peter Peregrinus for the IEE, Hitchin (1985).
Spreadbury, P. J., “Electronic voltage standards,” Electronics and Power, 27, 140-142 (1981).
Steele, J. A., Ditchfield, C. R., and Bailey, A. E., “Electrical standards of measurement Part 2: RF and microwave standards,” Proc. I.E.E., 122, 1037-1053 (1975).
Stone, N. W. B., et al., “Electrical standards of measurement Part 3: submillimeter wave measurements and standards,” Proc. I.E.E., 122, 1053-1070 (1975).
Tagg, G. F., Electrical Indicating Instruments, Butterworths, London (1974).
Thompson, A. M. and Lampard, D. G., “A new theorem in electrostatics with applications to calculable standards of capacitance,” Nature (GB), 177, 888 (1956).
Vigoureux, P., “A determination of the ampere,” Metrologia, 1, 3-7 (1965).
Vigoureux, P., Units and Standards for Electromagnetism, Wykenham Publications, London (1971).
Further Reading

Carr, J. J., Elements of Electronic Instrumentation and Measurement, Prentice Hall, Englewood Cliffs, N.J. (1986).
Coombs, F., Electronic Instrument Handbook (1994).
Fantom, A. E., Bailey, A. E., and Lynch, A. C. (eds.), Radio Frequency and Microwave Power Measurement, Institution of Electrical Engineers (1990).
O’Dell, T. H., Circuits for Electronic Instrumentation, Cambridge University Press, Cambridge (1991).
Schnell, L. (ed.), Technology of Electrical Measurements, Wiley.
Chapter 28
Optical Measurements A. W. S. Tarrant
28.1 Introduction

A beam of light can be characterized by its spectral composition, its intensity, its position and direction in space, its phase, and its state of polarization. If something happens to it to alter any of those quantities and the alterations can be quantified, a good deal can usually be found out about the “something” that caused the alteration. Consequently, optical techniques can be used in a huge variety of ways, but it would be quite impossible to describe them all here. This chapter describes a selection of widely used instruments and techniques. Optical instruments can be conveniently thought of in two categories: those basically involving image formation (for example, microscopes and telescopes) and those that involve intensity measurement (for example, photometers). Many instruments (for example, spectrophotometers) involve both processes, and it is convenient to regard these as falling in the second, intensity measurement category. For the purposes of this book we are almost entirely concerned with instruments in this second category. Image-formation instruments are well described in familiar textbooks such as R. S. Longhurst’s Geometrical and Physical Optics. The development of optical fibers and light guides has enormously broadened the scope of optical techniques. Rather than take wires to some remote instrument to obtain a meaningful signal from it, we can often now take an optical fiber, an obvious advantage where rapid response times are involved or in hazardous environments. In all branches of technology, it is quite easy to make a fool of oneself if one has no previous experience of the particular techniques involved. One purpose of this book is to help the nonspecialist to find out what is and what is not possible. The author would like to pass on one tip to people new to optical techniques: When we consider what happens in an optical system, we must consider what happens in the whole optical system; putting an optical system together is not quite like putting an electronic system together. Take, for example, a spectrophotometer, which consists of a light source, a
monochromator, a sample cell, and a detector. An alteration in the position of the lamp will affect the distribution of light on the detector and upset its operation, despite the fact that the detector is several units “down the line.” Optical systems must be thought of as a whole, not as a group of units acting in series. It should be noted that the words optical and light are used in a very loose sense when applied to instruments. Strictly, these terms should only refer to radiation within the visible spectrum, that is, the wavelength range 380–770 nm. The techniques used often serve equally well in the near-ultraviolet and near-infrared regions of the spectrum, and naturally we use the same terms. It is quite usual to hear people talking about ultraviolet light when they mean ultraviolet radiation, and many “optical” fibers are used with infrared radiation.
28.2 Light sources Light sources for use in instruments may be grouped conveniently under two headings: (1) conventional or incoherent sources and (2) laser or coherent sources. Conventional sources are dealt with in Sections 28.2.1 to 28.2.3 and laser sources in Section 28.2.4. The principal characteristics of a conventional light source for use in instruments are: 1. Spectral power distribution. 2. Luminance or radiance. 3. Stability of light output. 4. Ease of control of light output. 5. Stability of position. Other factors that may have to be taken into account in the design of an instrument system are heat dissipation, the nature of auxiliary equipment needed, source lifetime, cost, and ease of replacement. By the radiance of a light source, we mean the amount of energy per unit solid angle radiated from a unit area of it. Very often this quantity is much more important than the actual power of the lamp. For example,
if a xenon arc is to be used with a spectroscopic system, the quantity of interest is the amount of light that can be got through the slit of the spectrometer. A low-power lamp with a high radiance can be focused down to give a small, intense image at the slit and thus get a lot of light through; but if the lamp has a low radiance, it does not matter how powerful it is, it cannot be refocused to pass the same amount of radiation through the system. It can be easily shown in this case that the radiance is the only effective parameter of the source. If light output in the visible region only is concerned, luminance is sometimes used instead of radiance. There is a strict parallel between units and definitions of “radiant” quantities and “luminous” quantities (BS Spec. 4727; IEC– CIE, International Lighting Vocabulary). The unit of light, the lumen, can be thought of as a unit of energy weighted with regard to wavelength according to its ability to produce a visible sensation of light (Walsh, 1958, p. 138).
Figure 28.1 Spectral power distribution of tungsten lamp.
28.2.1 Incandescent Lamps Incandescent sources are those in which light is generated by heating material electrically until it becomes white hot. Normally this material is tungsten, but if only infrared radiation is wanted it may be a ceramic material. In a tungsten lamp the heating is purely resistive, and the use of various filament diameters enables lamps to be made of similar power but different voltage ratings. The higher the voltage, the finer and more fragile is the filament. For instrument purposes, small and compact filaments giving the highest radiance are usually needed, so low-voltage lamps are often used. For lamps used as radiation standards it is customary to use a solid tungsten ribbon as a filament, but these require massive currents at low voltage (for example, 18 A, 6 V). The spectral power distribution of a tungsten lamp corresponds closely to that of a Planckian radiator, as shown in Figure 28.1, and the enormous preponderance of red energy will be noted. Tungsten lamps have a high radiance, are very stable in light output provided that the input power is stabilized, and are perfectly stable in position. The light output can be precisely controlled by varying the input power from zero to maximum, but because the filament has a large thermal mass, there is no possibility of deliberately modulating the light output. If a lamp is run on an AC supply at mains frequency, some modulation at twice that frequency invariably occurs and may cause trouble if other parts of the instrument system use mains–frequency modulation. The modulation is less marked with low-voltage lamps, which have more massive filaments, but can only be overcome by using either a smoothed dc supply or a high-frequency power supply (10 kHz). The main drawback to tungsten lamps is the limited life. The life depends on the voltage (see Figure 28.2), and it is common practice in instrument work to under-run lamps to get a longer life. Longer lamp lives are obtained with tungsten halogen lamps. These have a small amount of a halogen—usually
Figure 28.2 Variation with voltage of life and light output of a tungsten lamp (after Henderson and Marsden).
bromine or iodine—in the envelope that retards the deterioration of the filament. It is necessary for the wall temperature of the bulb to be at least 300°C, which entails the use of a small bulb made of quartz. However, this allows a small amount of ultraviolet radiation to escape, and with its small size and long life, the tungsten halogen lamp is a very attractive light source for instrument purposes.
28.2.1.1 Notes on Handling and Use After lengthy use troubles can arise with lamp holders, usually with contact springs weakening. Screw caps are preferable to bayonet caps. The envelopes of tungsten halogen lamps should not be touched by hand—doing so will leave grease on them, which will burn into the quartz when hot and ruin the surface.
28.2.2 Discharge Lamps Discharge lamps are those in which light is produced by the passage of a current through a gas or vapor, hence producing mostly line spectra. Enclosed arcs of this kind have negative temperature/resistance characteristics, so current-limiting devices are necessary. Inductors are often used if the lamp is to be run on an ac mains supply. Many types of lamp are
Figure 28.3 Typical spectral power distribution for a deuterium lamp.
available (Henderson and Marsden, 1972); some commonly met with in instrument work are mentioned here.
28.2.2.1 Deuterium Lamps The radiation is generated by a low current density discharge in deuterium. Besides the visible line spectrum, a continuous spectrum (see Figure 28.3) is produced in the ultraviolet. The radiance is not high, but provided that the input power is stabilized, these lamps are stable in both output and position. To obtain the necessary ultraviolet transmission, either the whole envelope is made of silica or a silica window is used. These lamps are used as sources in ultraviolet spectrophotometers and are superior to tungsten lamps for that purpose at wavelengths below 330 nm.
28.2.2.2 Compact Source Lamps

Some types of discharge lamp offer light sources of extremely high luminance. These involve discharges of high current density in a gas at high pressure. The lamp-filling gases may be xenon, mercury, mercury plus iodine, or a variety of other “cocktails.” The light emitted is basically a line spectrum plus some continuous radiation, but in many cases the spectrum is, in effect, continuous. The spectrum of xenon, for example, has over 4,000 lines and extends well down into the ultraviolet region. Xenon lamps are widely used in spectrofluorimeters and other instruments for which a very intense source of ultraviolet radiation is required. Many lamps of this kind require elaborate starting arrangements and are particularly difficult to restart if switched off when hot. “Igniter” circuits are available, but since these involve voltages up to 50 kV, special attention should be given to the wiring involved. All these lamps contain gas under high pressure, even when cold, and the maker’s safety instructions should be rigidly followed. Xenon arcs produce quite dangerous amounts of ultraviolet radiation—dangerous both in itself and in the ozone that is produced in the atmosphere. They must never be used unshielded, and to comply with the Occupational Safety and Health Administration (OSHA) rules, interlocks should be arranged so that the lamp is switched off if the instrument case is opened. Force-ducted ventilation should be used with the larger sizes, unless “ozone-free” lamps are used. These are lamps with envelopes that do not transmit the shorter ultraviolet wavelengths.
28.2.3 Electronic Sources: Light-emitting Diodes

By applying currents to suitably doped semiconductor junctions, it is possible to produce a small amount of light. The luminance is very low, but the light output can be modulated, by modulating the current, up to very high frequencies, and thus the light-emitting diode (LED) is a good source for a fiber optic communication system. LEDs are also commonly used in display systems. The spectral power distribution depends on the materials used in the junction. For electro-optical communication links, there is no need to keep to the visible spectrum, and wavelengths just longer than visible (for example, 850 nm) are often used. There are no serious operating problems and only low voltages are needed, but the light output is minuscule compared with, say, a tungsten lamp.
28.2.4 Lasers Light from lasers differs from that from conventional sources by virtue of being coherent, whereas conventional sources produce incoherent light. In an incoherent beam, there is no continuous phase relationship between light at one point of the beam and any other. The energy associated with any one quantum or wave packet can be shown to extend over a finite length—somewhere around 50 cm—as it travels through space. For that reason no interference effects can be observed if the beam is divided and subsequently superimposed if the path difference is longer than 50 cm or so. The same effect is responsible for the fact that monochromatic light from a conventional source in fact has a measurable bandwidth. However, in a laser, the light is produced not from single events occurring randomly within single atoms but from synchronized events within a large number of atoms—hence the “finite length of wave train” is not half a meter but can be an immense distance. Consequently, laser light is much more strictly monochromatic than that from conventional sources; it is also very intense and is almost exactly unidirectional. Thus it is easy to focus a laser beam down to a very small spot at which an enormous density of energy can be achieved. Lasers are valuable in applications in which (1) the extended length of wave train is used (for example, holography and surveying), (2) a high energy density is needed (for example, cutting of sheet metal, ophthalmic surgery), and (3) the narrowness of the beam is used (for example, optical alignment techniques in engineering or building construction). The operating principle of the laser is the stimulated emission of radiation. In any normal gas the number of electrons in atoms in each of the possible energy levels is determined by the temperature and other physical factors. In a laser this normal distribution is deliberately upset so as to overpopulate one of the higher levels. The excited atoms then not only
release their excess energy as radiation, they do so in phase, so the emissions from vast numbers of atoms are combined in a single wave train. Lasing action can also be produced in solid and liquid systems; hundreds of atomic systems are now known that can be used in lasers, so a wide range of types is available for either continuous or pulsed operation. A simple explanation of the principle is given in Heavens (1971) and numerous specialist textbooks (Dudley, 1976; Koechner, 1976; Mooradian et al., 1976). Although lasers are available in many types and powers, by far the most commonly used in laboratory work is the helium–neon laser operating at a wavelength of 632.8 nm. The power output is usually a few milliwatts. For applications in which a high-energy density is needed (for example, metal cutting), CO2 lasers are often used. Their wavelength is in the infrared range (about 10.6 μm), and the output power may be up to 500 W. Their advantage in industrial work is their relatively high efficiency—about 10 percent of the input power appears as output power. For other wavelengths in the visible region, krypton, argon, or “tuneable dye” lasers can be used. The krypton and argon types can be made to operate at a variety of fixed wavelengths. Tuneable dye lasers use a liquid system involving organic dyes. In such systems the operating frequency can be altered within a limited range by altering the optical geometry of the system.
28.2.4.1 Laser Safety Lasers are, by their nature, dangerous. The foremost risk is that of damage to eyesight caused by burning of the retina. Even a moment’s exposure may be catastrophic. Consequently, safety precautions must be taken and strictly maintained. It is an OSHA requirement that all due precautions are taken. In practice this means that, among other things, all rooms in which lasers are used must be clearly marked with approved warning notices. The best precautions are to use the lowest laser powers that are possible and to design equipment using lasers to be totally enclosed. A full description of safety requirements is given in Standards for the Safe Use of Lasers, published by the American National Standards Institute, which should be read and studied before any work with lasers is started. Useful guidance may also be obtained from BS 4803 and Safety in Universities: Notes for Guidance.
28.3 Detectors

The essential characteristics of a radiation detector are:
1. The spectral sensitivity distribution.
2. The response time.
3. The sensitivity.
4. The smallest amount of radiation that it can detect.
5. The size and shape of its effective surface.
6. Its stability over a period of time.
Other factors to be borne in mind in choosing a detector for an instrument are the precision and linearity of response, physical size, robustness, and the extent of auxiliary equipment needed. Detectors can be used in three ways: 1. Those in which the detector is used to effect an actual measurement of light intensity. 2. Those in which it is used to judge for equality of intensity between two beams. 3. Those in which it is required to establish the presence or absence of light. In case (1), there needs to be an accurately linear relationship between the response and the intensity of radiation incident on the detector. Many detectors do not have this property, and this fact may determine the design of an instrument; for example, many infrared spectrophotometers have elaborate optical arrangements for matching the intensity in order that a nonlinear detector can be used. It should be noted that in almost all detectors the sensitivity varies markedly from point to point on the operative surface. Consequently, if the light beam moves with respect to the detector, the response will be altered. This effect is of critical importance in spectrophotometers and like instruments; if a solution cell with faces that are not perfectly flat and parallel is put into the beam, it will act as a prism, move the beam on the detector, and produce an erroneous result. Precautions should be taken against this effect by ensuring that imperfect cells are not used. Some detectors are sensitive to the direction of polarization of incident light. Although in most optical instruments the light is randomly polarized, the distribution of intensities between the polarization directions is by no means uniform, especially after passage through monochromators. This effect often causes no trouble, but it can be an extremely abstruse source of error.
28.3.1 Photomultipliers

Photomultipliers rely on the photoemissive effect. The light is made to fall on an emitting surface (the photocathode; see Figure 28.4) with a very low work function within a vacuum tube and causes the release of electrons. These electrons are attracted to a second electrode at a strongly positive voltage, where each causes the emission of several secondary electrons, which are attracted to a third electrode, and so on. By repeating this process at a series of “dynodes,” the original electron stream is greatly multiplied and results in a current of up to 100 μA or so at the final anode. The spectral sensitivity is determined by the nature of the photocathode layer. The actual materials need not concern us here; different types of cathodes are referred to by a series of numbers from S1 upward. The response is linear, the response time rapid, and the sensitivity large. The sensitivity may be widely varied by varying the voltage applied to the dynode chain. Thus

\[ S \propto V^{n/2} \]
Figure 28.4 Construction of photomultiplier and typical circuit.
where S = sensitivity, V = voltage applied, and n = number of dynode stages. Conversely, where accurate measurements are needed, the dynode chain voltage must be held extremely stable, since between 8 and 14 stages are used. A variety of cathode shapes and multiplier configurations are available. There is no point in having a cathode of a much larger area than that actually to be used, because this will add unnecessarily to the noise. Emission from the photocathode also occurs as a result of thermionic emission, which produces a permanent “dark current.” It is random variations in this dark current— noise—that limit the ultimate sensitivity. Photomultipliers are also discussed in Chapter 29, and excellent information on their use is given in the makers’ catalogues, to which readers are referred. It should be noted that photomultipliers need very stable high-voltage power supplies, are fragile, and are easily damaged by overloads. When they are used in instruments, it is essential that interlocks are provided to remove the dynode voltage before any part of the case is opened. Moreover, photomultipliers must never be exposed to sunlight, even when disconnected.
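The steepness of the \(S \propto V^{n/2}\) relationship is worth working through. The sketch below evaluates the fractional sensitivity change for a small supply change, taking an 11-stage tube as an assumed example.

```python
# Minimal sketch: photomultiplier sensitivity vs. dynode-chain voltage,
# using S proportional to V**(n/2) from the text.

def sensitivity_ratio(v_new, v_old, n_stages):
    return (v_new / v_old) ** (n_stages / 2.0)

# An 11-stage tube: a 1% rise in supply voltage raises the sensitivity
# by about 5.6%, which is why the supply must be held extremely stable.
print(f"{(sensitivity_ratio(1.01, 1.00, 11) - 1) * 100:.1f} %")
```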
28.3.2 Photovoltaic and Photoconductive Detectors (Photodiodes)

When light falls on a semiconductor junction, there is nearly always some effect on the electrical behavior of that junction, and such effects can be made use of in light detectors. There are two main categories: (1) those in which the action of the light is used to generate an EMF, and (2) those in which the action of light is used to effectively alter the resistance of the device. Those of the first type are referred to as photovoltaic detectors—sometimes called solar cells; those of the second type are called photoconductive detectors. There are some materials that show photoconductive effects but that are not strictly semiconductors (for example, lead sulfide), but devices using these are included in the category of photoconductive detectors. In photovoltaic detectors the energy of the light is actually converted into electrical energy, often to such effect that no external energy source is needed to make a measurement; the solar cell uses this principle. The more sensitive photovoltaic detectors need an external power source, as do all photoconductive detectors.

28.3.2.1 Simple Photovoltaic Detectors
In many applications there is sufficient light available to use a photovoltaic cell as a self-powered device. This might be thought surprising, but it is perfectly feasible. If we consider a detector of 15 mm diameter, illuminated to a level of 150 lux (average office lighting), the radiant power received in the visible spectrum will be about 100 μW. If the conversion efficiency is only 5 percent, that gives an output power of 5 μW, and if that is fed to a galvo or moving coil meter of 50 Ω resistance, a current of around 300 μA will be obtained—more than enough for measurement purposes. Detectors of this type were in use for many years before the advent of the semiconductor era in the 1950s and were then called rectifier cells or barrier-layer cells. The simplest type (see Figure 28.5) consists of a steel plate on which a layer of selenium is deposited. A thin transparent film of gold is deposited on top of that to serve as an electrode. In earlier models an annular contact electrode was sputtered on to facilitate contact with the gold film. Nowadays a totally encapsulated construction is used for cells of up to 30 mm diameter. The semiconductor action takes place at the steel–selenium junction; under the action of light there is a buildup of electrons in the selenium, and if an external circuit is made via the gold electrode, a current will flow. If there is zero resistance in that external circuit, the current will be proportional to the light intensity. Normally there will be some resistance, and the EMF developed in it by the current will oppose the current-generation process. This means that the response will not be linear with light intensity (see Figure 28.6). If the external resistance is quite high (for example, 1,000 Ω), the response will approximate to a logarithmic one, and it is this feature that enables this type of detector to be used over a very wide
Figure 28.5 Simple photovoltaic detector.
Figure 28.6 Nonlinear response of photovoltaic detectors.
range of intensities in the photographic exposure meter and various daylight recording instruments. A circuit known as the Campbell–Freeth circuit was devised to present zero resistance to the detector (Thewlis, 1961) so as to obtain a truly linear response. It is not widely used in current practice, since where high accuracy is sought a photovoltaic detector is unlikely to be used. Detectors of this variety, based on steel plates, are perhaps best described as “cheap and cheerful.” The accuracy is not high—1 or 2 percent at best; they are not particularly stable with temperature changes, and they show marked fatigue effects. They are also extremely noisy and have a relatively long time response compared with more modern versions. However, they are cheap and robust, may be fabricated into a variety of shapes, and have the outstanding advantage that no external power supply is required. Consequently, they are widely used in exposure meters, street-lighting controls, photometers, “abridged” spectrophotometers and colorimeters, flame photometers, simple densitometers, and so on—all those cases in which high accuracy is not needed but price is an important consideration. The spectral response of this type of detector is broader than that of the human eye, but a filter can be used to produce a spectral response that is a good enough match for most photometric purposes. Such filters are usually encapsulated with the detectors into a single unit.
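The self-powered estimate above is easy to verify. In this sketch the 100 μW radiant-power figure and the 5 percent efficiency are taken from the worked example in the text; the meter is modelled simply as a 50 Ω load.

```python
import math

# Minimal sketch: checking the self-powered photovoltaic estimate.

diameter = 15e-3                          # m, detector diameter
area = math.pi * (diameter / 2) ** 2      # collecting area, ~1.8e-4 m^2
flux = 150.0 * area                       # luminous flux at 150 lux, ~0.027 lm

radiant_power = 100e-6                    # W in the visible (per the text)
electrical_power = 0.05 * radiant_power   # 5% conversion -> 5 uW
resistance = 50.0                         # ohm, moving-coil meter

current = math.sqrt(electrical_power / resistance)  # from P = I^2 R
print(f"{flux:.3f} lm -> {current * 1e6:.0f} uA")   # ~316 uA
```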
28.3.2.2 The Silicon Diode

The advent of doped semiconductor materials has enabled many other “barrier-layer” systems to be used. One such is the diffused silicon photodiode; in this case the barrier layer is provided by a p–n junction near the surface of a silicon layer. This is very much less noisy than the detector described previously and can thus be used for much lower light levels. External amplification is needed, but rapid response can be obtained (50 ns in some cases), so that the silicon diode is not far short of the performance obtained from the photomultiplier, with the advantages of a wider spectral range, less bulk, and much less complexity.

28.3.2.3 Photoconductive Detectors
Almost all semiconductor materials are light sensitive, since light falling on them produces an increased number of current carriers and hence an increase in conductivity; semiconductor devices have to be protected from light to prevent the ambient lighting upsetting their operation. Consequently, it is possible to make a wide variety of light-sensitive devices using this principle. These devices may be in the form of semiconductor diodes or triodes. In the former case the material is usually deposited as a thin film on a glass plate with electrodes attached; under the action of light the resistance between the electrodes drops markedly, usually in a nonlinear fashion. Since there are dozens of semiconductor materials available, these devices can be made with many different spectral responses, covering the visible and near-infrared spectrum up to 5 μm or so. The response time is also dependent on the material, and though many are fast, some are very slow—notably cadmium sulfide, which has a response time of about 1 s. Photoconductive detectors of the triode type are, in effect, junction transistors that are exposed to light.* They offer in-built amplification but again usually produce a nonlinear signal. Devices are now available in which a silicon diode is combined with an amplifier within a standard transistor housing. All photoconductive devices are temperature sensitive, and most drift quite badly with small changes of temperature. For that reason they are used with “chopped” radiation (see Section 28.4.3 on detector techniques) in all critical applications. They are commonly used as detectors in spectrophotometers in the near-infrared range. A common technique is to use them as null-balance detectors comparing the sample and reference beams (see Section 28.6.1 on spectrophotometers) so that a nonlinear response does not matter.

*The joke is often made that the light-detecting properties of one famous type of phototransistor were discovered only when one batch of transistors was accidentally left unpainted. The author regrets that he cannot confirm this story.
28.3.3 Pyroelectric Detectors

Although they are strictly “thermal” detectors, pyroelectric detectors are used very widely as light detectors. They rely on the use of materials that have the property of temperature-dependent spontaneous electric polarization. These may be in the form of crystals, ceramics, or thin plastic films. When radiation falls on such a material, thermal expansion takes place, minutely altering the lattice spacing in the crystal, which alters the electrical polarization and results in an
EMF and a charge being developed between two faces of the crystal. These faces are equipped with electrodes to provide connection to an external circuit and to an appropriate amplifier. Pyroelectric detectors using this principle are extremely sensitive, and, being basically thermal detectors, they are sensitive to an enormous range of wavelengths. In practice the wavelength range is limited by the transmission of windows used, the absorption characteristics of the pyroelectric material, and the reflection characteristics of its surfaces. The last-mentioned quality can be adjusted by the deposition of thin films so that relatively wide spectral responses may be obtained; alternatively, the sensitivity to a particular wavelength can be considerably enhanced. It is important to remember that pyroelectric detectors respond in effect only to changes in the radiation falling on them and not to steady-state radiation. Their response speed is extremely fast, but because they inherently have some capacity, there is a compromise between sensitivity and response speed that can be determined by an appropriate choice of load resistor. The ability to respond only to changes in the radiation field enables them to be used for laser pulse measurements, and at a more mundane level they make excellent sensors for burglar alarms. They are usually made in quite small sizes (for example, 1 × 2 mm), but they can be made up to 1 cm in diameter with a lower response speed. They can also be made in the form of linear and two-dimensional arrays. If they are required to measure steady-state radiation, as in a spectrophotometer, the usual technique of beam chopping (see Section 28.4.3) can be used; the pyroelectric detector will then respond only to the chopped beam and nothing else. A variety of materials are used, notably lithium tantalate and doped lead zirconate titanate. The pyroelectric detector, with its wide spectral range, fast response, and relatively low cost, probably offers more scope for further development and application than any other and will be seen in a very wide range of applications in the next few years.
28.3.4 Array Detectors The devices described thus far are suitable for making a single measurement of the intensity of a beam of light at any one instant. However, often the need arises to measure the intensity of many points in an optical image, as in a television camera. In television camera tubes the image is formed on a photocathode resembling that of a photomultiplier, which is scanned by an electron beam. Such tubes are outside the scope of this book, but they are necessarily expensive and require much supporting equipment. In recent years array detectors using semiconductor principles have been developed, enabling measurements to be made simultaneously at many points along a line or, in some cases, over a whole area comprising many thousands
of image points. All these devices are based on integrated-circuit technology, and they fall into three main categories:
1. Photodiode arrays.
2. Charge-coupled devices.
3. Charge-injection devices.
It is not possible to go into their operation in detail here, but more information can be found in the review article by Fry (1975) and in the book by Beynon (1979).
1. A photodiode array consists of an array of photodiodes of microscopic dimensions, each capable of being coupled to a signal line in turn through an associated transistor circuit adjacent to it on the chip. The technique used is to charge all the photodiode elements equally; on exposure to the image, discharging takes place, those elements at the points of highest light intensity losing the most charge. The array is read by connecting each element in turn to the signal line and measuring the amount of charge needed to restore each element to the original charge potential. This can be carried out at the speeds normally associated with integrated circuits, the scan time and repetition rate depending on the number of elements involved—commonly one or two thousand. It should be noted that since all array detectors are charge-dependent devices, they are in effect time integrating over the interval between successive readouts.
2. Charge-coupled devices (CCDs) consist of an array of electrodes deposited on a substrate, so that each electrode forms part of a metal-oxide-semiconductor device. By appropriate voltage biasing of electrodes to the substrate it is possible to generate a potential well under each electrode. Furthermore, by manipulating the bias voltages on adjacent electrodes, it is possible to transfer the charge in any one potential well to the next, and with appropriate circuitry at the end of the array the original charge in each well may be read in turn. These devices were originally developed as delay lines for computers, but since all semiconductor processes are affected by incident light, they make serviceable array detectors. The image is allowed to fall on the array, and charges will develop at the points of high intensity. These are held in the potential wells until the reading process is initiated at the desired interval.
3. Charge-injection devices (CIDs) use a similar principle but one based on coupled pairs of potential wells rather than a continuous series. Each pair is addressed by using an X–Y coincident voltage technique. Light is measured by allowing photogenerated charges to build up on each pair and sensing the change as the potential well fills. Once reading has been completed, this charge is injected into the substrate by removing the bias from both electrodes in each pair simultaneously.
Recent commercial developments have made photodiode arrays practical, off-the-shelf components. One vendor, Hamamatsu, offers silicon photodiode arrays consisting of multiple photodiode elements, formed
in a linear or matrix arrangement in one package. Some arrays are supplied coupled with a CMOS multiplexer. The multiplexer simplifies design and reduces the cost of the output electronic circuit. Hamamatsu’s silicon photodiode arrays are used in a wide range of applications such as laser beam position detection, color measurement, and spectrophotometry. CCD sensors are now widely used in cameras. The recent NASA Kepler spacecraft, for example, has a CCD camera with a 0.95-meter aperture, wide field-of-view Schmidt telescope, and a 1.4-meter primary mirror. With more than 95 megapixels, Kepler’s focal plane array of 42 backside-illuminated CCD90s from e2v Technologies forms the largest array of CCDs ever launched into space by NASA. New CCDs are ideal for applications in low light level and high-speed industrial inspection, particularly where UV and NIR sensitivity are required. By the 1990s, as CMOS sensors were gaining popularity, CIDs were adapted for applications demanding high dynamic range and superior antiblooming performance (Bhaskaran et al.). CID-based cameras have found their niche in applications requiring extreme radiation tolerance and high-dynamic-range scientific imaging. CID imagers have progressed from passive pixel designs using proprietary silicon processes to active pixel devices using conventional CMOS processing. Scientific cameras utilizing active pixel CID sensors have achieved a factor of 7 improvement in read noise (30 electrons rms versus 225 electrons rms) at vastly increased pixel frequencies (2.1 MHz versus 50 kHz) compared with passive pixel devices. Radiation-hardened video cameras employing active pixel CIDs are the enabling technology behind the world’s only solid-state radiation-hardened color camera, which is tolerant of total ionizing radiation doses of more than 5 megarad. Performance-based CID imaging concentrates on leveraging the advantages that CIDs provide for demanding applications.
28.4 Detector techniques In nearly all optical instruments that require a measurement of light intensity, we rely on the detector producing a signal (usually a current) that is accurately proportional to the light intensity. However, all detectors, by their nature, pass some current when in total darkness, and we have to differentiate between the signal that is due to “dark current” and that due to “dark current plus light current.” A further problem arises if we try to measure very small light currents or larger light currents very accurately. In all detectors there are small random variations of the dark current—that is, noise—and it is this noise that limits the ultimate sensitivity of any detector. The differentiation between “dark and dark plus light” signals is achieved by taking the difference of the signals when the light beam falls on the detector and when it is obscured. In some manually operated instruments, two
Figure 28.7 Chopper disc.
settings are made, one with and one without a shutter in the beam. In most instruments a chopper disc (see Figure 28.7) is made to rotate in the light beam so that the beam is interrupted at a regular frequency and the signal is observed by AC circuits that do not respond to the continuous dark current. This technique is called beam chopping. The effects of noise can be reduced in three ways: 1. Prolonging the time constant of the detector circuitry. 2. Cooling the detector. 3. Using synchronized techniques.
28.4.1 Detector Circuit Time Constants Dark current noise is due to random events on the atomic scale in the detector. Hence if the time constant of the detector circuit is made sufficiently long, the variations in the output current will be smoothed out, and it will be of a correct average value. The best choice of time constant will depend on the circumstances of any application, but clearly long time constants are not acceptable in many cases. Even in manually read instruments, a time constant as long as 1 s will be irritating to the observer.
28.4.2 Detector Cooling

It is not possible to go into a full discussion of detector noise here, but in many detectors the largest contribution to the noise is that produced by thermionic emission within the detector. Thermionic emission from metals follows Langmuir’s law,

\[ i \propto e^{3T/2} \]

where T is absolute temperature and i represents the emission current. Room temperature is around 295 K in terms of absolute temperature, so a significant reduction in emission current and noise can be achieved by cooling the detector.
Detectors may be cooled by the use of liquid nitrogen (77 K, –196°C), solid CO2 (195 K, –79°C), or by Peltier effect cooling devices. The use of liquid nitrogen or solid CO2 is cumbersome; it usually greatly increases the complexity of the apparatus and requires recharging. Another problem is that of moisture from the air condensing or freezing on adjacent surfaces. Peltier-effect cooling usually prevents these problems but does not achieve such low temperatures. Unless special circumstances demand it, detector cooling is less attractive than other methods of noise reduction.
28.4.3 Beam Chopping and Phase-Sensitive Detection Random noise may be thought of as a mixture of a large number of signals, all of different frequencies. If a beam chopper is used and running at a fixed frequency and the detector circuitry is made to respond preferentially to that frequency, a considerable reduction in noise can be achieved. Although this can be effected by the use of tuned circuits in the detector circuitry, it is not very reliable, since the chopper speed usually cannot be held precisely constant. A much better technique is to use a phase sensitive detector, phase locked to the chopper disc by means of a separate pickup. Such a system is illustrated in Figure 28.8. The light beam to be measured is interrupted by a chopper disc, A, and the detector signal (see Figure 28.9) is passed to a phase-
sensitive detector. The gating of the detector is controlled by a separate pickup, B, and the phase relationship between A and B is adjusted by moving B until the desired synchronization is achieved (see Figure 28.10); a double-beam oscilloscope is useful here. The chopper speed should be kept as uniform as possible. Except when thermal detectors are used, the chopping frequency is usually in the range 300–10,000 Hz. The phase-locking pickup may be capacitative or optical, the latter being greatly preferable. Care must be taken that the chopper disc is not exposed to room light, or this may be reflected from the back of the blades into the detector and falsely recorded as a “dark” signal. It should be noted that because the beam to be measured has finite width, it is not possible to “chop square.” By making the light pulses resemble the shape and phase of the gating pulses, a very effective improvement in signal-to-noise ratio can be obtained—usually at least 1,000:1. This technique is widely used, but stroboscopic trouble can occur when pulsating light sources such as fluorescent lamps or CRT screens are involved; in such cases the boxcar system can be used.
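A toy simulation makes the noise rejection concrete. The chopping frequency, sample rate, and noise level below are arbitrary; the point is that multiplying by a reference gate synchronized to the chopper and averaging recovers a signal buried well below the noise.

```python
import math
import random

# Minimal sketch: phase-sensitive detection of a chopped light signal.
# The "light" contributes 0.01 units when the chopper is open; Gaussian
# noise 50x larger is added. Gating with a synchronous +/-1 reference
# and averaging recovers the signal while the noise averages out.

F_CHOP = 1000.0          # chopper frequency, Hz
F_SAMPLE = 100_000.0     # sample rate, Hz
N = 1_000_000            # 10 s of data

SIGNAL = 0.01
acc = 0.0
for i in range(N):
    chopper_open = math.sin(2 * math.pi * F_CHOP * i / F_SAMPLE) > 0
    detector = (SIGNAL if chopper_open else 0.0) + random.gauss(0.0, 0.5)
    reference = 1.0 if chopper_open else -1.0   # from the pickup
    acc += detector * reference
print(f"recovered {acc / N:.4f}  (expected ~{SIGNAL / 2:.4f})")
```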
28.4.4 The Boxcar Detector A further improvement in signal-to-noise ratio can be obtained with the boxcar system. In some ways this is similar to the phase-sensitive detector system, but instead of the chopping being done optically, it is carried out electronically. When this method is used with a pulsating source (for example, a fluorescent lamp), the detector signal is sampled at intervals and phase-locked with the source, the phase locking being provided from the power supply feeding the
Figure 28.8 Beam chopper used with a phase-sensitive detector.
Figure 28.9 Typical output from a beam-chopped detector.
Figure 28.10 Input signals to phase-sensitive detector: (a) signal from detector; (b) gating signal derived from pickup, incorrectly phased; (c) detector and gating signals, correctly phased.
source (see Figure 28.11). The sampling period is made very narrow, and the position of the sampling point within the phase is adjusted by a delay circuit (see Figure 28.12). It is necessary to establish the dark current in a separate experiment. If a steady source is to be measured, an oscillator is used to provide the “phase” signal. This system both offers a considerable improvement in noise performance and enables us to study the variation of light output of pulsating sources within a phase. Although more expensive than the beam chopping-PSD system, it is very much more convenient to use, since it eliminates all
Figure 28.11 Boxcar detector system applied to a pulsating source.
the mechanical and optical problems associated with the chopper. (The American term boxcar arises from the fact that the gating signal, seen upside down on a CRT, resembles a train of boxcars.)
28.4.5 Photon Counting

When measurement of a very weak light with a photomultiplier is required, the technique of photon counting may be used. In a photomultiplier, as each electron leaves the photocathode, it gives rise to an avalanche of electrons at the anode of very short duration (see Section 28.3.1). These brief bursts of output current can be seen with the help of a fast CRT. When very weak light falls on the cathode, these bursts can be counted over a given period of time, giving a measure of the light intensity. Although extremely sensitive, this system is by no means easy to use. It is necessary to discriminate between pulses due to photoelectrons leaving the cathode and those that have originated from spurious events in the dynode chain; this is done with a pulse-height discriminator, which has to be very carefully adjusted. Simultaneous double pulses are also a problem. Another problem in practice is the matching of photon counting with electrometer (i.e., normal) operation. It also has to be remembered that it is a counting and not a measuring operation, and since random statistics apply, if it is required to obtain an accuracy of ±1 percent on a single run, then at least 10,000 counts must be made. This technique is regularly used successfully in what might be termed research laboratory instrumentation—Raman spectrographs and the like—but it is difficult to set up, and its use can really only be recommended for those cases where the requirement for extreme sensitivity demands it.
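The 10,000-count figure follows directly from Poisson statistics, where the relative uncertainty of N counts is 1/√N; the sketch below simply inverts that relation.

```python
# Minimal sketch: photon-counting statistics. Photon arrivals are
# Poisson-distributed, so N counts carry a relative uncertainty of
# 1/sqrt(N); inverting gives the counts needed for a target accuracy.

def counts_needed(relative_accuracy):
    return round(1.0 / relative_accuracy ** 2)

print(counts_needed(0.01))    # 10,000 counts for +/-1 percent
print(counts_needed(0.001))   # 1,000,000 counts for +/-0.1 percent
```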
28.5 Intensity measurement
Figure 28.12 Signals used in boxcar detector system: (a) signal from detector produced by pulsating source (e.g., CRT); (b) synchronizing signal derived from source; (c) gating signal.
The term light-intensity measurement can be used to refer to a large range of different styles of measurement. These can be categorized loosely into (1) those where the spectral sensitivity of the detector is used unmodified and (2) those where the spectral sensitivity of the detector is modified deliberately to match some defined response curve. Very often we are concerned with the comparison of light intensities, where neither the spectral power distribution of the light falling on the detector nor its geometrical distribution with respect to the detector will change. Such comparative measurements are clearly in category (1), and any appropriate detector can be used. However, if, for example, we are concerned with the purposes of lighting engineering where we have to accurately measure “illuminance” or “luminance” with sources of any spectral or spatial distribution, we must use a photometer with a spectral response accurately matched to that of the human eye and a geometrical
response accurately following a cosine law; that is, a category (2) measurement. Some "measurements" in category (1) are only required to determine the presence or absence of light, as in a very large number of industrial photoelectric controls and counters, burglar alarms, street-lighting controls, and so on. The only critical points here are that the detector should be arranged so that it receives only light from the intended source and that it shall have sufficient response speed. Photodiodes and phototransistors are often used and are robust, reliable, and cheap. Cadmium sulfide photoconductive cells are often used in lighting controls, but their long response time (about 1 s) restricts their use in many other applications.
28.5.1 Photometers A typical photometer head for measurements of illuminance (the amount of visible light per square meter falling on a plane) is shown in Figure 28.13. Incident light falls on the opal glass cylinder A, and some reaches the detector surface C after passing through the filter layer B. The detector may be of either the photoconductive or photovoltaic type. The filter B is arranged to have a spectral transmission characteristic such that the detector-filter combination has a spectral sensitivity matching that of the human eye. In many instruments the filter layer is an integral part of the detector. The cosine response is achieved by careful design of the opal glass A in conjunction with the cylindrical protuberance D in its mounting. The reading is displayed on an appropriately calibrated meter connected to the head by a meter or two of thin cable, so that the operator does not "get in his own light" when making a reading.
Instruments of this kind are available at a wide range of prices, depending on the accuracy demanded; the best instruments of this type can achieve an accuracy of 1 or 2 percent of the illuminance of normal light sources. Better accuracy can be achieved with the use of a Dresler filter, which is built up from a mosaic of different filters rather than a single layer, but this is considerably more expensive. A widely used instrument is the Hagner photometer, which is a combined instrument for measuring both illuminances and luminances. The “illuminance” part is as described previously, but the measuring head also incorporates a telescopic optical system focused on to a separate, internal detector. A beam divider enables the operator to look through the telescope and point the instrument at the surface whose luminance is required; the internal detector output is indicated on a meter, also arranged to be within the field of view. The use of silicon detectors in photometers offers the possibility of measuring radiation of wavelengths above those of the visible spectrum, which is usually regarded as extending from 380 to 770 nm. Many silicon detectors will operate at wavelengths up to 1170 nm. In view of the interest in the 700–1100 nm region for the purposes of fiber optic communications, a variety of dual-function “photometer/ radiometer” instruments are available. Basically these are photometers that are equipped with two interchangeable sets of filters: (1) to modify the spectral responsivity of the detector to match the spectral response of the human eye and (2) to produce a flat spectral response so that the instrument responds equally to radiation of all wavelengths and thus produces a reading of radiant power. In practice this “flat” region cannot extend above 1,170 nm, and the use of the phrase radiometer is misleading because that implies an instrument capable of handling all wavelengths; traditional radiometers operate over very much wider wavelength ranges. In construction, these instruments resemble photometers, as described, except that the external detector head has to accommodate the interchangeable filters and the instrument has to have dual calibration. These instruments are usually restricted to the measurement of illuminance and irradiance, unlike the Hagner photometer, which can measure both luminance and illuminance. The effective wavelength operating range claimed in the radiometer mode is usually 320–1,100 nm.
28.5.2 Ultraviolet Intensity Measurements
Figure 28.13 Cosine response photometer head. A: opal glass cylinder; B: filter; C: detector; D: light shield developed to produce cosine response; E: metal case.
In recent years, a great deal of interest has developed in the nonvisual effects of radiation on humans and animals, especially ultraviolet radiation. Several spectral response curves for photobiological effects (for example, erythema and photokeratitis) are now known (Steck, 1982). Ultraviolet photometers have been developed accordingly, using the same general principles as visible photometers but with appropriate detectors and filters. Photomultipliers are usually used as
detectors, but the choice of filter materials is restricted; to date, nothing like the exact correlation of visible response to the human eye has been achieved. An interesting development has been the introduction of ultraviolet film badges—on the lines of X-ray film badges— for monitoring exposure to ultraviolet radiation. These use photochemical reactions rather than conventional detectors (Young et al., 1980).
28.5.3 Color-Temperature Meters The color of incandescent light sources can be specified in terms of their color temperature, that is, the temperature at which the spectral power distribution of a Planckian blackbody radiator most closely resembles that of the source concerned. (Note: This is not the same as the actual temperature.) Since the spectral power distribution of a Planckian radiator follows the law

Eλ = c1 λ⁻⁵ (e^(c2/λT) − 1)⁻¹
(see Section 28.8), it is possible to determine T by determining the ratio of the Eλ values at two wavelengths. In practice, because the Planckian distribution is quite smooth, broadband filters can be used. Many photometers (see Section 28.5.1) are also arranged to act as color-temperature meters, usually by the use of a movable shade arranged so that different areas of the detectors can be covered by red and blue filters; thus the photometer becomes a "red-to-blue ratio" measuring device. Devices of this kind work reasonably well with incandescent sources (which include sunlight and daylight) but will give meaningless results if presented to a fluorescent or discharge lamp.
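As an illustration of the two-wavelength principle, the following Python sketch inverts the Planck ratio for T by bisection; the 650/450 nm wavelength pair and the search bracket are illustrative assumptions, not values taken from any particular instrument:

import math

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_ratio(T: float, l1: float = 650e-9, l2: float = 450e-9) -> float:
    """Ratio of Planckian spectral power at l1 (red) to l2 (blue) at temperature T."""
    def e(l):
        return l ** -5 / (math.exp(C2 / (l * T)) - 1.0)
    return e(l1) / e(l2)

def color_temperature(ratio: float, lo: float = 1000.0, hi: float = 20000.0) -> float:
    """Bisect for the T whose red/blue ratio matches the measured ratio."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # the ratio decreases as T rises, since the blue content grows faster
        if planck_ratio(mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(color_temperature(planck_ratio(2856.0))))  # recovers ~2856 K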
28.6 Wavelength and color

28.6.1 Spectrophotometers Instruments that are used to measure the optical transmission or reflection characteristics of a sample over a range of wavelengths are termed spectrophotometers. This technique is widely used for analytical purposes in all branches of chemistry and is the physical basis of all measurement of color; thus it is of much interest in the consumer industries. Transmission measurements, usually on liquid samples, are the most common. All spectrophotometers contain four elements:

1. A source of radiation.
2. An optical system, or monochromator, to isolate a narrow band of wavelengths from the whole spectrum emitted by the source.
3. The sample (and its cell if it is liquid or gaseous).
4. A detector of radiation and its auxiliary equipment.

Note that theoretically it does not matter whether the light passes first through the monochromator and then the sample, or vice versa; the former is usual in visible or ultraviolet instruments, but for infrared work the latter arrangement offers some advantages. Spectrophotometers may be either single beam, in which the light beam takes a single fixed path and the measurements are effected by taking measurements with and without the sample present, or double beam, in which the light is made to pass through two paths, one containing the sample and the other a reference; the intensities are then compared. In work on chemical solutions the reference beam is usually passed through a cell identical to that of the sample containing the same solvent so that solvent and cell effects are cancelled out; in reflection work the reference sample is usually a standard white reflector. The single-beam technique is usual in manually operated instruments and the double-beam one in automatic instruments, nowadays the majority. Two main varieties of double-beam techniques are used (see Figure 28.14). That shown in Figure 28.14(a) relies for accuracy on the linearity of response of the detector and is sometimes called the linearity method. The light beam is made to follow alternate sample and reference paths, and the detector is used to measure the intensity of each in turn; the ratio gives the transmission or reflection factor at the wavelength involved. The other method, shown in Figure 28.14(b), is called the optical-null method. Here the intensity of the reference beam is reduced to equal that of the sample beam by some form of servo-controlled optical attenuator, and the detector is called on only to judge for equality between the two beams. The accuracy thus depends on the optical attenuator, which may take the form of a variable aperture or a system of polarizing prisms.
Figure 28.14 Double-beam techniques in spectrophotometry: (a) linearity method; (b) optical null method.
Since spectrophotometric results are nearly always used in extensive calculations and much data can easily be collected, spectrophotometers are sometimes equipped with microprocessors. These microprocessors are also commonly used to control a variety of automatic functions—for example, the wavelength-scanning mechanism, automatic sample changing, and so on. For chemical purposes it is nearly always the absorbance (i.e., optical density) of the sample rather than the transmission that is required:

A = log₁₀(1/T)
where A is the absorbance or optical density and T is the transmission. The relation between transmission and absorbance is shown in Table 28.1. Instruments for chemical work usually read in absorbance only. Wave number—the number of waves in one centimeter—is also used by chemists in preference to wavelength. The relation between the two is shown in Table 28.2. It is not economic to build all-purpose spectrophotometers in view of their use in widely different fields. The most common varieties are: 1. Transmission/absorbance, ultraviolet, and visible range (200–600 nm).
2. Transmission and reflection, near-ultraviolet, and visible (300–800 nm). 3. Transmission/absorbance, infrared (2.5–25 µm). The optical parts of a typical ultraviolet-visible instrument are shown in Figure 28.15. Light is taken either from a tungsten lamp A or deuterium lamp B by moving mirror C to the appropriate position. The beam is focused by a mirror on to the entrance slit E of the monochromator. One of a series of filters is inserted at F to exclude light of submultiples of the desired wavelength. The light is dispersed by the diffraction grating G, and a narrow band of wavelengths is selected by the exit slit K from the spectrum formed. The wavelength is changed by rotating the grating by a mechanism operated by a stepper motor. The beam is divided into two by an array of divided mirror elements at L, and images of the slit K are formed at R and S, the position of the reference and sample cells. A chopper disc driven by a synchronous motor M allows light to pass through only one beam at a time. Both beams are directed to T, a silica diffuser, so that the photomultiplier tube is presented alternately with sample and reference beams; the diffuser is necessary to overcome nonuniformities in the photomultiplier cathode (see Section 28.4).
TABLE 28.1 Relation between transmission and absorbance

Transmission (%)   Absorbance
100                0
50                 0.301
10                 1.0
5                  1.301
1                  2.0
0.1                3.0

TABLE 28.2 Relation between wavelength and wavenumber

Wavelength   Wavenumber (no. of waves/cm)
200 nm       50,000
400 nm       25,000
500 nm       20,000
1 µm         10,000
5 µm         2,000
10 µm        1,000
50 µm        200
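The conversions underlying Tables 28.1 and 28.2 take only a few lines of Python; this sketch simply reproduces two of the tabulated values:

import math

def absorbance(transmission_percent: float) -> float:
    """A = log10(1/T), with T expressed here as a percentage."""
    return math.log10(100.0 / transmission_percent)

def wavenumber_per_cm(wavelength_nm: float) -> float:
    """Wavenumber in cm^-1 for a wavelength given in nanometers (1 cm = 1e7 nm)."""
    return 1.0e7 / wavelength_nm

print(absorbance(50.0))          # 0.301, as in Table 28.1
print(wavenumber_per_cm(500.0))  # 20000.0, as in Table 28.2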
Figure 28.15 Typical ultraviolet–visible spectrophotometer.
The signal from U is switched appropriately to sample or reference circuits by the signal driving the chopper M and the magnitudes compared. Their ratio gives the transmission, which may be recorded directly, or an absorbance figure may be calculated from it and recorded. A microprocessor is used to control the functions of wavelength scanning, slit width, filter, and lamp selection and to effect any desired calculations on the basic transmission results.
28.6.2 Spectroradiometers The technique of measuring the spectral power distribution (SPD) of a light source is termed spectroradiometry. We may be concerned with the SPD in relative or absolute terms. By “relative” we refer to the power output per unit waveband at each wavelength of a range, expressed as a ratio of that at some specified wavelength. (For the visible spectrum this is often 560 nm.) By “absolute” we mean the actual power output per steradian per unit waveband at each wavelength over a range. Absolute measurements are much more difficult than relative ones and are not often carried out except in specialized laboratories. Relative SPD measurements are effected by techniques similar to those of spectrophotometry (see Section 28.6.1). The SPD of the unknown source is compared with that of a source for which the SPD is known. In the single-beam method, light from the source is passed through a monochromator to a detector, the output of which is recorded at each wavelength of the desired range. This is repeated with the source for which the SPD is known, and the ratio of the readings of the two at each wavelength is then used to determine the unknown SPD in relative terms. If the SPD of the reference source is known in absolute terms, the SPD of the unknown can be determined in absolute terms. In the double-beam method, light from the two sources is passed alternately through a monochromator to a detector, enabling the ratio of the source outputs to be determined wavelength by wavelength. This method is sometimes offered as an alternative mode of using double-beam spectrophotometers. The experience of the author is that with modern techniques, the single-beam technique is simpler, more flexible, more accurate, and as rapid as the double-beam one. It is not usually worthwhile to try to modify a spectrophotometer to act as a spectroradiometer if good accuracy is sought. A recent variation of the single-beam technique is found in the optical multichannel analyzer. Here light from the source is dispersed and made to fall not on a slit but on a multiple-array detector; each detector bit is read separately with the aid of a microprocessor. This technique is not as accurate as the conventional single-beam one but can be used with sources that vary rapidly with time (for example, pyrotechnic flares). Both single-beam and double-beam methods require the unknown source to remain constant in intensity while its spectrum is scanned—which can take up to 3 minutes or so. Usually this is not difficult to arrange, but if a source is inherently unstable (for example, a carbon arc lamp) the
whole light output or output at a single wavelength can be monitored to provide a reference (Tarrant, 1967). Spectroradiometry is not without its pitfalls, and the worker intending to embark in the field should consult suitable texts (Forsythe, 1941; Commission Internationale de l'Eclairage, 1984). Particular problems arise (1) where line and continuous spectra are present together and (2) where the source concerned is time modulated (for example, fluorescent lamps, cathode ray tubes, and so on). When line and continuous spectra are present together, they are transmitted through the monochromator in differing proportions, and this must be taken into account or compensated for (Henderson, 1970; Moore, 1984). The modulation problem can be dealt with by giving the detector a long time constant (which implies a slow scan speed) or by the use of phase-locked detectors (Brown and Tarrant, 1981; see Section 28.4.3). Few firms offer spectroradiometers as stock lines because they nearly always have to be custom built for particular applications. One instrument—the Surrey spectroradiometer—is shown in Figure 28.16. This instrument was developed for work on cathode ray tubes but can be used on all steady sources (Brown and Tarrant, 1981). Light from the source is led by the front optics to a double monochromator, which allows a narrow waveband to pass to a photomultiplier. The output from the photomultiplier tube is fed to a boxcar detector synchronized with the tube-driving signal so that the tube output is sampled only over a chosen period, enabling the initial glow or afterglow to be studied separately. The output from the boxcar detector is recorded by a desktop computer, which also controls the wavelength-scanning mechanism by means of a stepper motor. The known source is scanned first and the readings are also held in the computer, so that the SPD of the unknown source can be printed out as soon as the wavelength scan is completed. The color can also be computed and printed out. The retro-illuminator is a retractable unit used when setting up. Light can be passed backward through the monochromator to identify the precise area of the CRT face viewed by the monochromator system. This instrument operates in the visible range of wavelengths 380–760 nm and can be used with screen luminances as low as 5 cd/m². When used for color measurement, an accuracy of ±0.001 in x and y (CIE 1931 system) can be obtained.
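The single-beam reduction described above amounts to a wavelength-by-wavelength ratio. The following Python sketch uses invented readings purely to show the arithmetic; normalization at 560 nm follows the convention mentioned earlier:

import numpy as np

wavelengths = np.array([500.0, 560.0, 620.0])     # nm (a coarse illustrative grid)
reading_unknown = np.array([0.80, 1.10, 1.35])    # detector output, unknown source
reading_known = np.array([1.00, 1.00, 1.00])      # detector output, standard source
spd_known = np.array([0.95, 1.00, 1.02])          # standard's known relative SPD

# Unknown relative SPD = known SPD * ratio of readings, wavelength by wavelength.
spd_unknown = spd_known * reading_unknown / reading_known
spd_unknown /= spd_unknown[wavelengths == 560.0]  # express relative to 560 nm
print(spd_unknown)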
28.6.3 The Measurement of Color 28.6.3.1 Principles The measurement of color is not like the measurement of physical quantities such as pressure or viscosity because color is not a physical object. It is a visual phenomenon, a part of the process of vision. It is also a psychophysical phenomenon, and if we attempt to measure it we must not lose sight of that fact.
Figure 28.16 The Surrey spectroradiometer.
The nature of color is discussed briefly by the author elsewhere (Tarrant, 1981). The newcomer to the subject should consult the excellent textbooks on the subject by Wright (1969) and by Judd and Wysecki (1975). Several systems of color measurement are in use; for example, the CIE system and its derivatives, the Lovibond system (Chamberlin, 1979), and the Munsell system. It is not possible to go into details of these systems here; we shall confine ourselves to a few remarks on the CIE 1931 system. (The letters CIE stand for Commission Internationale de l'Eclairage, and the 1931 is the original fundamental system. About six later systems based on it are in current use.) There is strong evidence to suggest that in normal daytime vision, our eyes operate with three sets of visual receptors, corresponding to red, green, and blue (a very wide range of colors can be produced by mixing these together), and that the responses to these add in a simple arithmetical way. If we could then make some sort of triple photometer (each channel having spectral sensitivity curves corresponding to those of the receptor mechanisms), we should be able to make physical measurements to replicate the functioning of eyes. This cannot in fact be done, since the human visual mechanisms have negative responses to light of certain wavelengths. However, it is possible to produce photocell filter combinations that correspond to red, green, and blue and that can be related to the human color-matching functions (within a limited range of colors) by simple matrix equations. This principle is used in photoelectric colorimeters, or tristimulus colorimeters, as they are sometimes called. One is illustrated in Figure 28.17. The sample is illuminated by a lamp and filter combination, which has the SPD of one of the defined Standard Illuminants. Light diffusely reflected from the sample is passed to a photomultiplier through a set of filters, carefully designed so that the three filter/
Figure 28.17 Tristimulus colorimeter.
photomultiplier spectral sensitivity combinations can be related to the human color-matching functions. By measuring each response in turn and carrying out a matrix calculation, the color specification in the CIE 1931 system (or its derivatives) can be found. Most instruments nowadays incorporate microprocessors, which remove the labor from these calculations so that the determination of a surface color can be carried out rapidly and easily. Consequently, colorimetric measurements are used on a large scale in all consumer industries, and the colors of manufactured products can now be very tightly controlled.
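The matrix step can be sketched as follows in Python; note that the 3 × 3 calibration matrix here is entirely hypothetical, standing in for the carefully measured matrix of a real instrument:

import numpy as np

M = np.array([            # hypothetical detector-to-XYZ calibration matrix
    [0.90, 0.05, 0.05],
    [0.30, 0.65, 0.05],
    [0.00, 0.10, 0.90],
])

readings = np.array([0.42, 0.36, 0.22])   # filtered detector responses (R, G, B)
X, Y, Z = M @ readings                     # tristimulus values

x = X / (X + Y + Z)                        # CIE 1931 chromaticity coordinates
y = Y / (X + Y + Z)
print(f"x = {x:.4f}, y = {y:.4f}")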
It is possible to achieve a high degree of precision so that the minimum color difference that can be measured is slightly smaller than that which the human eye can perceive. It should be noted that nearly all surfaces have markedly directional characteristics, and hence if a sample is measured in two instruments that have different viewing/illuminating geometry, different results must be expected. Diagrams of these instruments, such as Figure 28.17, make them look very simple. In fact, the spectral sensitivity of the filter/photomultiplier combination has to be controlled and measured to a very high accuracy; it certainly is not economic to try to build a do-it-yourself colorimeter. It is strongly emphasized that the foregoing remarks are no substitute for a proper discussion of the fascinating subject of colorimetry, and no one should embark on color measurement without reading at least one of the books mentioned.
28.7 Measurement of optical properties Transparent materials affect light beams passing through them, notably changing the speed of propagation; refractive index is, of course, a measure of this. In this section we describe techniques for measuring the material properties that control such effects.
28.7.1 Refractometers The precise measurement of the refractive index of transparent materials is vital to the design of optical instruments but is also of great value in chemical work. Knowledge of the refractive index of a substance is often useful in both identifying and establishing the concentration of organic substances, and by far the greatest use of refractometry is in chemical laboratories. Britton has used the refractive index of gases to determine the concentration of trilene in air, but this involves an interferometric technique that will not be discussed here. When light passes from a less dense to a denser optical medium—for example, from air into glass—the angle of the refracted ray depends on the angle of incidence and the refractive indices of the two media (see Figure 28.18(a)), according to Snell's law:

n2/n1 = sin i1 / sin i2

In theory, then, we could determine the refractive index of an unknown substance in contact with air by measuring these angles and assuming that the refractive index of air is unity (in fact, it is 1.00027). In practice, for a solid sample we have to use a piece with two nonparallel flat surfaces; this involves also measuring the angle between them. This method can be used with the aid of a simple table spectrometer. Liquid samples can be measured in this way with the use of a hollow prism, but it is a laborious method and requires a considerable volume of the liquid.
Figure 28.18 (a) Refraction of ray passing from less dense to more dense medium; (b) refraction of ray passing from more dense to less dense medium; (c) total internal reflection; (d) the critical angle case, where the refracted ray can just emerge.
Most refractometers instead make use of the critical angle effect. When light passes from a more dense to a less dense medium, it may be refracted, as shown in Figure 28.18(b), but if the angle i2 becomes so large that the ray cannot emerge from the dense medium, the ray is totally internally reflected, as illustrated in Figure 28.18(c). The transition from refraction to internal reflection occurs sharply, and the value of the angle i2 at which this occurs is called the critical angle, illustrated in Figure 28.18(d). If we call that angle ic, then

n2/n1 = sin ic
Hence by determining ic we can find n1, if n2 is known.
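A short Python sketch of the critical-angle arithmetic, with water against air as a worked example (values approximate):

import math

def index_from_critical_angle(n2: float, critical_angle_deg: float) -> float:
    """n1 = n2 / sin(ic), the index of the denser medium from the critical angle."""
    return n2 / math.sin(math.radians(critical_angle_deg))

# Water (n1 ~ 1.33) against air (n2 = 1.00027) gives ic of roughly 48.8 degrees:
print(index_from_critical_angle(1.00027, 48.77))   # ~1.330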
28.7.1.1 The Abbé Refractometer The main parts of the Abbé refractometer, which uses this principle, are shown in Figure 28.19. The liquid under test is placed in the narrow space between prisms A and B. Light from a diffuse monochromatic source (L), usually a sodium lamp, enters prism A, and thus the liquid layer, at a wide variety of angles. Consequently, light will enter prism B at a variety of angles, sharply limited by the critical angle. This light then enters the telescope (T), and on moving the telescope around, a sharp division is seen at the critical angle; one half of the field is bright and the other is almost totally dark. The telescope is moved to align the crosswires on the light/dark boundary and the refractive index can be read off from a directly calibrated scale attached to it. This calibration also takes into account the glass/air refraction that occurs when the rays leave prism B. Although simple to use, this instrument suffers from all the problems that complicate refractive index measurements. It should be noted that: 1. In all optical materials, the refractive index varies markedly with wavelength in a nonlinear fashion. Hence either monochromatic sources or "compensating" devices must be used. For high-accuracy work, monochromatic sources are invariably used. 2. The refractive index of most liquids also varies markedly with temperature, and for accurate work temperature control is essential.
Figure 28.19 Abbé refractometer.
3. Since the refractive index varies with concentration, difficulties may be encountered with concentrated solutions, especially of sugars, which tend to become inhomogeneous under the effects of surface tension and gravity. 4. The range of refractive indices that can be measured in critical-angle instruments is limited by the refractive index of the prism A. Commercial instruments of this type are available for measuring refractive indices up to 1.74. 5. In visual instruments the light/dark field boundary presents such severe visual contrast that it is sometimes difficult to align the crosswires on it.
28.7.1.2 Modified Version of the Abbé Refractometer If plenty of liquid is available as a sample, prism A of Figure 28.19 may be dispensed with and prism B simply dipped in the liquid. The illuminating arrangements are as shown in Figure 28.20. Instruments of this kind are usually called dipping refractometers. When readings are required in large numbers or continuous monitoring of a process is called for, an automatic type of Abbé refractometer may be used. The optical system is essentially similar to that of the visual instrument except that instead of moving the whole telescope to determine the position of the light/dark boundary, the objective lens is kept fixed and a differentiating detector is scanned across its image plane under the control of a stepper motor. The detector’s response to the sudden change of illuminance at the boundary enables its position, and thus the refractive index, to be determined. To ensure accuracy, the boundary is scanned in both directions and the mean position is taken.
28.7.1.3 Refractometry of Solid Samples If a large piece of the sample is available, it is possible to optically polish two faces at an angle on it and then to measure the deviation of a monochromatic beam that it produces with a table spectrometer. This process is laborious and expensive. The Hilger–Chance refractometer is designed for the determination of the refractive indices of optical glasses
Figure 28.20 Dipping refractometer with reflector enabling light to enter prism B at near-grazing incidence.
and requires that only two roughly polished surfaces at right angles are available. It can also be used for liquids. The optical parts are shown in Figure 28.21. Monochromatic light from the slit (A) is collected by the lens (B) and passes into the V-shaped prism block (C), which is made by fusing two prisms together to produce a very precise angle of 90 degrees between its surfaces. The light emerges and enters the telescope (T). When the specimen block is put in place, the position of the emergent beam will depend on the refractive index of the sample. If it is greater than that of the V block, the beam will be deflected upward; if lower, downward. The telescope is moved to determine the precise beam direction, and the refractive index can be read off from a calibrated scale. In the actual instrument the telescope is mounted on a rotating arm with the reflecting prism, so that the axis of the telescope remains horizontal. A wide slit with a central hairline is used at A, and the telescope eyepiece is equipped with two lines in its focal plane so that the central hairline may be set between them with great precision (see Figure 28.22). Since it is the bulk of the sample, not only the surfaces, that is responsible for the refraction, it is possible to place a few drops of liquid on the V-block so that perfect optical contact may be achieved on a roughly polished specimen. The V-block is equipped with side plates so that it forms a trough suitable for liquid samples. When this device is used for measuring optical glasses, an accuracy of 0.0001 can be obtained. This is very high indeed, and the points raised about accuracy in connection with the Abbé refractometer should be borne in mind. Notice that the instrument is arranged so that the rays pass the air/glass interfaces at normal or near-normal incidence so as to reduce the effects of changes in the refractive index of air with temperature and humidity.
28.7.1.4 Solids of Irregular Shape The refractive index of irregularly shaped pieces of solid materials may theoretically be found by immersing them in a liquid of identical refractive index. When this happens, rays traversing the liquid are not deviated when they encounter the solid but pass straight through, so that the liquid–solid boundaries totally disappear. A suitable liquid may be made up by using liquids of different refractive index together. When a refractive index match has been found, the refractive index of the liquid may be measured with an Abbé refractometer. Suitable liquids are given by Longhurst (1974):

                          n
Benzene                   1.504
Nitrobenzene              1.553
Carbon bisulphide         1.632
α-monobromonaphthalene    1.658
The author cannot recommend this process. Granted, it can be used for a magnificent lecture-room demonstration, but in practice these liquids are highly toxic, volatile, and have an appalling smell. Moreover, the method depends on
Figure 28.21 Hilger–Chance refractometer.
Figure 28.22 Appearance of field when the telescope is correctly aligned.
both the solid and the liquid being absolutely colorless; if either has any trace of color it is difficult to judge the precise refractive index match at which the boundaries disappear.
28.7.2 Polarimeters Some solutions and crystals have the property that when a beam of plane-polarized light passes through them, the plane is rotated. This phenomenon is known as optical activity, and in liquids it occurs only with those molecules that have no degree of symmetry. Consequently, few compounds show this property, but one group of compounds of commercial importance does: sugars. Hence the measurement of optical activity offers an elegant way of determining the sugar content of solutions. This technique is referred to as polarimetry or occasionally by its old-fashioned name of saccharimetry. Polarimetry is one of the oldest physical methods applied to chemical analysis and has been in use for almost 100 years. The original instruments were all visual, and though in recent years photoelectric instruments have appeared, visual instruments are still widely used because of their simplicity and low cost. If the plane of polarization in a liquid is rotated by an angle θ on passage through a length of solution l, then

θ = αcl

where c is the concentration of the optically active substance and α is a coefficient for the particular substance called the specific rotation. In all substances the specific rotation increases rapidly with decreasing wavelength (see Figure 28.23), and for that reason monochromatic light sources are always used—very often a low-pressure sodium lamp. Some steroids show anomalous behavior of the specific rotation with wavelength, but the spectropolarimetry involved is beyond the scope of this book.
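A worked example of this relation in Python; the specific rotation quoted for sucrose at the sodium D line (about 66.5° per g/mL per dm) is a commonly cited value and is used here only for illustration:

def concentration_g_per_ml(theta_deg: float, alpha: float, path_dm: float) -> float:
    """Invert theta = alpha * c * l for the concentration c."""
    return theta_deg / (alpha * path_dm)

# A 2 dm tube showing 13.3 degrees of rotation:
print(concentration_g_per_ml(13.3, 66.5, 2.0))   # ~0.10 g/mL sucrose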
28.7.2.1 The Laurent Polarimeter The main optical parts of this instrument are shown in Figure 28.24. The working is best described if the solution
Figure 28.23 Variation of specific rotation with wavelength. A: typical sugar; B: steroid showing reversion.
tube is at first imagined not to be present. Light from a monochromatic source (A) passes through a sheet of Polaroid (B) so that it emerges plane-polarized. It then encounters a half-wave plate (C), which covers only half the area of the beam. The effect of the half-wave plate is to slightly alter the plane of polarization of the light that passes through it so that the situation is as shown in Figure 28.24(b). If the solution tube (D) is not present, the light next encounters a second Polaroid sheet at E. This is mounted so that it can be rotated about the beam. On looking through the eyepiece (F), the two halves of the field will appear of unequal brilliance until E is rotated and the plane it transmits is as shown in Figure 28.24(c). Since the planes in J and K differ only by a small angle, the position of equal brilliance can be judged very precisely. If the position of the analyzer (E) is now read, the solution tube (D) can be put in position and the process repeated so that the rotation θ may be determined. Since the length of the solution tube is known, the concentration of the solution may be determined if the specific rotation is known.
28.7.2.2 The Faraday-Effect Polarimeter
Figure 28.24 Laurent polarimeter. (a) Plane-polarized light after passage through B; (b) polarization directions in field after passage through C; (c) broken arrow shows plane of analyzer at position of equal brilliance with no sample present.
Among the many effects that Faraday discovered was the fact that glass becomes weakly optically active in a magnetic field. This discovery lay unused for over 100 years until its employment in the Faraday-effect polarimeter. The main optical parts are shown schematically in Figure 28.25. A tungsten lamp, filter, and Polaroid are used to provide plane-polarized monochromatic light. This is then passed through a Faraday cell (a plain block of glass situated within a coil), which is energized from an oscillator at about 380 Hz, causing the plane of polarization to swing about 3° either side of the mean position. If we assume for the time being that there is no solution in the cell and no current in the second Faraday cell, this light will fall unaltered on the second Polaroid. Since this is crossed on the mean position, the photomultiplier will produce a signal at twice the oscillator frequency, because there are two pulses of light transmission in each oscillator cycle, as shown in Figure 28.25(a). If an optically active sample is now put in the cell, the rotation will produce the situation in Figure 28.25(b) and a component of the same frequency as the oscillator output. The photomultiplier signal is compared with the oscillator output in a phase-sensitive circuit so that any rotation produces a dc output. This is fed back to the second Faraday cell to oppose the rotation produced by the solution, and, by providing sufficient gain, the rotation produced by the sample will be completely cancelled. In this condition the current in the second cell will in effect give a measure of the rotation, which can be indicated directly with a suitably calibrated meter. This arrangement can be made highly sensitive and a rotation of as little as 1/10,000th of a degree can be detected, enabling a short solution path length to be used—often 1 mm. Apart from polarimetry, this technique offers a very precise method of measuring angular displacements.

Figure 28.25 Faraday-effect polarimeter. (a) Photomultiplier signal with no sample present; (b) photomultiplier signal with uncompensated rotation.
28.8 Thermal imaging techniques

Much useful information about sources and individual objects can be obtained by viewing them not with the visible light they give off but by the infrared radiation they emit. We know from Planck's radiation law:

Pλ = C1 / [λ⁵ (e^(C2/λT) − 1)]

Figure 28.26 Thermal imaging techniques applied to a house. Courtesy of Agema Infrared Systems.
(where Pλ represents the power radiated from a body at wavelength λ, T the absolute temperature, and C1 and C2 are constants) that objects at the temperature of our environment radiate significantly, but we are normally unaware of this because all that radiation is well out in the infrared spectrum. The peak wavelength emitted by objects at around 20°C (293 K) is about 10 µm, whereas the human eye is only sensitive in the range 0.4–0.8 µm. By the use of an optical system sensitive to infrared radiation, this radiation can be studied, and since its intensity depends on the surface temperature of the objects concerned, the distribution of temperatures over an object or source can be made visible. It is possible to pick out variations in surface temperature of less than 1 K in favorable circumstances, and so this technique is of great value; for example, it enables the surface temperature of the walls of a building to be determined and so reveals the areas of greatest heat loss. Figure 28.26 illustrates this concept: (a) is an ordinary photograph of a house; (b) gives the same view using thermal imaging techniques. The higher temperatures associated with heat loss from the windows and door are immediately apparent. Thermal imaging can be used in medicine to reveal variations of surface temperature on a patient's body and thus reveal failures of circulation; it is of great military value since it will function in darkness, and it has all manner of applications in the engineering field. An excellent account of the technique is given by Lawson (1979) in Electronic Imaging; Figure 28.27 and Table 28.3 are based on this work.

Figure 28.27 Log scale presentation of Planck's law.

Although the spectrum of a body around 300 K has its peak at 10 µm, the spectrum is quite broad. However, the atmosphere is effectively opaque from about 5–8 µm and above 13 µm, which means that in practice the usable bands are 3–5 µm and 8–13 µm. Although there is much more energy available in the 8–13 µm band (see Table 28.3) and if used outdoors there is much less solar radiation, the sensitivity of available detectors is much better in the 3–5 µm band, and both are used in practice. Nichols and Laner (1968) have developed a triple waveband system using visible, 2–5 µm, and 8–13 µm ranges, but this technique has not so far been taken up in commercially available equipment. In speaking of black-body radiations we must remember that, in practice, no surfaces have emissivities of 1.0 and often they have much less, so there is not a strict relationship between surface temperatures and radiant power for all the many surfaces in an exterior source. In daylight, reflected solar power is added to the emitted power and is a further complication. However, the main value of the technique is in recognizing differences of temperature rather than temperature in absolute terms.

Table 28.3 Solar and black-body radiation at various wavelengths (after Lawson, 1979)

Wavelength band (µm)   Typical value of solar radiation (W m⁻²)   Emission from black body at 300 K (W m⁻²)
0.4–0.8                750                                        0
3–5                    24                                         6
8–13                   1.5                                        140

Note: Complete tables of data relating to the spectral power distribution of blackbody radiators are given by M. Pivovonsky and M. Nagel.
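A quick check of why room-temperature scenes radiate near 10 µm is given by Wien's displacement law, sketched below in Python (the displacement constant, about 2898 µm·K, is standard):

def wien_peak_um(T_kelvin: float) -> float:
    """Peak wavelength of the Planck spectrum in micrometers: lambda_max = b/T."""
    return 2898.0 / T_kelvin

print(wien_peak_um(293.0))   # ~9.9 um, inside the 8-13 um atmospheric window
print(wien_peak_um(5800.0))  # ~0.5 um for the sun, in the visible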
References

American National Standards Institute, Standards for the Safe Use of Lasers.
Beynon, J. D. E., Charge Coupled Devices and their Applications, McGraw-Hill, New York (1979).
Bhaskaran, S., Chapman, T., Pilon, M., and VanGorden, S., Performance Based CID Imaging—Past, Present and Future, Thermo Fisher Scientific, Liverpool, NY. http://www.thermo.com/eThermo/CMA/PDFs/Product/productPDF_8997.pdf
BS 4803, Guide on Protection of Personnel against Hazards from Laser Radiation (1983).
Brown, S. and Tarrant, A. W. S., A Sensitive Spectroradiometer using a Boxcar Detector, Association Internationale de la Couleur (1981).
Chamberlin, G. J. and Chamberlin, D. G., Color: its Measurement, Computation and Application, Heyden, London (1979).
Commission Internationale de l'Eclairage, The Spectroradiometric Measurement of Light Sources, CIE Pub. No. 63 (1984).
Dresler, A. and Frühling, H. S., "Uber ein Photoelektrische Dreifarben Messgerät," Das Licht, 11, 238 (1938).
Dudley, W. W., Carbon Dioxide Lasers, Effects and Applications, Academic Press, London (1976).
Forsythe, W. E., The Measurement of Radiant Energy, McGraw-Hill, New York (1941).
Fry, P. W., "Silicon photodiode arrays," J. Sci. Inst., 8, 337 (1975).
Geutler, G., Die Farbe, 23, 191 (1974).
Heavens, O. S., Lasers, Duckworth, London (1971).
Henderson, S. T., J. Phys. D., 3, 255 (1970).
Henderson, S. T. and Marsden, A. M., Lamps and Lighting, Edward Arnold, London (1972).
IEC-CIE, International Lighting Vocabulary.
Judd, D. B. and Wysecki, G., Color in Business, Science and Industry, Wiley, New York (1975).
Koechner, W., Solid State Laser Engineering, Springer, New York (1976).
Lawson, W. D., "Thermal imaging," in Electronic Imaging (eds. T. P. McLean and P. Schagen), Academic Press, London (1979).
Longhurst, R. S., Geometrical and Physical Optics, Longman, London (1974).
Mooradian, A., Jaeger, T., and Stokseth, P., Tuneable Lasers and Applications, Springer, New York (1976).
Moore, J., "Sources of error in spectro-radiometry," Lighting Research and Technology, 12, 213 (1984).
Nichols, L. W. and Laner, J., Applied Optics, 7, 1757 (1968).
Pivovonsky, M. and Nagel, M., Tables of Blackbody Radiation Functions, Macmillan, London (1961).
Safety in Universities—Notes for Guidance, Part 2: 1, Lasers, Association of Commonwealth Universities (1978).
Steck, B., "Effects of optical radiation on man," Lighting Research and Technology, 14, 130 (1982).
Tarrant, A. W. S., Some Work on the SPD of Daylight, PhD thesis, University of Surrey (1967).
Tarrant, A. W. S., "The nature of color – the physicist's viewpoint," in Natural Colors for Food and other Uses (ed. J. N. Counsell), Applied Science, London (1981).
Thewlis, J. (ed.), Encyclopaedic Dictionary of Physics, Vol. 1, p. 553, Pergamon, Oxford (1961).
Walsh, J. W. T., Photometry, Dover, London (1958).
Wright, W. D., The Measurement of Color, Hilger, Bristol (1969).
Young, A. R., Magnus, I. A., and Gibbs, N. K., "Ultraviolet radiation radiometry of solar simulation," Proc. Conf. on Light Measurement, SPIE, Vol. 262 (1980).
Chapter 29
Nuclear Instrumentation Technology D. Aliaga Kelly and W. Boyes
29.1 Introduction Nuclear gauging instruments can be classified as those that measure the various radiations or particles emitted by radioactive substances or nuclear accelerators, such as alpha particles, beta particles, electrons and positrons, gamma- and X-rays, neutrons, and heavy particles such as protons and deuterons. A variety of other exotic particles also exist, such as neutrinos, mesons, muons, and the like, but their study is limited to high-energy research laboratories and uses special detection systems; they will not be considered in this book. An important factor in the measurements to be made is the energy of the particles or radiations. This is expressed in electron-volts (eV) and can range from below 1 eV to millions of eV (MeV). Neutrons of very low energies (0.025 eV) are called thermal neutrons because their energies are comparable to those of gas particles at normal temperatures. However, other neutrons can have energies of 10 MeV or more; X-rays and gamma-rays can range from a few eV to MeV and sometimes GeV (10⁹ eV). The selection of a particular detector and detection system depends on a large number of factors that have to be taken into account in choosing the optimum system for a particular project. One must first consider the particle or radiation to be detected, the number of events to be counted, whether the energies are to be measured, and the interference with the measurement by background radiation of similar or dissimilar types. Then the selection of the detector can be made, bearing in mind cost and availability as well as suitability for the particular problem. Choice of electronic units will again be governed by cost and availability as well as the need to provide an output signal with the information required. It can be seen that the result will be a series of compromises, since no detector is perfect, even if unlimited finance is available. The radioactive source to be used must also be considered, and a list of the more popular types is given in Tables
29.1 and 29.2. Some other sources, used particularly in X-ray fluorescence analysis, are given in Table 30.1 in the next chapter.
29.1.1 Statistics of Counting The variability of any measurement is measured by the standard deviation σ, which can be obtained from replicate determinations by well-known methods. There is an inherent variability in radioactivity measurements because the disintegrations occur in a random manner, described by the Poisson distribution. This distribution is characterized by the property that the standard deviation σ of a large number of events, N, is equal to its square root, that is,

σ(N) = √N    (29.1)

For ease in mathematical application, the normal (Gaussian) approximation to the Poisson distribution is ordinarily used. This approximation, which is generally valid for numbers of events N equal to or greater than 20, is the particular normal distribution whose mean is N and whose standard deviation is √N. Generally, the concern is not with the standard deviation of the number of counts but rather with the deviation in the rate (= number of counts per unit time):

R′ = N/t    (29.2)

where t is the time of observation, which is assumed to be known with such high precision that its error may be neglected. The standard deviation in the counting rate, σ(R′), can be calculated by the usual methods for propagation of error:

σ(R′) = √N/t = (R′/t)^(1/2)    (29.3)
In practice, all counting instruments have a background counting rate, B, when no radioactive source is present.
Table 29.1 Radiation Sources in General Use (beta, gamma, and alpha energies in MeV)

Isotope        Half-life     Beta Emax   Beta Eav   Gamma                            Alpha                  Notes
3H (Tritium)   12.26 yr      0.018       0.006      Nil                              Nil
14C            5730 yr       0.15        0.049      Nil                              Nil
22Na           2.6 yr        0.55        0.21       1.28                             Nil                    Also 0.511 MeV annihilation
24Na           15 h          1.39        –          1.37, 2.75                       Nil
32P            14.3 d        1.71        0.69       Nil                              Nil                    Pure beta emitter
36Cl           3 × 10⁵ yr    0.71        0.32       Nil                              Nil                    Betas emitted simulate fission products
60Co           5.3 yr        0.31        0.095      1.17 (100), 1.33 (100)           Nil                    Used in radiography, etc.
90Sr           28 yr         0.54        0.196      Nil                              Nil                    Pure beta emitter
90Y            64.2 h        2.25        0.93       Nil                              Nil
131I           8.04 d        0.61+       0.18+      0.36 (79)+                       Nil                    Used in medical applications
137Cs          30 yr         0.5+        0.19+      0.66 (86)                        Nil                    Used as standard gamma calibration source
198Au          2.7 d         0.99+       0.3+       0.41 (96)                        Nil                    Gammas adopted recently as universal standard
226Ra          1600 yr       –           –          0.61 (22), 1.13 (13), 1.77 (25) + others   –           Earliest radioactive source isolated by Mme Curie, still used in medical applications
241Am          457 yr        –           –          0.059 (35) + others              5.42 (12), 5.48 (85)   Alpha X-ray calibration source

Note: Figures in parentheses show the percentage of primary disintegration that goes into that particular emission (i.e., the abundance). + Indicates other radiation of lower abundance.
Table 29.2 Neutron sources

Source and type    Neutron emission (n/s per unit activity or mass)   Half-life   Energies of neutrons emitted         Comments
124Sb–Be (γ, n)    5 × 10⁻⁵/Bq                                        60 days     Low: 30 keV                          For field assay of beryllium ores
226Ra–Be (α, n)    3 × 10⁻⁴/Bq                                        1622 yr     Max: 12 MeV; Av: 4 MeV               Early n source, now replaced by Am/Be
210Po–Be (α, n)    7 × 10⁻⁵/Bq                                        138 days    Max: 10.8 MeV; Av: 4.2 MeV           Short life is disadvantage
241Am–Be (α, n)    6 × 10⁻⁵/Bq                                        433 yr      Max: 11 MeV; Av: 3–5 MeV             Most popular neutron source
252Cf (fission)    2.3 × 10⁶/ng                                       2.65 yr     Simulates reactor neutron spectrum   Short life and high cost
When a source is present, the counting rate increases to R0. The counting rate R due to the source is then

R = R0 − B    (29.4)

By propagation-of-error methods, the standard deviation of R can be calculated as follows:

σ(R) = (R0/t1 + B/t2)^(1/2)    (29.5)

where t1 and t2 are the times over which source-plus-background and background counting rates were measured, respectively. Practical counting times depend on the activity of the source and of the background. For low-level counting, one has to reduce the background by the use of massive shielding, careful material selection for the components of the counter, the use of such devices as anticoincidence counters, and as large a sample as possible. The optimum division of a given time period for counting source and background is given by

t2/t1 = 1/[1 + (R0/B)^(1/2)]    (29.6)
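Equations (29.4) and (29.5) translate directly into code. The counts and times in this Python sketch are invented for illustration:

import math

def net_rate_and_sigma(gross_counts, t1, bkg_counts, t2):
    """R = R0 - B, with sigma(R) = sqrt(R0/t1 + B/t2), per equations (29.4)-(29.5)."""
    r0 = gross_counts / t1   # source-plus-background rate
    b = bkg_counts / t2      # background rate
    return r0 - b, math.sqrt(r0 / t1 + b / t2)

# 10,000 gross counts in 100 s against 900 background counts in 100 s:
rate, sigma = net_rate_and_sigma(10_000, 100.0, 900, 100.0)
print(f"R = {rate:.1f} +/- {sigma:.2f} counts/s")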
29.1.1.1 Nonrandom Errors These may be due to faults in the counting equipment, personal errors in recording results or operating the equipment, or errors in preparing the sample. The presence of such errors may be revealed by conducting a statistical analysis on a series of repeated measurements. For errors in the equipment or in the way it is operated, the analysis uses the same source, and for source errors a number of sources are used. Although statistical errors will follow a Gaussian distribution, errors that are not random will not follow such a distribution, so that if a series of measurements cannot be fitted to a Gaussian distribution curve, nonrandom errors must be present. The chi-squared test allows one to test the goodness of fit of a series of observations to the Gaussian distribution. If nonrandom errors are not present, the values of χ² as determined by the relation given in Equation (29.7) should lie between the limits quoted in Table 29.3 for various groups of observations:

χ² = Σ_{i=1}^{q} (n̄ − ni)² / n̄    (29.7)

where n̄ is the average count observed, ni is the number counted in the ith observation, and q is the number of observations. If a series of observations fits a Gaussian distribution, there is a 95 percent probability that χ² will be greater than or equal to the lower limit quoted in Table 29.3 but only a 5 percent probability that χ² will be greater than or equal to the upper limit quoted.
Table 29.3 Limits of the quantity χ² for sets of counts with random errors

Number of observations   Lower limit for χ²   Upper limit for χ²
3                        0.103                5.99
4                        0.352                7.81
5                        0.711                9.49
6                        1.14                 11.07
7                        1.63                 12.59
8                        2.17                 14.07
9                        2.73                 15.51
10                       3.33                 16.92
15                       6.57                 23.68
20                       10.12                30.14
25                       13.85                36.42
30                       17.71                42.56
Thus, for 10 observations, if χ² lies outside the region of 3.33–16.92, it is very probable that errors of a nonrandom kind are present. In applying the chi-squared test, the number of counts recorded in each observation should be large enough to make the statistical error less than the accuracy required for the activity determination. Thus if 10,000 counts are recorded for each observation and for a series of observations χ² lies between the expected limits, it can be concluded that nonrandom errors of a magnitude greater than about ±2 percent are not present.
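Equation (29.7) in executable form, with ten invented repeat counts whose χ² falls inside the 3.33–16.92 limits for q = 10:

def chi_squared(counts):
    """chi^2 = sum((nbar - n_i)^2) / nbar over q observations, per equation (29.7)."""
    nbar = sum(counts) / len(counts)
    return sum((nbar - n) ** 2 for n in counts) / nbar

# Ten repeat counts of a steady source; Table 29.3 limits for q = 10 are 3.33-16.92.
observations = [10050, 9980, 10120, 9895, 10010, 9960, 10075, 9930, 10040, 9985]
print(round(chi_squared(observations), 2))   # ~4.22, so no nonrandom errors indicated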
29.1.1.2 Radioactive Decay Radioactive sources have the property of disintegrating in a purely random manner, and the rate of decay is given by the law

dN/dt = −λN    (29.8)

where λ is called the decay constant and N is the total number of radioactive atoms present at a time t. This may be expressed as

N = N0 exp(−λt)    (29.9)

where N0 is the number of atoms of the parent substance present at some arbitrary time 0. Combining these equations, we have

dN/dt = −λN0 exp(−λt)    (29.10)
showing that the rate of decay falls off exponentially with time. It is usually more convenient to describe the decay in terms of the "half-life" T½ of the element. This is the time required for the activity, dN/dt, to fall to half its initial value, and λ = 0.693/T½. When two or more radioactive substances are present in a source, the calculation of the decay of each isotope becomes more complicated and will not be dealt with here. The activity of a source is a measure of the frequency of disintegration occurring in it. Activity is measured in Becquerels (Bq), one Becquerel corresponding to one disintegration per second. The old unit, the Curie (Ci), is still often used, and 1 Megabecquerel = 0.027 millicuries. It is also often important to consider the radiation that has been absorbed—the dose. This is quoted in grays, the gray being defined as that dose (of any ionizing radiation) that imparts 1 joule of energy per kilogram of absorbing matter at the place of interest. So 1 Gy = 1 J kg⁻¹. The older unit, the rad, 100 times smaller than the gray, is still often referred to.
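Equations (29.8)–(29.10) in a small Python sketch; the 137Cs half-life used in the example is taken from Table 29.1:

import math

def activity_fraction(t: float, half_life: float) -> float:
    """N/N0 = exp(-lambda*t), with lambda = 0.693/T_half (same time units for both)."""
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# A 137Cs source (T_half = 30 yr) after 10 years:
print(activity_fraction(10.0, 30.0))   # ~0.79 of the original activity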
29.1.2 Classification of Detectors Various features of detectors are important and have to be taken into account in deciding the choice of a particular system, notably: 1. Cost. 2. Sizes available. 3. Complexity in auxiliary electronics needed. 4. Ability to measure energy and/or discriminate between various types of radiations or particles. 5. Efficiency, defined as the probability of an incident particle being recorded. Detectors can be grouped generally into the following classes. Most of these are covered in more detail later, but Cherenkov detectors and cloud chambers are specialized research tools and are not discussed in this book.
29.1.2.3 Cherenkov Detectors When a charged particle traverses a transparent medium at a speed greater than that of light within the medium, Cherenkov light is produced. Only if the relative velocity β = v/c and the refractive index n of the medium are such that nβ > 1 will the radiation exist. When the condition is fulfilled, the Cherenkov light is emitted at the angle given by the relation

cos θ = 1/(nβ)    (29.11)

where θ is the angle between the velocity vector for the particle and the propagation vector for any portion of the conical radiation wavefront.
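Equation (29.11) in a few lines of Python, evaluated for water (n ≈ 1.33) as an illustrative medium:

import math

def cherenkov_angle_deg(n: float, beta: float) -> float:
    """cos(theta) = 1/(n*beta); emission exists only when n*beta > 1."""
    if n * beta <= 1.0:
        raise ValueError("no Cherenkov light: n*beta must exceed 1")
    return math.degrees(math.acos(1.0 / (n * beta)))

print(cherenkov_angle_deg(1.33, 0.999))   # ~41 degrees in water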
29.1.2.4 Solid-State Detectors Some semiconductor materials have the property that when a potential is applied across them and an ionizing particle or ionizing radiation passes through the volume of material, ions are produced just as in the case of a gas-ionization chamber, producing electronic pulses in the external connections that can be amplified, measured, or counted. A device can thus be made, acting like a solid ionization chamber. The materials that have found greatest use in this application are silicon and germanium.
29.1.2.5 Cloud Chambers These were used in early research work and are still found in more sophisticated forms in high-energy research laboratories to demonstrate visually (or photographically) the actual paths of ionizing particles or radiation by means of trails of liquid droplets formed after the passage of such particles or radiation through supersaturated gas. Photographic film can be used to detect the passage of ionizing particles or radiation, since they produce latent images along their paths in the sensitive emulsion. On development, the grains of silver appear along the tracks of the particles or ions.
29.1.2.1 Gas Detectors Gas detectors include ionization chambers, gas proportional counters, Geiger counters, multiwire proportional chambers, spark counters, drift counters, and cloud chambers.
29.1.2.2 Scintillation Counters Some substances have the property that, when bombarded with nuclear particles or ionizing radiation, they emit light that can be picked up by a suitable highly sensitive light detector that converts the light pulse into an electronic pulse that can be amplified and measured.
29.1.2.6 Plastic Film Detectors

Thin (5 μm) plastic films of polycarbonate can be used as detectors of highly ionizing particles, which cause radiation damage to the molecules of the polycarbonate film along their tracks. These tracks may be enlarged by etching with a suitable chemical and measured visually with a microscope. Alternatively, sparks can be generated between two electrodes, one of which is an aluminized Mylar film, placed on either side of a thin, etched polycarbonate detector. The sparks that pass through the holes in the etched detector can be counted using a suitable electronic scaler.
29.1.2.7 Thermoluminescent Detectors

For many years it was known that if one heated some substances, particularly fluorites and ceramics, they could be made to emit light photons and, in the case of ceramics, could be made incandescent. When ionizing radiation is absorbed in matter, most of the absorbed energy goes into heat, while a small fraction is used to break chemical bonds. In some materials a very minute fraction of the energy is stored in metastable energy states. Some of the energy thus stored can be recovered later as visible light photons if the material is heated, a phenomenon known as thermoluminescence (TL). In 1950 Daniels proposed that this phenomenon could be used as a measure of radiation dose, and in fact it was used to measure radiation after an atom-bomb test. Since then, interest in TL as a radiation dosimeter has progressed to the stage where it could now well replace photographic film as the approved personnel radiation badge carried by people who may be involved with radioactive materials or radiation.
29.1.2.8 Materials for TL Dosimetry

The most popular phosphor for dosimetric purposes is lithium fluoride (LiF). This can be natural LiF, or LiF with the lithium isotopes 6Li and 7Li enriched or depleted, as well as variations in which an activator such as manganese (Mn) is added to the basic LiF. The advantages of LiF are:

1. Its wide and linear energy response, from 30 keV up to and beyond 2 MeV.
2. Its ability to measure doses from the mR region up to 10⁵ R without being affected by the rate at which the dose is delivered; this is called dose-rate independence.
3. Its ability to measure thermal neutrons as well as X-rays, gamma rays, beta rays, and electrons.
4. Its dose response is almost equivalent to the response of tissue; that is, it has almost the same response as the human body.
5. It is usable in quite small amounts, so it can be used to measure doses to the fingers of an operator without impeding the operator's work.
6. It can be reused many times, so it is cheap.

Another phosphor that has become quite popular in recent years is calcium fluoride with manganese (CaF2:Mn), which has been found to be some 10 times more sensitive than LiF for low-dose measurements and can measure a dose of 1 mR, yet is linear in dose-rate response up to 10⁵ R. However, it exhibits a large energy dependence and is not linear below 300 keV. Thermoluminescence has also been used to date ancient archaeological specimens such as potsherds, furnace floors, ceramic pots, and so on. This technique depends on the fact that any object heated to a high temperature loses its inherent
thermoluminescent powers and, if left for a long period in a constant radioactive background, accumulates an amount of TL proportional to the time it has lain undisturbed in that environment.
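The dating relation implied here fits in one line; the numbers below are illustrative assumptions, not values from the text:

```python
def tl_age_years(accumulated_dose_gy, annual_dose_gy):
    """TL dating: the accumulated TL is proportional to burial time, so
    age = (equivalent accumulated dose) / (annual environmental dose rate)."""
    return accumulated_dose_gy / annual_dose_gy

# Assumed example: a potsherd with an equivalent dose of 6 Gy buried in an
# environment delivering 2 mGy per year.
print(tl_age_years(6.0, 0.002))   # 3000 years
```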
29.1.3 Health and Safety

Anyone who works with radioactive materials must understand clearly the kinds of hazards involved and their magnitude. Because radioactivity is not directly observable by the body's senses, suitable measuring equipment and handling techniques are required to ensure that any exposure is minimized; for this reason, legislation governs the handling and use of all radioactive material. In Part 4 an outline of the regulations is given, as well as advice on contacting the local factory inspector before the use of any radioactive source is contemplated. Because everyone in the world already receives steady radiation (from the natural radiopotassium in the human body and from the general background radiation to which all are subjected), the average human body acquires a dose of about 300 micrograys (μGy) (equivalent to 30 millirads) per year. Hence, though it is almost impossible to reduce radiation exposure to zero, it is important to ensure that using a radioactive source does not increase the dose to a level greater than many other hazards commonly met in daily life. There are three main methods for minimizing the hazards due to the use of a radioactive source:

1. Shielding. A thickness of an appropriate material, such as lead, should be placed between the source and the worker.
2. Distance. An increase in distance between source and worker reduces the radiation intensity.
3. Time. The total dose to the body of the worker depends on the length of time spent in the radiation field. This time should be reduced to the minimum necessary to carry out the required operation.

These notes are for sources that are contained in sealed capsules. Those that are in a form that might allow them to enter the body's tissues must be handled in ways that prevent such an occurrence (for example, by operating inside a "glove box," which is a box allowing open radioactive sources to be dealt with while the operator stays outside the enclosure). Against internal exposure the best protection is good housekeeping, and against external radiation the best protection is good instrumentation, kept in operating condition and used. Instruments capable of monitoring radioactive hazards depend on the radiation or particles to be monitored. For gamma rays, emitted by the most usual radioactive sources to be handled, a variety of instruments is available. Possibly the cheapest yet most reliable monitor contains a Geiger counter, preferably surrounded by a suitable metal covering to modify the counter's response to the varied energies from
gamma-emitting radioisotopes to make it similar to the response of the human body. There are many such instruments available from commercial suppliers. More elaborate ones are based on ionization chambers, which are capable of operating over a much wider range of intensities and are correspondingly more expensive. For beta emitters, a Geiger counter with a thin window to allow the relatively easily absorbed beta particles to enter the counter is, again, the cheapest monitor. More expensive monitors are based on scintillation counters, which can have large window areas, useful for monitoring extended sources or accidents where radioactive beta emitters have been spilt. Alpha detection is particularly difficult, since most alphas are absorbed in extremely thin windows. Geiger counters with very thin windows can be used, or an ionization chamber with an open front, used in air at normal atmospheric pressure. More expensive scintillation counters or semiconductor detectors can also be used. Neutrons require much more elaborate and expensive monitors, ranging from ionization or proportional counters containing BF3 or 3He, to scintillation counters using 6LiI, 6Li-glass, or plastic scintillators, depending on the energies of the neutrons.
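A minimal sketch of how the three protection measures (shielding, distance, time) combine numerically. The dose-rate constant and the half-value layer below are typical handbook values for 137Cs, assumed here for illustration only and not taken from this chapter:

```python
# Illustrative values for 137Cs (assumed, not from this chapter): dose-rate
# constant about 0.09 uGy/h per MBq at 1 m; half-value layer in lead ~6.5 mm.
GAMMA_CONST = 0.09   # uGy/h per MBq at 1 m
HVL_LEAD_MM = 6.5    # mm of lead

def dose_rate(activity_mbq, distance_m, lead_mm):
    """Dose rate combining inverse-square distance and exponential shielding."""
    unshielded = GAMMA_CONST * activity_mbq / distance_m**2
    return unshielded * 0.5 ** (lead_mm / HVL_LEAD_MM)

# A 400 MBq source: effect of each protection measure in turn.
print(dose_rate(400, 0.5, 0))    # close up, unshielded: 144 uGy/h
print(dose_rate(400, 2.0, 0))    # step back to 2 m:       9 uGy/h
print(dose_rate(400, 2.0, 13))   # add two HVLs of lead:   2.25 uGy/h
# The total dose then scales linearly with the time spent in the field.
```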
29.2 Detectors

29.2.1 Gas Detectors

Gas-filled detectors may be subdivided into those giving a current reading and those indicating the arrival of single particles. The first class comprises the current ionization chambers and the second the counting or pulse-ionization chambers, proportional counters, and Geiger counters. The object of the ionization chamber is always the same: to measure the rate of formation of ion pairs within the gas. One must therefore be certain that the voltage applied to the electrodes is great enough to give saturation, that is, to ensure that there will be no appreciable recombination of positive and negative ions. To understand the relation between the three gas-filled detectors, we can consider a counter of very typical geometry: two coaxial cylinders with gas between them. The inner cylinder, usually a fine wire (the anode), is at a positive potential relative to the outer cylinder (the cathode). Let us imagine ionization to take place in the gas, from a suitable radioactive source, producing, say, 10 electrons. The problem is to decide how many electrons (n) will arrive at the anode wire. Figure 29.1 shows the voltage V applied across the counter, plotted against the logarithm of n, that is, log₁₀ n.

Figure 29.1 Response of gas counter to increase in voltage.

When V is very small, on the order of volts or less, not all 10 electrons arrive at the anode wire, because of recombination. At V1 the loss has become negligible because saturation has been achieved and the pulse contains 10 electrons. As V is increased, n remains at 10 until V2 is reached, usually some tens or hundreds of volts. At this point the electrons begin to acquire sufficient energy between collisions at the end of
their paths for ionization by collision to occur in the gas, and this multiplication causes n to rise above 10, more or less exponentially with V, as each initial electron gives rise to a small avalanche of secondary electrons by collision close to the anode wire. At any potential between V2 and V3 the multiplication is constant, but above this the final number of electrons reaching the wire is no longer proportional to the initial ionization. This is the region of limited proportionality, V4. Above this, from V5 to V6, the region becomes that of the Geiger counter, where a single ion can produce a complete discharge of the counter. It is characterized by a spread of the discharge throughout the whole length of the counter, resulting in an output pulse size independent of the initial ionization. Above V6 the counter goes into a continuous discharge. The ratio of n, the number of electrons in the output pulse, to the initial ionization at any voltage is called the gas-amplification factor A; it varies from unity in the ionization-chamber region to 10³–10⁴ in the proportional region, reaching 10⁵ just below the Geiger region. Ionization-chamber detectors can be of a variety of shapes and types. Cylindrical geometry is the most usual one adopted, but parallel-plate chambers are often used in research. These chambers are much used in radiation-protection monitors, since they can be designed to be sufficiently sensitive for observing terrestrial radiation yet will not overload when placed in a very high field of radiation such as an isotopic irradiator. They can also be used, again in health physics, to integrate over a long period the amount of ionizing radiation passing through the chamber. An example is the small integrating chamber used in X-ray and accelerator establishments to record the dose received by personnel, who carry the chambers during their working day. Proportional counters are much more sensitive than ionization chambers; this allows weak sources of alpha and beta particles and low-energy X-rays to be counted. The
end-window proportional counter is particularly useful for counting flat sources, because it exhibits nearly 2π geometry—that is, it counts particles entering the counter over a solid angle of nearly 2π. Cylindrical proportional counters are used in radiocarbon dating systems because of their sensitivity for the detection of low-energy 14C beta particles (Emax = 156 keV) and even tritium 3H beta particles (Emax = 18.6 keV).
29.2.1.1 Geiger–Mueller Detectors

The Geiger counter has been and is the most widely used detector of nuclear radiation. It exhibits several very attractive features, some of which are:

1. Its cheapness. Manufacturing techniques have so improved the design that Geiger–Mueller tubes are a fraction of the cost of solid-state or scintillation detectors.
2. The output signal from a Geiger–Mueller tube can be on the order of 1 V, much higher than that from proportional, scintillation, or solid-state detectors. This means that the cost of the electronic system required is a fraction of that of other counters. A Geiger–Mueller tube with a simple high-voltage supply can drive most scaler units directly, with minimal or no amplification.
3. The discharge mechanism is so sensitive that a single ionizing particle entering the sensitive volume of the counter can trigger the discharge.

With these advantages there are, however, some disadvantages, which must be borne in mind. These include:

1. The inability of the Geiger–Mueller tube to discriminate between the energies of the ionizing particles triggering it.
2. The tube has a finite life, though this has been greatly extended by the use of halogen fillings instead of organic gases. The latter gave lives of only about 10¹⁰ counts, whereas the halogen tubes have lives of 10¹³ or more counts.
3. There is a finite period between the initiation of a discharge in a Geiger–Mueller counter and the time when it will accept a new discharge. This period, called the dead time, is on the order of 100 μs.
It is important to ensure with Geiger counters that the counting rate is such that the dead-time correction is only a small percentage of the counting rate observed. This correction can be calculated from the relation

R′ = R/(1 − Rτ)  (29.12)

where R is the observed counting rate per unit time, R′ is the true counting rate per unit time, and τ is the counter dead time. Dead time for a particular counter may be evaluated by a series of measurements using two sources. Geiger tube manufacturers normally quote the dead time of a particular counter, and it is customary to increase this time electronically, in a unit often used with Geiger counters, so that the total dead time is constant but greater than any variations in the tube's dead time, since between individual pulses the dead time can vary.
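A minimal sketch of the correction of Eq. (29.12); the rates and the 100 μs dead time below are illustrative:

```python
def true_rate(observed_cps, dead_time_s):
    """Dead-time correction of Eq. (29.12): R' = R / (1 - R*tau)."""
    loss = observed_cps * dead_time_s      # fraction of time the counter is dead
    if loss >= 1.0:
        raise ValueError("counter saturated: R*tau >= 1")
    return observed_cps / (1.0 - loss)

# With a 100 us dead time, at 500 c/s the correction is about 5 percent;
# keeping R*tau small keeps the correction small, as the text recommends.
for r in (50, 500, 2000):
    print(r, "->", round(true_rate(r, 100e-6), 1))
# 50 -> 50.3, 500 -> 526.3, 2000 -> 2500.0
```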
Figure 29.2 Geiger counter characteristic response.
The counting-rate characteristics can be understood by reference to Figure 29.2. The starting voltage Vs is the voltage that, when applied to the tube viewing a fixed radioactive source, makes it just start to count. As the high voltage is increased, the counting rate rapidly increases until it reaches what is called the plateau. Here the increase in counting rate from VA to VB is small, on the order of 1–5 percent per 100 V of high voltage. Above VB the counting rate rises rapidly and the tube goes into a continuous discharge, which will damage the counter. An operating point (Vop) is selected on the plateau so that any slight variation of the high-voltage supply has a minimal effect on the counting rate. To count low-energy beta or alpha particles with a Geiger counter, a thin window must be provided to allow the particles to enter the sensitive volume and trigger the counter. Thin-walled glass counters can be produced, with wall thicknesses on the order of 30 mg/cm², suitable for counting high-energy beta particles. (In this context, the mass per unit area is more important than the linear thickness: for glass, 25 mg/cm² corresponds to about 0.1 mm.) For low-energy betas or alphas, a very thin window is called for, and these have been made with thicknesses as low as 1.5 mg/cm². Figure 29.3 gives the transmission through windows of different thickness, and Table 29.4 shows how it is applied to typical sources.
Figure 29.3 Transmission of thin windows.
Table 29.4 Transmission of thin windows (percentage transmission for various window thicknesses)

Nuclide | Max. energy Emax (MeV) | 30 mg/cm² | 20 mg/cm² | 7 mg/cm² | 3 mg/cm²
14C | 0.15 | 0.01 | 0.24 | 12 | 40
32P | 1.69 | 72 | 80.3 | 92 | 96

Figure 29.4 Relative output of CsI (Na) as a function of temperature.
Alternatively, the source can be introduced directly into the counter, either by mixing it as a gas with the counting gas or, if it is a solid source, by placing it directly inside the counter and allowing the flow of counting gas to pass continuously through the counter. This is the flow-counter method.
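The transmissions of Table 29.4 can be roughly reproduced with the common exponential-absorption approximation. The empirical mass-absorption coefficient used below is a textbook rule of thumb, not a value given in this chapter, and is indicative only:

```python
import math

def beta_transmission(window_mg_cm2, e_max_mev):
    """Rough beta transmission through a thin window, T = exp(-mu * t).

    Uses the assumed empirical mass-absorption coefficient
    mu ~ 17 * Emax**-1.14 cm^2/g (Emax in MeV), with t in g/cm^2.
    Accurate only to tens of percent.
    """
    mu = 17.0 * e_max_mev ** -1.14            # cm^2/g
    return math.exp(-mu * window_mg_cm2 / 1000.0)

# 32P (Emax = 1.69 MeV) through a 30 mg/cm^2 window:
print(f"{beta_transmission(30, 1.69):.0%}")   # about 76%; Table 29.4 gives 72%
```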
29.2.2 Scintillation Detectors

Scintillation counters comprise three main items: the scintillator, the light-to-electrical-pulse converter (generally a photomultiplier), and the electronic amplifier. Here we consider the wide variety of each of these items that are available today. The scintillator can consist of a single crystal of organic or inorganic material, a plastic fluor, an activated glass, or a liquid. Shapes and sizes can vary enormously, from the small NaI (Tl) (this is the way of writing "sodium iodide with thallium additive") crystal used in medical probes up to tanks of liquid scintillator of thousands of cubic meters used in cosmic-ray research. The selection of a suitable material—some of the characteristics of which are given in Tables 29.5 and 29.6—depends on a number of competing factors. No material is perfect, so compromises must be made in their selection for particular purposes.
29.2.2.1 Inorganic Scintillators

NaI (Tl) is still, after many years, the best gamma- and X-ray detector generally available, yet it is very hygroscopic and must be completely sealed from moisture. It is sensitive to shock, except in the new form of extruded NaI (Tl) called Polyscin, developed by Harshaw, and it is expensive, especially in large sizes. Its light output is, in general, proportional to the energy of the photon absorbed, so the pulse height is a measure of the energy of the incident photon; when calibrated with photons of known energy, the instrument can be used as a spectrometer. The decay lifetime of NaI (Tl) is relatively slow, being about 230 ns, although NaI without any thallium, when operated at liquid-nitrogen temperature (77 K), has a decay lifetime of only 65 ns.
The next most used inorganic scintillator is CsI. When activated with thallium as CsI (Tl), it can be used at ambient temperatures with a light output some 10 percent lower than NaI (Tl) but with considerable resistance to shock. Its absorption coefficient is also greater than that of NaI (Tl), and these two characteristics have resulted in its use in space vehicles and satellites, since less mass is necessary and its resistance to the shock of launch is valuable. In thin layers it can be bent to match a circular light guide and has been used in this manner on probes to measure excited X-rays in soil. When activated with sodium instead of thallium, the light-output characteristics are changed. The light output is slightly higher than CsI (Tl), and the temperature/light-output relation is different (see Figure 29.4); the maximum light output is seen to occur, in fact, at a temperature of about 80°C. This is of advantage in borehole logging, where increased temperatures and shock are likely to occur as the detector is lowered into the drill hole. CsI (Tl) has been a popular detector for alpha particles, since it is not very much affected by moisture from the air and so can be used in windowless counters. CsI (Na), on the other hand, quickly develops a layer impervious to alpha particles of 5–10 MeV energies when exposed to ambient air and thus is unsuitable for such use. CaF2 and CaF2 (Eu) are scintillators that have been developed to give twice the light output of NaI (Tl), but only very small specimens have been grown, and production difficulties make their use impossible at present. Bi4Ge3O12 (bismuth germanate, or BGO) was developed to meet the requirements of the medical tomographic scanner, which calls for large numbers of very small scintillators capable of high absorption of photons of about 170 keV energy yet able to respond to radiation changes quickly without exhibiting "afterglow," especially when the detector has to integrate the current output of the photomultiplier. Its higher density than NaI (Tl) allows the use of smaller crystals, but its low light output (8 percent of NaI (Tl)) is a disadvantage. Against this it is nonhygroscopic, so it can be used with only light shielding. CsF is a scintillator with a very fast decay time (about 5 ns) but a low light output (18 percent of NaI (Tl)). It also has been used in tomographic scanners.
Table 29.5 Inorganic scintillator materials

Material¹ | Density (g/cm³) | Refractive index (n) | Light output² (% anthracene) | Decay constant (s) | Wavelength of maximum emission (nm) | Operating temperature (°C) | Hygroscopic
NaI (Tl) | 3.67 | 1.775 | 230 | 0.23 × 10⁻⁶ | 413 | Room | Yes
NaI (pure) | 3.67 | 1.775 | 440 | 0.06 × 10⁻⁶ | 303 | 77 K | Yes
CsI (Tl) | 4.51 | 1.788 | 95 | 1.1 × 10⁻⁶ | 580 | Room | No
CsI (Na) | 4.51 | 1.787 | 150–190 | 0.65 × 10⁻⁶ | 420 | Room | Yes
CsI (pure) | 4.51 | 1.788 | 500 | 0.6 × 10⁻⁶ | 400 | – | No
CaF2 (Eu) | 3.17 | 1.443 | 110 | 1 × 10⁻⁶ | 435 | Room | No
LiI (Eu) | 4.06 | 1.955 | 75 | 1.2 × 10⁻⁶ | 475 | Room | Yes
CaWO4 | 6.1 | 1.92 | 36 | 6 × 10⁻⁶ | 430 | Room | No
ZnS (Ag) | 4.09 | 2.356 | 300 | 0.2 × 10⁻⁶ | 450 | Room | No
ZnO (Ga) | 5.61 | 2.02 | 90 | 0.4 × 10⁻⁹ | 385 | Room | No
CdWO4 | 7.90 | – | – | 0.9–20 × 10⁻⁶ | 530 | Room | No
Bi4Ge3O12 | 7.13 | 2.15 | – | 0.3 × 10⁻⁶ | 480 | Room | No
CsF | 4.64 | 1.48 | – | 5 × 10⁻⁹ | 390 | Room | No

1. The deliberately added impurity is given in parentheses.
2. Light output is expressed as a percentage of that of a standard crystal of anthracene used in the same geometry.
Table 29.6 Properties of organic scintillators

Scintillator | Density (g/cm³) | Refractive index (n) | Boiling, melting, or softening point (°C) | Light output (% anthracene) | Decay constant (ns) | Wavelength of max. emission (nm) | Loading content (% by weight) | H/C (number of H atoms/C atoms) | Attenuation length (1/e m) | Principal applications

Plastic:
NE102A | 1.032 | 1.581 | 75 | 65 | 2.4 | 423 | – | 1.104 | 2.5 | γ, α, β, fast n
NE104 | 1.032 | 1.581 | 75 | 68 | 1.9 | 406 | – | 1.100 | 1.2 | Ultra-fast counting
NE104B | 1.032 | 1.58 | 75 | 59 | 3.0 | 406 | – | 1.107 | 1.2 | Ditto, with BBQ¹ light guides
NE105 | 1.037 | 1.58 | 75 | 46 | – | 423 | – | 1.098 | – | Air-equivalent for dosimetry
NE110 | 1.032 | 1.58 | 75 | 60 | 3.3 | 434 | – | 1.104 | 4.5 | γ, α, β, fast n, etc.
NE111A | 1.032 | 1.58 | 75 | 55 | 1.6 | 370 | – | 1.103 | – | Ultra-fast timing
NE114 | 1.032 | 1.58 | 75 | 50 | 4.0 | 434 | – | 1.109 | – | Cheaper for large arrays
NE160 | 1.032 | 1.58 | 80 | 59 | 2.3 | 423 | – | 1.105 | – | For use at higher temperatures—usable up to 150°C
Pilot U | 1.032 | 1.58 | 75 | 67 | 1.36 | 391 | – | 1.100 | – | Ultra-fast timing
Pilot 425 | 1.19 | 1.49 | 100 | – | – | 425 | – | 1.6 | – | Cherenkov detector

Liquid:
NE213 | 0.874 | 1.508 | 141 | 78 | 3.7 | 425 | – | 1.213 | – | Fast n (PSD)², α, β (internal counting)
NE216 | 0.885 | 1.523 | 141 | 78 | 3.5 | 425 | – | 1.171 | – | Internal counting, dosimetry
NE220 | 1.306 | 1.442 | 104 | 65 | 3.8 | 425 | 29% O | 1.669 | – | α, β (internal counting)
NE221 | 1.08 | 1.442 | 104 | 55 | 4 | 425 | Gel | 1.669 | – | γ, fast n
NE224 | 0.887 | 1.505 | 169 | 80 | 2.6 | 425 | – | 1.330 | – | –
NE226 | 1.61 | 1.38 | 80 | 20 | 3.3 | 430 | – | 0 | – | γ, insensitive to n
NE228 | 0.71 | 1.403 | 99 | 45 | – | 385 | – | 2.11 | – | n (heptane-based)
NE230 | 0.945 | 1.50 | 81 | 60 | 3.0 | 425 | 14.2% D | 0.984 | – | Deuterated
NE232 | 0.89 | 1.43 | 81 | 60 | 4 | 430 | 24.5% D | 1.96 | – | Deuterated
NE233 | 0.874 | 1.506 | 117 | 74 | 3.7 | 425 | – | 1.118 | – | α, β (internal counting)
NE235 | 0.858 | 1.47 | 350 | 40 | 4 | 420 | – | 2.0 | – | For large tanks
NE250 | 1.035 | 1.452 | 104 | 50 | 4 | 425 | 32% O | 1.760 | – | Internal counting, dosimetry

1. BBQ is a wavelength-shifter.
2. PSD means pulse-shape discrimination.
LiI (Eu) is a scintillator particularly useful for detecting neutrons, and since the lithium content can be changed by using enriched 6Li to enhance the detection efficiency for slow neutrons or by using almost pure 7Li to make a detector insensitive to neutrons, it is a very versatile, if expensive, neutron detector. When it is cooled to liquid-nitrogen temperature the detection efficiency for fast neutrons is enhanced and the system can be used as a neutron spectrometer to determine the energies of the neutrons falling on the detector. LiI (Eu) has the disadvantage that it is extremely hygroscopic, even more so than NaI (Tl). Cadmium tungstate (CdWO4) and calcium tungstate (CaWO4) single crystals have been grown, with some difficulty, and can be used as scintillators without being encapsulated, since they are nonhygroscopic. However, their refractive index is high, and this causes 60–70 percent of the light emitted in scintillators to be entrapped in the crystal. CaF2 (Eu) is a nonhygroscopic scintillator that is inert toward almost all corrosives and thus can be used for beta detection without a window or in contact with a liquid, such as the corrosive liquids used in fuel-element treatment. It can also be used in conjunction with a thick NaI (Tl) or CsI (Tl) or CsI (Na) crystal to detect beta particles in a background of gamma rays, using the Phoswich concept, where events occurring only in the thin CaF2 (Eu), unaccompanied by a simultaneous event in the thick crystal, are counted. That is, only particles that are totally absorbed in the CaF2 (Eu) are of interest—when both crystals display coincident events, these are vetoed by coincidence and pulse-shape discrimination methods. Coincidence counting is discussed further in Section 29.3.6.4.
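A sketch of the Phoswich veto logic just described; the event representation and the thresholds are invented for illustration only:

```python
# Each event carries the light collected from the thin CaF2(Eu) layer and
# from the thick backing crystal; thresholds are arbitrary illustrative numbers.
def accept_beta(thin_signal, thick_signal, thick_threshold=5.0):
    """Accept only events fully absorbed in the thin CaF2(Eu) layer.

    A simultaneous (coincident) signal in the thick crystal marks a
    penetrating particle or a gamma interaction, so the event is vetoed.
    """
    return thin_signal > 0.0 and thick_signal < thick_threshold

events = [(42.0, 0.1), (35.0, 120.0), (0.0, 80.0)]   # (thin, thick) pulse heights
kept = [e for e in events if accept_beta(*e)]
print(kept)   # [(42.0, 0.1)] - the beta stopping in the thin layer
```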
29.2.2.2 Organic Scintillators

The first organic scintillator was introduced by Kallmann in 1947, when he used a naphthalene crystal to show that it would detect gamma rays. Later, anthracene was shown to exhibit improved detection efficiency, and stilbene was also used; the latter has proved particularly useful for neutron detection. Mixtures of organic scintillators, such as solutions of anthracene in naphthalene, liquid solutions, and plastic solutions, were also introduced, and now the range of organic scintillators available is very great. The plastic and liquid organic scintillators are generally cheaper to manufacture than inorganic scintillators and can be made in relatively large sizes. They are generally much faster in response time than most inorganic scintillators and, being transparent to their own scintillation light, can be used in very large sizes. Table 29.6 gives the essential details of a large range of plastic and liquid scintillators. The widest use of organic scintillators is probably in the field of liquid-scintillation counting, where the internal counting of tritium, 14C, 55Fe, and other emitters of low-energy beta particles at low activity levels is being carried out on an increasing scale. Biological samples of many types
have to be incorporated into the scintillator, and it is necessary to do this with the minimum quenching of the light emitted, minimum chemiluminescence, as well as minimum effort—the last being an important factor where large numbers of samples are involved. At low beta energies the counting equipment is just as important as the scintillator. Phosphorescence is reduced to a minimum by the use of special vials and reflectors. Chemiluminescence, another problem with biological and other samples, is not completely solved by the use of two photomultipliers in coincidence viewing the sample. This must be removed or reduced, for example, by removing alkalinity and/or peroxides by acidifying the solution before mixing with the scintillator.
29.2.2.3 Loaded Organic Scintillators

To improve the detection efficiency of scintillators for certain types of particles or for ionizing or nonionizing radiations, small quantities of some substances can be added to scintillators without greatly degrading the light output. It must be borne in mind that in nearly all cases there is a loss of light output when foreign substances are added to an organic scintillator, but the gain in detection efficiency may be worth a slight drop in this output. Suitable loading materials are boron (both natural boron and boron enriched in 10B) and gadolinium—these increase the detection efficiency for neutrons. Tin and lead improve the detection efficiency for gamma rays and have been used in both liquid and plastic scintillators.
29.2.2.4 Plastic Scintillators

Certain plastics, such as polystyrene and polyvinyltoluene, can be loaded with small quantities of substances such as p-terphenyl that cause them to scintillate when bombarded by ionizing particles or ionizing radiation. An acrylic such as methyl methacrylate can also be doped to produce a scintillating material, though not with the same high light output as the polyvinyltoluene-based scintillators; it can, however, be produced much more cheaply, and it can be used for many high-energy applications. Plastic scintillators can be molded into the most intricate shapes to suit a particular experiment, and their inertness to water, normal air, and many chemical substances allows their use in direct contact with the activity to be measured. Because their constituents are of low atomic number, organic scintillators are preferred to inorganics such as NaI (Tl) or CsI (Tl) for beta counting: the number of beta particles scattered out of an organic scintillator without causing an interaction is about 8 percent, whereas in a similar NaI (Tl) crystal the number scattered out would be 80–90 percent. When used to detect X- or gamma rays, organic scintillators differ in their response compared with inorganic
scintillators. Whereas inorganic scintillators in general show three main types of response—photoelectric, Compton, and pair production—because of the high Z (atomic number) of their materials, the low-Z characteristics of the basic carbon and similar components in organic scintillators lead only to Compton reactions, except at very low energies, with the result that for a monoenergetic gamma emitter the spectrum produced is a Compton distribution. For a study of the basic interactions between gamma and X-rays and scintillation materials, see Price (1964); these reaction studies are beyond the scope of this chapter. The ability to produce simple Compton distribution spectra has proved of considerable use in cases in which one or two isotopes have to be measured at low intensities and a large inorganic NaI (Tl) detector might be prohibitively expensive. Such is the case with whole-body counters used to measure the 40K and 137Cs present in the human body—the 40K being the natural activity present in all potassium, the 137Cs the result of fallout from the many atomic-bomb tests. Similarly, in measuring the potassium content of fertilizer, a plastic scintillator can carry this out more cheaply than an inorganic detector. The measurement of moisture in soil by gamma-ray transmission through a suitable sample is also performed more easily with a plastic scintillator, since the fast decay time of the organics compared with, say, NaI (Tl) allows higher counting rates to be used, with consequent reduction of statistical errors.
29.2.2.5 Scintillating Ion-exchange Resins

By treating the surfaces of plastic scintillating spheres in suitable ways, very small amounts of beta-emitting isotopes may be extracted and counted from large quantities of carrier liquid, such as rainwater, cooling water from reactors, effluents, or rivers, rather than having to evaporate large quantities of water to obtain a concentrated sample for analysis.
29.2.2.6 Flow Cells

It is often necessary to continuously monitor tritium, 14C, and other beta-emitting isotopes in aqueous solution, and for this purpose the flow cell developed by Schram and Lombaert, containing crystalline anthracene, has proved valuable. A number of improvements have been made to the design of this cell, resulting in the NE806 flow cell. The standard flow cell is designed for use on a single 2-in.-diameter low-noise photomultiplier and can provide a tritium detection efficiency of 2 percent at a background of 2 c/s and a 14C detection efficiency of 30 percent at a background of 1 c/s.
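Figures like these determine the smallest activity such a cell can usefully detect. A sketch using Currie's widely quoted detection-limit formula — an assumption, since it is not given in this chapter — with the NE806 tritium figures quoted above:

```python
import math

def currie_mda_bq(background_cps, efficiency, count_time_s):
    """Minimum detectable activity from Currie's detection limit,
    L_D = 2.71 + 4.65 * sqrt(B * t) counts (an assumed standard formula)."""
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(background_cps * count_time_s)
    return detection_limit_counts / (efficiency * count_time_s)

# Tritium in the flow cell: 2% efficiency, 2 c/s background, 1000 s count.
print(f"{currie_mda_bq(2.0, 0.02, 1000):.1f} Bq")   # about 10 Bq
```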
29.2.2.7 Photomultipliers

The photomultiplier is the device that converts the light flash produced in the scintillator into an amplified electrical pulse.
It consists generally of an evacuated tube with a flat glass end, onto the inner surface of which is deposited a semitransparent layer of metal with a low "work function," that is, having the property of releasing electrons when light falls on it. The most usual composition of this photocathode, as it is called, is cesium plus some other metal. A series of dynodes, also coated with similar materials, form an electron-optical system that draws secondary electrons away from one dynode and causes them to strike the next one with a minimum loss of electrons. The anode finally collects the multiplied shower of electrons, which forms the output pulse to the electronic system. Gains of 10⁶ to 10⁷ are obtained in this process. Depending on the spectrum of the light emitted from the scintillator, the sensitivity of the light-sensitive photocathode can be optimized by choice of the surface material. With gallium arsenide (GaAs) as the coating, single electrons emitted from the photocathode can be detected. Silicon photodiodes can also be used to detect scintillation light, but because the spectral sensitivity of these devices is in the red region (~500–800 nm), as opposed to the usual scintillator light output of ~400–500 nm, a scintillator such as CsI (Tl) must be used, for which the output can match the spectral range of a silicon photodiode. Light detectors are discussed further in Chapter 21.
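The quoted gains follow from multiplication at successive dynodes; a one-line sketch with assumed illustrative values (the secondary-emission ratios and dynode counts below are not from the text):

```python
# Overall photomultiplier gain is roughly delta**n for n dynodes, each with
# secondary-emission ratio delta (illustrative values).
for delta, n_dynodes in [(4, 10), (5, 10), (4, 12)]:
    print(f"delta={delta}, dynodes={n_dynodes}: gain ~ {delta**n_dynodes:.1e}")
# delta=4 with 10 dynodes gives ~1.0e6; delta=5 gives ~9.8e6 - spanning the
# 10^6 to 10^7 range quoted in the text.
```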
29.2.3 Solid-state Detectors

It was observed earlier that the operation of a solid-state or semiconductor detector can be likened to that of an ionization chamber. A study of the various materials thought possible for use as radiation detectors has been carried out in many parts of the world, and the two materials that proved most suitable were silicon and germanium. Both these materials were under intense development for the transistor industry, so detector researchers were able to make use of the work in progress. Other materials tested, which might later prove valuable, were cadmium telluride (CdTe), mercuric iodide (HgI2), gallium arsenide (GaAs), and silicon carbide. CdTe and HgI2 can be used at room temperature, but to date CdTe has been produced in only relatively small sizes and with great difficulty. Mercuric iodide has been used successfully in the measurement of X-ray fluorescence in metals and alloy analysis. In this book no attempt has been made to go deeply into the physics of semiconductor detectors, but it is useful to give a brief outline of their operation, especially compared with the gas-ionization chamber, of which they are often regarded as the solid-state equivalent. First, a much smaller amount of energy is required to release electrons (and therefore holes) in all solids than in gases. An average energy of only 3 eV is required to produce an electron–hole pair in germanium (and 3.7 eV in silicon), whereas about 30 eV is required to produce the equivalent in gases. This relatively easy production of free holes and
electrons in solids results from the close proximity of atoms, which causes many electrons to exist at energy levels just below the conduction band. In gases the atoms are isolated and the electrons much more tightly bound. As a result, a given amount of energy absorbed from incident radiation produces more free charges in solids than in gas, and the statistical fluctuations become a much smaller fraction of the total charge released. This is the basic reason semiconductor detectors produce better energy resolution than gas detectors, especially at high energies. At low energies, because the signal from the semiconductor is some 10 times larger than that from the gas counter, the signal-to-noise ratio is enhanced. To see what is needed to obtain an efficient detector from a semiconductor material, consider a slice of silicon 1 cm² in area and 1 mm thick with a potential applied across its faces. If the resistivity of this material is 2,000 Ω·cm, with ohmic contacts on each side, the slice would behave like a resistor of 200 Ω, and if 100 V were applied across such a resistor, Ohm's law states that a current of 0.5 A would pass. If radiation now falls on the silicon slice, a minute extra current will be produced, but this would be so small compared with the standing 0.5 A current that it would be undetectable. This is different from the gas-ionization chamber, where the standing current is extremely small. The solution to this problem is provided by semiconductor junctions. The operation of junctions depends on the fact that a mass-action law compels the product of electron and hole concentrations to be constant for a given semiconductor at a fixed temperature. Therefore, heavy doping with a donor such as phosphorus not only increases the free-electron concentration, it also depresses the hole concentration to satisfy the relation that the product must have a value dependent only on the semiconductor. For example, silicon at room temperature has the relation n × p ≈ 10²⁰, where n is the number of electrons and p is the number of holes. Hence in a region doped with donors to a concentration of 10¹⁸, the number of holes will be reduced to about 10². McKay, of Bell Telephone Laboratories, first demonstrated in 1949 that if a reverse-biased p–n junction is formed on the semiconductor, a strong electric field may be provided across the device, sweeping free holes away from the junction on the p-side (doped with boron; see Figure 29.5) and electrons away from it on the n-side (doped with phosphorus). A region is produced that is free of holes or electrons and is known as the depletion region. However, if an ionizing particle or quantum of gamma energy passes through the region, pairs of holes and electrons are produced that are collected to produce a current in the external circuit. This is the basic operation of a semiconductor detector. The background signal is due to the collection of any pairs of holes and electrons produced by thermal processes. By cooling the detector with liquid nitrogen to about 77 K, most of this background is removed, but in practical detectors the effect of surface contaminants on those surfaces not forming the diode can be acute.
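A minimal sketch of the mass-action arithmetic in the text:

```python
# Mass-action law from the text: n * p is fixed (~1e20 for silicon at room
# temperature, with concentrations per cm^3).
NP_PRODUCT_SI = 1e20

def minority_holes(donor_concentration):
    """Hole concentration once heavy n-type doping sets n ~ N_donor."""
    electrons = donor_concentration
    return NP_PRODUCT_SI / electrons

print(f"{minority_holes(1e18):.0e} holes/cm^3")   # 1e2, as quoted in the text
```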
Figure 29.5 Schematic diagram of semiconductor detector.
Various methods of avoiding these problems, such as the use of guard rings, have reduced much of this difficulty. However, the effects of very small amounts of oxygen and the like can have devastating results on a detector, and most are enclosed in a high-vacuum chamber. By doping a germanium or silicon crystal with lithium (an interstitial donor), which is carried out at moderate temperatures using an electric field across the crystal, the acceptors can be almost completely compensated in p-type silicon and germanium. This allows the preparation of relatively large detectors suitable for high-energy charged-particle spectroscopy. By this means coaxial detectors with volumes up to about 100 cm³ have been made, and these have revolutionized gamma-ray spectroscopy, since they can separate energy lines in a spectrum that earlier NaI (Tl) scintillation spectrometers could not resolve. New work on purifying germanium and silicon has resulted in the manufacture of detectors of such super-pure quality that lithium drifting is not required. Detectors made from such material can be cycled from room temperature to liquid-nitrogen temperature and back when required without the permanent damage that would occur with lithium-drifted detectors. Surface-contamination problems, however, still require them to be kept in vacuo. Such material is the purest ever produced—about 1 part in 10¹³ of contaminants.
29.2.4 Detector Applications

In all radiation-measuring systems there are several common factors that apply to the measurements to be carried out:

1. Geometry.
2. Scattering.
3. Backscattering.
4. Absorption.
5. Self-absorption.

Geometry Since any radioactive source emits its products in all directions (in 4π geometry), it is important to be able
to calculate how many particles or quanta may be collected by the active volume of the counter. If we consider a point source of radiation, as in Figure 29.6, all the emitted radiation from the source will pass through an imaginary sphere with the source at center, provided there is no absorption. Also, for any given sphere size, the average radiation flux, in radiations per unit time per unit sphere surface area, is constant over the entire surface. The geometry factor G can therefore be written as the fraction of the total 4π solid angle subtended by source and detector. For the case of a point source at a distance d from a circular window of radius r, we have the following relation:

G = 0.5(1 − cos φ) = 0.5{1 − 1/[1 + (r²/d²)]^½}  (29.13)
  ≈ D²/(16d²)  (29.14)

where φ is the half-angle subtended at the source by the window and D = 2r is the window diameter; the approximation (29.14) holds when d is much larger than r.

Figure 29.6 Geometry of radiation collection.

Scattering Particles and photons are scattered by material through which they pass, and this effect depends on the type of particle or photon, its energy and mass, and the type, mass, and density of the material traversed. What we are concerned with here is not the loss of particle energy or of particles themselves as they pass through a substance but the effects caused by particles deflected from the direct path between radioactive source and detector. It is found that some particles are absorbed into the material surrounding the source. Others are deflected away from it but are later rescattered into the detector, so increasing the number of particles in the beam. Some of these deflected particles can be scattered at more than 90° to the beam striking the detector—these are called backscattered, since they are scattered back in the direction from which they came. Scattering also occurs with photons, as is particularly demonstrated by the increase in counting rate of a beam of gamma rays when a high-Z material such as lead is inserted into the beam.

Backscattering Backscattering increases with increasing atomic number Z and with decreasing energy of the primary particle. For the most commonly used sample planchets (platinum), the backscattering factor has been determined as 1.04 for an ionization chamber inside which the sample is placed, known as a 50 percent chamber.

Absorption Because the particles to be detected may be easily absorbed, it is preferred to mount source and detector in an evacuated chamber or to insert the source directly into the gas of an ionization chamber or proportional counter. The most popular method at present uses a semiconductor detector, which allows the energies to be determined very accurately when the detector and source are operated in a small evacuated cell. This is especially important for alphas.
Self-absorption If visible amounts of solid are present in a source, losses in counting rate may be expected because of self-absorption of the particles emitted from the lower levels of the source, which are unable to leave the surface of the source. Nader et al. give expressions for the self-absorption factor for alpha particles in a counter with 2π geometry (this is the gas counter with the source inside the chamber):

fs = 1 − s/(2pR)  for s < pR  (29.15)

and

fs = 0.5pR/s  for s > pR  (29.16)
where s is the source thickness, R the maximum range of alpha particles in source material, and p the maximum fraction of R that particles can spend in source and still be counted. Radiation Shield The detector may be housed in a thick radiation shield to reduce the natural background that is found everywhere to a lower level where the required measurements can be made. The design of such natural radiation shields is a subject in itself and can range from a few centimeters of lead to the massive battleship steel or purified lead used for whole-body monitors, where the object is to measure the natural radiation from a human body.
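A sketch putting Eqs. (29.13)–(29.16) above into code; all dimensions are illustrative:

```python
import math

def geometry_factor(r, d):
    """Point-source geometry factor, Eq. (29.13); for d >> r it tends to
    D**2 / (16 * d**2) with D = 2r, Eq. (29.14)."""
    return 0.5 * (1.0 - 1.0 / math.sqrt(1.0 + (r / d) ** 2))

def self_absorption(s, R, p):
    """Alpha self-absorption factor, Eqs. (29.15)-(29.16):
    s = source thickness, R = maximum alpha range in the source material,
    p = maximum fraction of R a particle can spend in the source and still
    be counted (same units throughout)."""
    if s < p * R:
        return 1.0 - s / (2.0 * p * R)   # Eq. (29.15)
    return 0.5 * p * R / s               # Eq. (29.16)

# A 10 mm radius window at 100 mm: exact vs. approximate geometry factor.
print(geometry_factor(10, 100), 20**2 / (16 * 100**2))  # ~0.00248 vs 0.00250
# Self-absorption for sources of half, equal, and twice the thickness p*R.
print([round(self_absorption(s, 10.0, 0.8), 3) for s in (4.0, 8.0, 16.0)])
# [0.75, 0.5, 0.25]
```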
29.2.4.1 Alpha-detector Systems

The simplest alpha detector is the air-filled ionization chamber, used extensively in early work on radioactivity but now used only for alpha detection in health-physics surveys of spilled activity on benches and the like—and even there, more sensitive semiconductor or scintillation counters are now usual. Thin-window ionization or gas-proportional counters can also be used, as can internal counters, in which the sample is inserted into the active volume of a gas counter. Because of the intense ionization produced by alpha particles, it is possible to count them in high backgrounds of other radiation, such as betas and gamma rays, by means of suitable discrimination circuits.

Ionization Chambers Because alpha particles are very readily absorbed in a small thickness of gas or solid, an early method of counting them involved placing the radioactive source inside a gas counter with no intermediate window. Figure 29.7 shows the schematic circuit of such a counter, in which the gas, generally pure methane (CH4), is allowed to flow through the counting volume. The counter
Figure 29.7 Gas flow-type proportional counter system.

can be operated in the ionization-counter region, but more usually the high-voltage supply is raised so that it operates in the proportional-counter region, allowing discrimination between alpha particles and any beta particles that may be emitted from the same source, and improving the signal-to-noise ratio. Ionization chambers are used in two different ways. Depending on whether the time constants of the associated electronic circuits are small or large, they are either counting, that is, responding to each separate ionizing event, or integrating, that is, collecting the ionization over a relatively long period of time. The counting mode is little used nowadays except for alpha particles.

Gas Proportional Counters Thin-window proportional counters allow alphas to be counted in the presence of high beta and gamma backgrounds, since these detectors can discriminate between the high ionization density caused by alpha particles passing through the gas of the counter and the relatively weak ionization produced by beta particles or gamma-ray photons. With counters using flowing gas, the source may be placed inside the chamber and the voltage across the detector raised to give an amplification of 5–50 times. In this case pure methane (CH4) is used, with the same arrangement as shown in Figure 29.7. The detection efficiency of this system is considerably greater than that provided by the thin-window counter with the source external to the counter. This is due (1) to the elimination of the particles lost in penetrating the window and (2) to the improved geometry, whereby all particles that leave the source and enter the gas are counted.

Geiger Counters The observations made about alpha-particle detection by gas proportional counters apply also to Geiger counters. That is, thin entrance windows or internally mounted sources are usable, and the system of Figure 29.7 is also applicable to Geiger-counter operation. However, some differences have to be taken into account. Since operation in the Geiger region means that no differences in particle energy are measured, all particles are able to trigger the counter and no energy resolution is possible. On the other hand, the counter operating in the Geiger region is so sensitive that high electronic amplification is not required, and in many cases the counter will produce an output sufficient to trigger the scaler unit directly. To operate the system in its optimum condition, some amplification is desirable and should be variable. This is to set the operating high-voltage supply and the amplifier gain on the Geiger plateau (see Section 29.2.1.1) so that subsequent small variations in the high-voltage supply do not affect the counting characteristics of the system.
536
incident on it, thus acting as a spectrometer. Extremely thin CsI (Tl) detectors have been made by vacuum deposition and used where the beta and gamma background must be reduced to very small proportions yet the heavy particles of interest must be detected positively. Inorganic single crystals are expensive, so when large areas of detector must be provided, either the zinc sulfide powder screen or a large-area very thin plastic scintillator is used. The latter is much cheaper to produce than the former, since the thin layer of zinc sulfide needs a great deal of labor to produce and it is not always reproducible. In health-physics applications, in monitors for hand, foot, or bench-top contamination, various combinations of plastic scintillator, zinc sulfide, or sometimes anthracene powder scintillators are used. If the alpha emitter of interest is mixed into a liquid scintillator, the geometrical effect is almost completely eliminated and, provided that the radioactive isotope can be dissolved in the liquid scintillator, maximum detection efficiency is obtained. However, not all radioactive sources can be introduced in a chemical form suitable for solution, and in such cases the radioactive material can be introduced as a finely divided powder into a liquid scintillator with a gel matrix; such a matrix would be a very finely divided, extremely pure grade of silica. McDowell (1980) has written a useful review of alpha liquid-scintillation counting from which many references to the literature may be obtained.
29.2.4.2 Detection of Beta Particles

Ionization Chambers Although ionization chambers were much used for the detection of beta particles in the early days, they are now used only for a few special purposes:

1. The calibration of radioactive beta sources for surface dose rate. An extrapolation chamber varies the gap between the two electrodes of a parallel-plate ionization chamber, and the ionization current per unit gap is plotted against the air gap as the gap is reduced. This extrapolates to zero gap with an uncertainty that is seldom as much as 1 percent, giving a measure of the absolute dose from a beta source.
2. Beta dosimetry. Most survey instruments used to measure dose rate incorporate some sort of device to allow beta rays to enter the ionization chamber. This can take the form of a thin window with a shutter to cut off the betas and allow only gamma rays to enter when mixed beta–gamma doses are being measured. However, the accuracy of such measurements leaves a great deal to be desired, and the accurate measurement of beta dose rates is a field where development is needed.

Proportional Counters Beta particles may originate from three different positions in a proportional counter: first, from part of the gaseous content of the counter, called internal
counting; second, from a solid source inside the counter itself; and third, from an external source, with the beta particles entering the counter through a thin window. The first method, internal counting, involves mixing the radioactive source in the form of a gas with a gas suitable for counting, or transforming the radioactive source directly into a suitable gaseous form. This is the case when detection of 14C involves changing solid carbon into gaseous carbon dioxide (CO2), acetylene (C2H2), or methane (CH4), any of which can be counted directly in the proportional counter. This method is used in measurement of radiocarbon to determine the age of the carbon: radiocarbon dating (see Section 4.4.2.1). The second method involves the use of solid sources introduced by means of a gas-tight arm or drawer so that the source is physically inside the gas volume of the counter. This method was much used in early days but is now used only in exceptional circumstances. The third method, with the source external to the counter, is now the most popular, since manufacturers have developed counters with extremely thin windows that reduce only slightly the energies of the particles crossing the boundary between the gas of the counter and the air outside. Thin mica or plastic windows can be on the order of a few mg/cm², and they are often supported by wire mesh to allow large areas to be accommodated. A specialized form of proportional counter is the 2π/4π type used in the precise assay of radioactive sources. This is generally in the form of an open-ended pillbox, with the source deposited on a thin plastic sheet, either between two counters face to face (the 4π configuration) or with a single counter looking at the source (the 2π configuration). Such counters are generally made in the laboratory, using plastic as the body, onto which a conducting layer of suitable metal is deposited to form the cathode. One or more wires form the anode. A typical design is shown in Figure 29.8.

Geiger Counters The Geiger counter has been the most popular detector for beta particles, for a number of reasons. First, it is relatively cheap, whether manufactured in the laboratory or bought as a commercially available component. Second, it requires little in the way of special electronics to make it work, as opposed to scintillation or solid-state detectors. Third, the later halogen-filled versions have quite long working lives. To detect beta particles, some sort of window must be provided to allow the particles to enter the detector. The relations between the various parameters affecting the efficiency with which a particular radioactive source is detected by a particular Geiger counter have been studied extensively. Zumwalt (1950), in an early paper, provided the basic results that have been quoted in later books and papers, such as Price (1964) and Overman and Clark (1960). In general, the observed counting rate R and the corresponding disintegration rate A can be related by the equation
R = YA  (29.17)
Figure 29.8 Typical design of 4π counter.
where Y is a factor representing the counting efficiency. Some 11 factors are contained in Y; these are set out in the books mentioned above.
29.2.4.3 Detection of Gamma Rays (100 keV Upward)

Ionization Chambers Dosimetry and dose-rate measurement are the main applications in which ionization chambers are used to detect gamma rays today. They are also used in some industrial measuring systems, although scintillation and other counters are now generally replacing them.
A device called the free air ionization chamber (see Figure 29.9) is used in X-ray work to calibrate the output of X-ray generators. This chamber directly measures the charge liberated in a known volume of air, with the surrounding medium being air. In Figure 29.9 only ions produced in the volume V1 + V + V2 defined by the collecting electrodes are collected and measured; any ionization outside this volume is not measured. However, due to the physical size of the chamber and the amount of auxiliary equipment required, this device is only used in standardizing laboratories, where small Bragg–Gray chambers may be calibrated by comparison with the free air ionization chamber.
Figure 29.9 Free air ionization chamber.
The Bragg–Gray chamber depends on the principle that if a small cavity exists in a solid medium, the energy spectrum and the angular distribution of the electrons in the cavity are not disturbed by the cavity, and the ionization occurring in the cavity is characteristic of the medium, provided that:

1. The cavity dimensions are small compared with the range of the secondary electrons.
2. Absorption of the gamma radiation by the gas in the cavity is negligible.
3. The cavity is surrounded by an equilibrium wall of the medium.
4. The dose rate over the volume of the medium at the location of the cavity is constant.

The best-known ionization chamber is that designed by Farmer; this has been accepted worldwide as a substandard to the free air ionization chamber for measuring dose and dose rates for human beings.

Solid-state Detectors The impact of solid-state detectors on the measurement of gamma rays has been dramatic; the improvement in energy resolution compared with a NaI(Tl) detector can be seen in Figure 29.10, which shows how the solid-state detector resolves the 1.17 and 1.33 MeV lines from 60Co, with the corresponding NaI(Tl) scintillation detector result superimposed. For energy resolution the solid-state detector provides a factor of 10 improvement on the scintillation counter. However, there are a number of disadvantages that prevent this detector from superseding the scintillation detector. First, the present state of the art limits the size of solid-state detectors to some 100 cm3 maximum, whereas scintillation crystals of NaI(Tl) may be grown to sizes of 76 cm diameter by 30 cm, and plastic and liquid scintillators can be even larger. Second, solid-state detectors of germanium have to be operated at liquid-nitrogen temperature and in vacuum. Third, large solid-state detectors are very expensive. As a result, practice has tended toward the use of solid-state detectors when the problem is the determination of energy spectra, and scintillation counters where extremely low levels of activity must be detected. A very popular combined use of solid-state and scintillation detectors is embodied in the technique of surrounding the relatively small solid-state detector, in its liquid-nitrogen-cooled and evacuated cryostat, with a suitable scintillation counter, which can be an inorganic
Figure 29.10 Comparison of energy resolution by different detectors.
crystal such as NaI(Tl) or a plastic or liquid scintillator. By operating the solid-state detector in anticoincidence with the annular scintillation detector, Compton-scattered photons from the primary detector into the anticoincidence shield (as it is often called) can be electronically subtracted from the solid-state detector’s energy response spectrum.
29.2.4.4 The Detection of Neutrons

The detection of nuclear particles usually depends on the deposition of ionization energy, and since neutrons are uncharged they cannot be detected directly. Neutron sensors therefore need to incorporate a conversion process by which the incoming particles are changed to ionizing species, and nuclear reactions such as 235U(n, fission) + ~200 MeV or 10B(n, α)7Li + ~2 MeV are often used. Exothermic reactions are desirable because of the improved signal-to-noise ratio produced by the increased energy per event, but there are limits to the advantages that can be gained in this way. In particular, these reactions are not used for neutron spectrometry, because of uncertainty in the proportion of the energy carried by the reaction products; for that purpose the detection of proton recoils in a hydrogenous material is often preferred. There are also many ways of detecting the resultant ionization. It may be done in real time with, for example, solid-state semiconductor detectors, scintillators, or ionization chambers, or it may be carried out at some later, more convenient time by measuring the activation generated by the neutrons in a chosen medium (such as 56Mn from 55Mn). The choice of technique depends on the information required and the constraints of the environment. The latter are often imposed by the neutrons themselves. For example, boron-loaded scintillators can be made very sensitive and are convenient for detecting low fluxes, but scintillators and photomultipliers are vulnerable to damage by neutrons and are sensitive to the gamma fluxes which tend to accompany
them. They are not suitable for the high-temperature, high-flux applications found in nuclear reactors. Neutron populations in reactors are usually measured with gas-filled ionization chambers. Conversion is achieved by fission in 235U oxide applied as a thin layer (~1 mg/cm²) to the chamber electrode(s). The 10B reaction is also used, natural or enriched boron being present either as a painted layer or as BF3 gas. Ionization chambers can be operated as pulse devices, detecting individual events; as DC generators, in which events overlap to produce a current; or in the so-called “current fluctuation” or “Campbell” mode, in which neutron flux is inferred from the magnitude of the noise present on the output as a consequence of the way in which the neutrons arrive randomly in time. Once again the method used is chosen to suit operational requirements. For example, the individual events due to neutrons in a fission chamber are very much larger than those due to gammas, but the gamma photon arrival rate is usually much greater than that of the neutrons. Thus, pulse counters have good gamma rejection compared with DC chambers at low fluxes. On the other hand, the neutron-to-gamma flux ratio tends to improve at high reactor powers, while counting losses increase and gamma pulse pile-up tends to simulate neutron events; DC operation therefore takes over from pulse measurement at these levels. The current fluctuation mode gives particularly good gamma discrimination because the signals depend on the mean square charge per event, i.e., the initial advantage of neutrons over gammas is accentuated. Such systems can work to the highest fluxes, and it is now possible to make instruments which combine pulse counting and current fluctuation on a single chamber and which cover a dynamic range of more than 10 decades. The sensitivity of a detector is proportional to the probability of occurrence of the expected nuclear reaction and can conveniently be described in terms of the cross-section
of a single nucleus for that particular reaction. The unit of area is the barn, that is, 10⁻²⁴ cm². 10B has a cross-section of order 4,000 b to slow (thermal) neutrons, whilst that of 235U for fission is only ~550 b. In addition, the number of reacting atoms present in a given thickness of coating varies inversely with atomic weight, so that, in principle, 10B sensors are much more sensitive than those that depend on fission. This advantage is offset by the lower energy per event and by the fact that boron is burnt up faster at a given neutron flux, that is, such detectors lose sensitivity with time. Neutrons generate activation in most elements, and if detector constructional materials are not well chosen, this activation can produce residual signals analogous to gamma signals and seriously shorten the dynamic range. The electrodes and envelopes of ion chambers are therefore made from high-purity materials that have small activation cross-sections and short daughter half-lives. Aluminum is usually employed for low-temperature applications, but some chambers have to operate at ~550°C (a dull red), and these use titanium and/or special low-manganese, low-cobalt stainless steels. Activity due to fission products from fissile coatings must also be considered and is a disadvantage of fission chambers. The choice of insulators is also influenced by radiation and temperature considerations. Polymers deteriorate at high fluences, but adequate performance can be obtained from high-purity polycrystalline alumina and from artificial sapphire, even at 550°C. Analogous problems are encountered with cables, and special designs with multiple coaxial conductors insulated with high-purity compressed magnesia have been developed. Electrode/cable systems of this type can provide insulation resistances of order 10⁹ Ω at 550°C and are configured to eliminate electrical interference even when measuring microamp signals in bandwidths of order 30 MHz under industrial plant conditions. Figure 29.11 shows
Figure 29.11 Typical reactor control ionization chamber. Courtesy of UKAEA, Winfrith.
the construction of a boron-coated, gamma-compensated DC chamber designed to operate at 550°C and 45 bar pressure in the AGR reactors. The gas filling is helium, and β activity from the low-manganese steel outer case is screened by thick titanium electrodes. The diameter of this chamber is 9 cm; it is 75 cm long and weighs 25 kg. By contrast, Figure 29.12 shows a parallel-plate design, some three hundred of which were used to replace fuel “biscuits” to determine flux distributions in the ZEBRA experimental fast reactor at AEE Winfrith. Boron trifluoride (BF3) proportional counters are used for thermal-neutron detection in many fields; they are convenient and sensitive, and are available commercially with sensitivities between 0.3 and 196 s⁻¹ (unit flux)⁻¹. They tend to be much more gamma sensitive than pulse-fission chambers because of the relatively low energy per event from the boron reaction, but this can be offset by the larger sensitivity. A substantial disadvantage for some applications is that they have a relatively short life in terms of total dose (gammas plus neutrons), and in reactor applications it may be necessary to provide withdrawal mechanisms to limit this dose at high power. Proton-recoil counters are used to detect fast neutrons. These depend on the neutrons interacting with a material in the counter in a reaction of the (n, p) type, in which a proton is emitted that, being highly ionizing, can be detected. The material in the counter can be either a gas or a solid. It must have low-Z nuclei, preferably hydrogen, to allow the neutron to transfer energy. If a gas, the most favored are hydrogen or helium, because they contain the greatest number of
Figure 29.12 Pulse-fission ionization chamber for flux-distribution measurement. Courtesy of UKAEA, Winfrith.
nuclei per unit volume. If a solid, paraffin, polyethylene, or a similar low-Z material can be used to line the inside of the counter. This type of counter was used by Hurst et al. to measure the dose received by human tissue from neutrons in the range 0.2–10 MeV. It was a three-unit counter (the gas being methane), and two individual sections contained a thin (13.0 mg/cm²) layer of polyethylene and a thick (100 mg/cm²) layer of polyethylene. The energy responses of the three sources of protons combine in such a way as to give the desired overall response, which matches quite well the tissue-dose curve over the energy range 0.2–10 MeV. This counter also discriminates well against the gamma rays which nearly always accompany neutrons, especially in reactor environments. Improvements in gas purification and counter design have led to the development of 3He-filled proportional counters. 3He pressures of 10–20 atm allow the use of these counters as direct neutron spectrometers to measure energy distributions, and they are found in most reactor centers all over the world for reactor neutron spectrum analysis. As explained, the measurement of neutrons requires that they interact with substances that then emit charged particles. For thermal-energy neutrons (around 0.025 eV), a layer of fissionable material such as 235U will produce reaction products in the form of alpha particles, fission products, helium ions, and so on. However, more suitable materials for producing reaction products are 6Li and 10B. These have relatively high probabilities of a neutron producing a reaction, corresponding to cross-sections of 945 barns for 6Li and 3,770 barns for 10B. By mixing 6Li or 10B with zinc sulfide powder and compressing thin rings of the mixture into circular slots in a methyl methacrylate disc, the reaction products emitted when a neutron is absorbed in a 6Li or 10B atom strike adjacent ZnS particles and produce light flashes, which can be detected by the photomultiplier to which the detector is optically coupled. Figure 29.13 shows such a neutron-detector system. Neutron-proton reactions allow the detection of neutrons from thermal energies up to 200 MeV and higher. For this reaction to take place a hydrogen-type material is required with a high concentration of protons. Paraffin, polyethylene,
Figure 29.13 Thermal neutron scintillation counter.
or gases such as hydrogen, 3H, methane, etc., provide good sources of protons, and by mixing a scintillator sensitive to protons (such as ZnS) with such a hydrogenous material, the protons produced when a neutron beam interacts with the mixture can be counted by the flashes of light produced in the ZnS. Liquids can also be used, in which case the liquid scintillator is mixed with boron, gadolinium, cadmium, and so on, in chemical forms which dissolve in the scintillator. Large tanks of 500–1,000 liters have been made for high-energy studies, including the study of cosmic-ray neutrons. 6Li can also be used dissolved in a cerium-activated glass, and this has proved a very useful neutron detector, as the glass is inert to many substances. This 6Li glass scintillator has proved very useful for studies in neutron radiography, where the neutrons are used in the manner of an X-radiograph to record on a photographic film the image produced in the glass scintillator by a beam of neutrons. Another neutron detector is the single crystal of lithium iodide activated with europium, 6LiI(Eu). When this crystal is cooled to liquid-nitrogen temperature it can record the spectrum of a neutron source by pulse-height analysis. This is also possible with a special liquid scintillator, NE213 (xylene based), which has been adopted internationally as the standard scintillator for fast-neutron spectrometry from 1 to 20 MeV. One of the problems in neutron detection is the presence of a gamma-ray background in nearly every practical case. Most of the detectors described here have the useful ability of being relatively insensitive to gamma rays, with the exception of LiI(Eu), which, because of its higher atomic number due to the iodine, is quite sensitive to gamma rays. By reducing the size of the scintillator to about 4 mm square by 1 mm thick and placing it on the end of a long, thin light guide, so that the detector lies at the center of a polyethylene sphere, a detector is produced that can measure neutron dose rate with a response very close to that of human tissue. Figure 29.14 shows a counter of this kind, which is nearly isotropic in its response (being spherical) and has much reduced sensitivity to gamma rays. This is known as the Bonner sphere; the diameter of the sphere can be varied between 10 and 30 cm to cover the whole energy range.
Figure 29.14 Sphere fast-neutron detector.
For thermal neutrons (E = 0.025 eV), an intermediate reaction such as 6Li(n, α) or 10B(n, α) or fission can be used, a solid-state detector being employed to count the secondary particles emitted in the reaction. For fast neutrons, a radiator in which the neutrons produce recoil protons can be mounted close to a solid-state detector, and the detector counts the protons. By sandwiching a thin layer of 6LiF between two solid-state detectors and summing the coincident alpha and tritium pulses to give an output signal proportional to the energy of the incident neutron plus 4.78 MeV, the response of the assembly with respect to incident neutron energy was found to be nearly linear.
29.3 Electronics

A more general treatment of the measurement of electrical quantities is given in Chapter 20. We concentrate here on aspects of electronics that are particularly relevant to nuclear instrumentation.
29.3.1 Electronics Assemblies

Although it is perfectly feasible to design a set of electronics to perform a particular task—and indeed this is often done for dedicated systems in industry—the more usual approach is to incorporate a series of interconnecting individual circuits into a common frame. This permits a variety of arrangements to be made by plugging the required elements into this frame, which generally also provides the necessary power supplies. This “building-block” system has become standardized worldwide under the title of NIM and is based on the work of the U.S. Atomic Energy Commission (AEC) Committee on Nuclear Instrument Modules, presented in Publication TID-20893. The basic common frame is 483 mm (19 in.) wide, and the plug-in units are of standard dimensions, a single module being 221 mm high × 34.4 mm wide × 250 mm deep (excluding the connector). Modules can be in widths of one, two, or more multiples of 34.4 mm; most standard units are single or double width. The rear connectors that plug into the standard NIM bin have a standardized arrangement for obtaining positive and negative stabilized supplies from the common power supply, which is mounted at the rear of the NIM bin. The use of a standardized module system allows units from a number of different manufacturers to be used in the same bin, since it may not be possible or economic to obtain all the units required from one supplier. Some 70 individual modules are available from one manufacturer, which gives some idea of the variety. A typical arrangement is shown in Figure 29.15, where a scintillation counter is used to measure the gamma-ray energy spectrum from a small source of radioactivity. The detector could consist of a NaI(Tl) scintillator optically coupled to the photocathode of a photomultiplier, the whole contained in a light-tight shielded enclosure of metal, with the dynode
resistor chain feeding each of the dynodes (see Section 21.3.1) located in the base of the photomultiplier, together with a suitable preamplifier to match the high-impedance output of the photomultiplier to the lower input impedance of the main pulse amplifier, generally on the order of 50 Ω. This also allows the use of a relatively long coaxial cable to couple the detector to the electronics if necessary. The main amplifier raises the amplitudes of the input pulses to the range of about 0.5–10 V. The single-channel analyzer can then be set to cover a range of input voltages corresponding to the energies of the source to be measured. If the energy response of the scintillator-photomultiplier is linear—as it is for NaI(Tl)—the system can be calibrated using sources of known energies, and an unknown energy source can be identified by interpolation.
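The calibration-by-interpolation step can be shown numerically. The following Python sketch fits a straight line to assumed peak positions for sources of known energy and reads off an unknown line; the channel numbers are invented for illustration.

# Hedged sketch of linear energy calibration for a NaI(Tl) system.
known_energies_keV = [662.0, 1173.0, 1332.0]  # e.g., 137Cs and 60Co lines
peak_channels = [221.0, 392.0, 445.0]         # measured peak positions (assumed)

# Least-squares straight line E = a*channel + b
n = len(peak_channels)
sx = sum(peak_channels)
sy = sum(known_energies_keV)
sxx = sum(c * c for c in peak_channels)
sxy = sum(c * e for c, e in zip(peak_channels, known_energies_keV))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

unknown_peak_channel = 310.0                  # peak from the unknown source
print(f"Estimated energy: {a * unknown_peak_channel + b:.0f} keV")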
29.3.2 Power Supplies

The basic power supplies for nuclear gauging instruments are of two classes: the first supplies relatively low dc voltages at high currents (e.g., 5–30 V at 0.5–50 A), and the second high voltages at low currents (e.g., 200–5000 V at 200 nA–5 mA). Alternatively, batteries, both primary and secondary (i.e., rechargeable), can be used for portable instruments. In general, for laboratory use, dc supplies are obtained by rectifying and smoothing the mains ac supply. In the United Kingdom and most European countries the mains ac power supply is 50 Hz, whereas in the United States and South America it is 60 Hz, but generally a supply unit designed for one frequency can be used on the other. However, mains-supply voltages vary considerably, being 240 V in the United Kingdom and 220 V in most of the EEC countries, with some countries having supplies of 110, 115, 120, 125, 127 V, and so on. The stability of some of these mains supplies can leave much to be desired, and fluctuations of ±50 V have been measured. As nuclear gauging instruments depend greatly on a stable mains supply, the use of special mains-stabilizing devices is almost a necessity for equipment that may have to be used on such varying mains supplies. Two main types of voltage regulator are in use at present. The first uses a saturable inductor in the form of a transformer with a suitably designed air gap in the iron core; this is a useful device where the voltage swing to be compensated is not large. The second type of stabilizer selects a
portion of the output voltage, compares it with a standard, and applies a suitable correction voltage (plus or minus). Some of these units use a motor-driven tapping switch to vary the input voltage to the system—this allows for slow voltage variations. A more sophisticated system uses a semiconductor-controlled voltage supply to add to or subtract from the mains voltage. The simplest power supply is obtained with a transformer and a rectifier. The best results are obtained with a full-wave rectifier (see Figure 29.16) or a bridge rectifier (see Figure 29.17). A voltage-doubling circuit is shown in Figure 29.18. The outputs from either system are then smoothed using a suitable filter, as shown in Figure 29.19. A simple stabilizer may be fitted in the form of a Zener diode, which has the characteristic that the voltage drop across it is almost independent of the current through it. A simple stabilizer is shown in Figure 29.20; Zeners may be used in series to allow quite high voltages to be stabilized. An improved stabilizer uses a Zener diode as a reference element rather than as the actual controller. Such a circuit is
Figure 29.16 Full-wave rectifier.
Figure 29.17 Bridge rectifier.
Figure 29.18 Voltage-doubling circuit.
Figure 29.15 Typical arrangement of electronics. SC = scintillator; PM = photomultiplier; PA = preamplifier.
Figure 29.19 Smoothing filter.
Figure 29.20 Simple stabilizer.
Figure 29.22 Decade-counting circuit using binary units (flip-flops).
Figure 29.21 Improved stabilizer.
shown in Figure 29.21: the sensing transistor TR3 compares a fraction of the output voltage, applied to its base, with the fixed Zener voltage. Through the series control transistor TR4, the difference amplifier TR1, TR3, and TR2 corrects the rise or fall in the output voltage that initiated the control signal.
29.3.2.1 High-voltage Power Supplies

High voltages are required to operate photomultipliers, semiconductor detectors, multi- and single-wire gas proportional counters, and the like, and their output must be as stable and as free of pulse transients as possible. For photomultipliers the stability requirements are extremely important, since a variation in overall voltage across a photomultiplier of, say, 0.1 percent can make a 1 percent change in output. For this reason stabilities on the order of 0.01 percent over the range of current variation expected, and 0.001 percent over mains ac supply limits, are typically required in such power supplies.
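The sensitivity of photomultiplier gain to supply voltage can be estimated from the multiplier law. The following Python sketch assumes the overall gain varies roughly as V raised to the power kn for n dynodes; the exponent k is a typical assumed value, not a figure from the text.

# Hedged sketch: fractional gain change ~ k*n * (dV/V) for n dynodes.
n_dynodes = 10
k = 0.7                  # per-dynode gain exponent (assumed typical value)

dV_over_V = 0.001        # a 0.1 percent supply change
dG_over_G = k * n_dynodes * dV_over_V
print(f"{100*dV_over_V:.1f}% HV change -> ~{100*dG_over_G:.1f}% gain change")

Under these assumptions the result is consistent with the rule of thumb quoted above, that a 0.1 percent voltage change can produce nearly a 1 percent change in output.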
29.3.3 Amplifiers

29.3.3.1 Preamplifiers Detectors often have to be mounted in locations quite distant from the main electronics, and if the cable is of any appreciable length, considerable loss of signal could occur. The preamplifier therefore serves more as an impedance transformer, to convert the high impedance of most detector outputs to a sufficiently low impedance to match a 50- or 70-Ω connecting cable. If the detector is a scintillation counter, the output impedance of the photomultiplier is on the order of several thousand ohms, and there would be almost complete loss of signal if the output of the counter were coupled directly to the 50-Ω impedance of the cable—hence the necessity of providing a suitable impedance-matching device.
29.3.4 Scalers

From the earliest days it has been the counting of nuclear events that has demonstrated the decay of radioactive nuclei. Early counters used thermionic valves in scale-of-two circuits, which counted in twos; by using a series of these, scale-of-10 counters may be derived. Solid-state circuits have now reduced such scale-of-10 counters to a single semiconductor chip, far more reliable than the thermionic valve systems and a fraction of the size and current consumption. As scalers are all based on the scale-of-two, Figure 29.22 shows the arrangement for obtaining a scale-of-10, and with present technology many scales-of-10 may be incorporated on a single chip. The basic unit is called a J-K binary because of the lettering on the original large-scale version of the binary unit. Rates of 150–200 MHz may be obtained with modern decade counters, which can be incorporated on a single chip together with auxiliary units such as a standard oscillator, input and output circuits, and means of driving light displays to indicate the count achieved.
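The reset-on-ten feedback that turns four binary stages into a decade counter can be sketched in software. The following Python fragment illustrates the principle of Figure 29.22 only; the class name and reset scheme are invented for the purpose and do not model any particular chip.

# Hedged sketch: four toggling binary stages with reset at ten and a carry.
class DecadeCounter:
    def __init__(self):
        self.flip_flops = [0, 0, 0, 0]     # least significant bit first

    def pulse(self):
        """Apply one input pulse; return True when a carry is generated."""
        for i in range(4):
            self.flip_flops[i] ^= 1        # toggle this binary stage
            if self.flip_flops[i] == 1:    # no ripple into the next stage
                break
        if self.value == 10:               # feedback resets the chain at ten
            self.flip_flops = [0, 0, 0, 0]
            return True
        return False

    @property
    def value(self):
        return sum(bit << i for i, bit in enumerate(self.flip_flops))

units, tens = DecadeCounter(), DecadeCounter()
for _ in range(37):                        # count 37 input pulses
    if units.pulse():
        tens.pulse()
print(tens.value, units.value)             # prints: 3 7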
29.3.5 Pulse-Height Analyzers

If the detector of nuclear radiation has a response governed by the energy of the radiation to be measured, the amplitude of the pulses from the detector is a measure of that energy. To determine the energy, therefore, the pulses from the detector must be sorted into channels of increasing pulse amplitude. Trigger circuits have the property that they can be set to trigger for all pulses above a preset level; such a circuit is acting as a discriminator. By using two trigger circuits, one set at a slightly higher triggering level than the other, and by connecting the outputs to an anticoincidence circuit (see Section 29.3.6.4), the output of the anticoincidence circuit consists only of pulses whose amplitudes fall within the voltage difference between the triggering levels of the two discriminators. Figure 29.23 shows a typical arrangement; Figure 29.24 shows how an input pulse 1, below the triggering level V, produces no output, nor does pulse 3, above the triggering level V + ΔV. However, pulse 2, falling between the two triggering levels, produces an output pulse. ΔV is called the channel width.
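The window logic of Figures 29.23 and 29.24 can be written out directly, as in the following Python sketch. The threshold V, channel width ΔV (dV below), and pulse heights are assumed values chosen only to reproduce the three cases of Figure 29.24.

# Hedged sketch of a single-channel analyzer window.
V = 2.0      # lower discriminator level, volts (assumed)
dV = 0.2     # channel width, volts (assumed)

def sca_output(pulse_height):
    lower_fired = pulse_height > V
    upper_fired = pulse_height > V + dV
    # Anticoincidence: output only when the lower trigger fires alone.
    return lower_fired and not upper_fired

for h in (1.5, 2.1, 3.0):   # pulses 1, 2, and 3 of Figure 29.24
    print(h, "->", "output" if sca_output(h) else "no output")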
Figure 29.23 Single-channel pulse-height analyzer.
Figure 29.25 Block diagram of multichannel analyzer.
Figure 29.26 Principle of Wilkinson ADC using linear discharge of a capacitor.
Figure 29.24 Waveforms and operation of Figure 29.23.
A multichannel analyzer (MCA) allows the separation of pulses from a detector into channels determined by their amplitudes. Early analyzers used up to 20 or more single-channel analyzers set to successively increasing channels. These, however, proved difficult to stabilize, and the introduction of the Hutchinson–Scarrott system of an analog-to-digital converter (ADC) combined with a computer memory enabled more than 8,000 channels to be provided with good stability and adequate linearity. The advantages of the MCA are offset by the fact that its dead time (that is, the time during which the MCA is unable to accept another pulse for analysis) is longer than that of a single-channel analyzer, so it has a lower maximum counting rate. A block diagram of a typical multichannel analyzer is shown in Figure 29.25. The original ADC was that of Wilkinson, in which a storage capacitor is first charged up so that it has a voltage equal to the peak height of the input pulse. The capacitor is then linearly discharged by a constant current, producing a ramp waveform, and during this period a high-frequency clock oscillator is switched on (see Figure 29.26). Thus the period of the discharge and the number of cycles of the clock are proportional to the magnitude of the input pulse. The number of clock pulses recorded during the ramp gives the channel number, and after counting these in a register the classification can be recorded, usually in a ferrite-core memory. A later development is the successive-approximation analog-to-digital converter, due to Gatti, Kandiah, and others, which provides improved channel stability and resolution. ADCs are further discussed in Chapter 20.
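As a rough illustration of the Wilkinson principle, the following Python sketch converts pulse height to a channel number by counting clock cycles during the linear discharge. The clock frequency and discharge rate are assumed values, not taken from any particular instrument.

# Hedged sketch of the Wilkinson ADC conversion.
CLOCK_MHZ = 100.0          # clock frequency (assumed)
DISCHARGE_V_PER_US = 0.5   # constant-current ramp-down rate (assumed)

def wilkinson_channel(pulse_height_volts):
    ramp_time_us = pulse_height_volts / DISCHARGE_V_PER_US
    # Clock pulses counted during the ramp give the channel number.
    return int(ramp_time_us * CLOCK_MHZ)

for v in (0.5, 5.0, 10.0):
    print(f"{v:4.1f} V -> channel {wilkinson_channel(v)}")

Note that the conversion time grows with pulse height, which is one reason the MCA dead time is longer than that of a single-channel analyzer.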
29.3.6 Special Electronic Units

29.3.6.1 Dynode Resistor Chains Each photomultiplier requires a resistor chain to feed each dynode an appropriate voltage so that the electrons ejected by the scintillator light flash can be accelerated and multiplied at each dynode. For counting rates up to about 10⁵ per second, the resistors are usually equal in value and of high resistance, so that the total current taken by the chain of resistors is on the order of a few hundred microamperes. Figure 29.27 shows a typical dynode resistor chain. As has already been pointed out, the high voltage supplying the dynode chain must be extremely stable and free from voltage variations, spurious pulses, etc.; a 0.1 percent change in voltage may give nearly a 1 percent change of output. The mean current taken by the chain is small, and the fitting of bypass capacitors allows pulses of current higher in value than the standing current to be supplied to the dynodes, particularly those close to the anode, where the electron cascade that started from the photocathode has been greatly multiplied. However, as the number of pulses to be counted rises above about 15,000–50,000 per second, it is necessary to increase the standing current through the dynode chain; otherwise space-charge effects cause the voltages on the dynodes to drop, so reducing the gain of the photomultiplier. When the counting rate to be measured is high, or very fast rise times have to be counted, the dynode current may have to be increased from a few hundred microamperes to some 5–10 mA, and a circuit as shown in Figure 29.28 is used. Photomultipliers are also discussed in Chapter 21.
Figure 29.27 Dynode resistor chain for counting rates up to about 15,000 c/s.
Figure 29.28 Dynode resistor chain for high counting rates ~10⁶ c/s.
29.3.6.2 Adders/Mixers

When a number of detector signals have to be combined into a single output, a mixer (or fan-in) unit is used. This sums the signals from up to eight preamplifiers or amplifiers to give a common output. This is used, for example, when the outputs from several photomultipliers mounted on one large scintillator must be combined. Such a unit is also used in whole-body counters, where a number of separate detectors are distributed around the subject being monitored. Figure 29.29 shows the circuit of such a unit.
29.3.6.3 Balancing Units

These units are, in effect, potentiometer units which allow the outputs from a number of separate detectors, often scintillation counters with multiple-photomultiplier assemblies on each large crystal, to be balanced before entering the main amplifier of a system. This is especially necessary when pulse-height analysis is to be performed, since otherwise the variations in output would prevent equal pulse heights from being obtained for a given energy of a spectrum line.
29.3.6.4 Coincidence and Anticoincidence Circuits

A coincidence circuit is a device with two or more inputs that gives an output signal when all the inputs occur at the same time. These circuits have a finite resolving time—that is, the greatest interval of time τ that may elapse between signals for the circuit still to consider them coincident. Figure 29.30 shows a simple coincidence circuit in which diodes are used as switches. If either or both diodes are held at zero potential, the relevant diode or diodes conduct and the output of the circuit is close to ground. However, if both inputs are caused to rise to the supply voltage level Vc by simultaneous application of pulses of height Vc, both diodes cease to conduct and the output rises to the supply level for as long as the input pulses are present. An improved circuit is shown in Figure 29.31. The use of coincidence circuits arose from studies of cosmic rays, since they allowed a series of counters to be used as a telescope to determine the direction of the path of such high-energy particles. By the use of highly absorbing slabs between counters, the nature and energies of these cosmic particles and the existence of showers of simultaneous
Figure 29.29 Fast signal mixer/adder.
Figure 29.30 Simple coincidence circuit.
Figure 29.31 Improved coincidence circuit.
particles were established. The anticoincidence circuit was used in these measurements to determine the energies of particles which were absorbed in dense material such as lead, having triggered a telescope of counters before entering the lead, but not triggering counters below the lead slab. Nowadays coincidence circuits are used with detectors for the products of a nuclear reaction or particles emitted rapidly in cascade during radioactive decay or the two photons emitted in the annihilation of a positron. The latter phenomenon has come into use in the medical scanning
Figure 29.32 Anticoincidence circuit.
Figure 29.33 Use of anticoincidence circuit.
and analysis of living human tissue by positron emission tomography (PET scanning). Anticoincidence circuits are used in low-level counting by surrounding a central main counter, such as would be used in radiocarbon-dating measurements, with a guard counter, so that any signal occurring simultaneously in the main and guard counters is not counted; only signals originating solely in the central main counter are recorded. An anticoincidence circuit is shown in Figure 29.32; Figure 29.33 gives a block diagram of the whole system.
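The resolving-time idea can be expressed directly on lists of event times. The Python sketch below applies an anticoincidence veto of the kind shown in Figure 29.33; the resolving time and the time stamps are invented for illustration.

# Hedged sketch: anticoincidence veto with resolving time tau.
TAU = 1.0e-6   # resolving time, seconds (assumed)

def coincident(t, other_times, tau=TAU):
    return any(abs(t - s) <= tau for s in other_times)

main_counter = [1.0e-3, 2.5e-3, 4.0e-3]      # event times in the main counter
guard_counter = [2.5e-3 + 2.0e-7, 9.0e-3]    # event times in the guard counter

# Keep only main-counter events with no guard partner within tau.
accepted = [t for t in main_counter if not coincident(t, guard_counter)]
print(accepted)   # the 2.5 ms event is vetoed by the guard

A coincidence circuit is the complement: it would keep exactly those events that coincident() reports as having a partner.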
References

Birks, J. B., The Theory and Practice of Scintillation Counting, Pergamon Press, Oxford (1964).
Dearnaley, G., and Northrup, D. C., Semiconductor Counters for Nuclear Radiations, Spon, London (1966).
Eichholz, G. G., and Poston, J. W., Principles of Nuclear Radiation Detection, Wiley, Chichester, U.K. (1979).
Fremlin, J. H., Applications of Nuclear Physics, English Universities Press, London (1964).
Heath, R. L., Scintillation Spectrometry: Gamma-Ray Spectrum Catalogue, Vols. I and II, USAEC Report IDO-16880, Washington, D.C. (1964).
Hoffer, P. B., Beck, R. N., and Gottschalk, A. (eds.), Semiconductor Detectors in the Future of Nuclear Medicine, Society of Nuclear Medicine, New York (1971).
Knoll, G. F., Radiation Detection and Measurement, Wiley, Chichester, U.K. (1979).
McDowell, W. J., in Liquid Scintillation Counting: Recent Applications and Developments (ed. C. T. Peng), Academic Press, London (1980).
Overman, R. T., and Clark, H. M., Radioisotope Techniques, McGraw-Hill, New York (1960).
Price, W. J., Nuclear Radiation Detection, McGraw-Hill, New York (1964).
Segre, E., Experimental Nuclear Physics, Vol. III, Wiley, Chichester, U.K. (1959).
Sharpe, J., Nuclear Radiation Detection, Methuen, London (1955).
Snell, A. H. (ed.), Nuclear Instruments and their Uses, Vol. I: Ionization Detectors, Wiley, Chichester, U.K. (1962) (Vol. II was not published).
Taylor, D., The Measurement of Radio Isotopes, Methuen, London (1951).
Turner, J. C., Sample Preparation for Liquid Scintillation Counting, The Radiochemical Centre, Amersham, U.K. (1971, revised).
Watt, D. E., and Ramsden, D., High Sensitivity Counting Techniques, Pergamon Press, Oxford (1964).
Wilkinson, D. H., Ionization Chambers and Counters, Cambridge University Press, Cambridge (1950).
Zumwalt, L. R., Absolute Beta Counting Using End-Window Geiger–Müller Counters and Experimental Data on Beta-Particle Scattering Effects, USAEC Report AECU-567 (1950).
Further Reading Eichholz, G. G., and Poston, J. W., Principles of Nuclear Radiation Detection, Lewis Publishers (1986).
Chapter 30
Measurements Employing Nuclear Techniques D. Aliaga Kelly and W. Boyes
30.1 Introduction

There are two important aspects of using nuclear techniques in industry which must be provided for before any work takes place. These are:

1. Compliance with the many legal requirements when using or intending to use radioactive sources;
2. Adequate health physics procedures and instruments to ensure that the user is meeting the legal requirements.

The legal requirements cover the proposed use of the radioactive source, the way in which it is delivered to the industrial site, the manner in which it is used, where and how it is stored when not in use, and the way it is disposed of, either through waste disposal or return to the original manufacturer of the equipment or the source manufacturer. Each governing authority, such as the U.S. Nuclear Regulatory Commission, the Atomic Energy Commission of Canada, and other national bodies, has set requirements for the use of nuclear gauging instrumentation. When using these instruments, rely on the manufacturer to guide you through the requirements, procedures, and documentation; the differences in regulatory requirements from country to country make it impossible to list them here. It should be noted that in some applications operators must be badged and use monitoring devices, while in other applications, particularly in the chemical and hydrocarbon processing industries, these procedures are not necessarily required. It is impossible to cover exhaustively all the applications of radioisotopes in this book; here we deal with the following:

1. Density;
2. Thickness;
3. Level;
4. Flow;
5. Tracer applications;
6. Material analysis.
Before considering these applications in detail we discuss some general points that are relevant to all or most of them. One of the outstanding advantages of radioisotopes is the way in which their radiation can pass through the walls of a container to a suitable detector without requiring any contact with the substance being measured. This means that dangerous liquids or gases may be effectively controlled or measured without risk of leakage, either of the substance itself out from the container or of external contamination to the substance inside. Thus in the chemical industry many highly toxic materials may be measured and controlled completely without risk of leakage from outside the pipes, etc., conveying it from one part of the factory to another. (See also Section 30.1.4.) Another important advantage in the use of radioisotopes is that the measurement does not affect, for example, the flow of liquid or gas through a pipe, and such flow can be completely unimpeded. Thus the quantity of tobacco in a cigarette-making machine can be accurately determined as the continuous tube of paper containing the tobacco moves along, so that each cigarette contains a fixed amount of tobacco. Speed is another important advantage of the use of radioisotopes. Measurements of density, level, etc., may be carried out continuously so that processes may be readily controlled by the measuring signal derived from the radioisotope system. Thus, a density gauge can control the mixing of liquids or solids, so that a constant density material is delivered from the machine. Speed in determining flow allows, for example, measurement of the flow of cooling gas to a nuclear reactor and observation of small local changes in flow due to obstructions which would be imperceptible using normal flow-measuring instruments. The penetrating power of radiations from radio-isotopes is particularly well known in its use for gamma radiography. A tiny capsule of highly radioactive cobalt, for example, can
be used to radiograph complex but massive metal castings where a conventional X-ray machine would be too bulky to fit. Gamma radiography is discussed further in Chapter 24. Also, leaks in pipes buried deep in the ground can be found by passing radioactive liquid along them and afterwards monitoring the line of the pipe for radiation from the active liquid that has soaked through the leak into the surrounding earth. If one uses a radioisotope with a very short half-life (the half-life is the time taken for a particular radioisotope to decay to half its initial activity), the pipeline will be free of radioactive contamination in a short time, allowing repairs to be carried out without hazard to the workmen or to the domestic consumer when the liquid is, for example, the local water supply.
30.1.1 Radioactive Measurement Relations

When radiations from radioactive isotopes pass through any material, they are absorbed according to (1) their energy and (2) the density and type of material. This absorption follows in general the relationship
I = I₀B exp(−μLx)   (30.1)
where I₀ is the intensity of the incident radiation, I the intensity of the radiation after passing through the material, x the thickness of material (cm), μL the linear absorption coefficient (cm⁻¹), and B the build-up factor. The absorption coefficient relates the energy of the radiation to the density and type of material, and suitable tables are available (Hubbell) from which this factor may be obtained for the particular conditions of source and material under consideration. As the tables are usually given in terms of the mass absorption coefficient μm (generally in cm²/g), it is useful to know that the linear absorption coefficient μL (in cm⁻¹) may be derived by multiplying the mass absorption coefficient (in cm²/g) by the density ρ of the material (in g/cm³). It must be borne in mind that in a mixture of materials each component will have a different absorption coefficient for the same radiation passing through the mixture. The build-up factor, B, is necessary when dealing with gamma- or X-radiation, where scattering of the incident radiation can make the intensity of the radiation actually falling on the detector different from what it would be if no scattering took place. For electrons or beta particles this factor can be taken as equal to 1. The complication of gamma-ray absorption is illustrated by the non-linearity of the curves in Figure 30.1, which gives the thickness of different materials needed to effect a ten-fold attenuation in a narrow beam. From Equation (30.1) we can obtain some very useful information in deciding on the optimum conditions for making a measurement on a particular material or, conversely, if we have a particular radioactive source, we
Figure 30.1 Thickness needed to attenuate a narrow gamma-ray beam by a factor of 10.
can determine its limits for measuring different materials. First, it can be shown that the maximum sensitivity for a density measurement is obtained when

x = 1/μL   (30.2)

that is, when the thickness of material is equal to the reciprocal of the linear absorption coefficient. This reciprocal is also called the mean free path, and it can be shown that for any thickness of a particular substance there is an optimum type and intensity of radioactive source. For very dense materials, and for thick specimens, a source emitting high-energy radiation will be required; therefore 60Co, which emits gamma rays of 1.33 and 1.17 MeV, is frequently used. At the other end of the scale, for the measurement of very thin films such as Melinex, a soft beta or alpha particle emitter would be chosen. For measurement of thickness it is generally arranged to have two detectors and one source. The radiation from the source falls on one detector through a standard piece of the material to be measured, while the other detector measures the radiation from the source that has passed through the sample under test. The signals from the pair of detectors are generally combined as two dc levels, the difference between which drives a potentiometer-type pen recorder.
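A short numerical example may help. The following Python sketch evaluates Equations (30.1) and (30.2) for an assumed absorber; the absorption coefficient used is a rough value for water at about 660 keV, not taken from the tables cited.

# Hedged sketch of narrow-beam attenuation, Equations (30.1) and (30.2).
import math

mu_m = 0.086        # mass absorption coefficient, cm^2/g (assumed rough value)
rho = 1.0           # density, g/cm^3
mu_L = mu_m * rho   # linear absorption coefficient, cm^-1
B = 1.0             # build-up factor, taken as 1 for a narrow beam

def transmitted_fraction(x_cm):
    """I/I0 after x cm of absorber, Equation (30.1)."""
    return B * math.exp(-mu_L * x_cm)

print(f"Mean free path 1/mu_L = {1.0 / mu_L:.1f} cm")   # Equation (30.2)
print(f"I/I0 through 10 cm = {transmitted_fraction(10.0):.3f}")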
30.1.2 Optimum Time of Measurement

The basic statistics of counting were outlined in the previous chapter. We now consider how they apply to particular measurements. Suppose that the number of photons or particles detected per second is n and that the measurement is required to be made in a time t. The number actually recorded in t will be nt ± √(nt), where √(nt) is the standard deviation of the measurement according to Poisson statistics and is a measure of the uncertainty in the true value of nt. The relative uncertainty (coefficient of variation) is given by

√(nt)/(nt) = 1/√(nt)

A radioisotope instrument is used to measure some quality X of a material in terms of the output I of a radiation detector. The instrument sensitivity, or relative sensitivity, S, is defined as the ratio of the fractional change dI/I in detector output which results from a given fractional change dX/X in the quality being measured, i.e.,

S = (dI/I)/(dX/X)   (30.3)
If, in a measurement, the only source of error is the statistical variation in the number of recorded events, the coefficient of variation in the value of the quality measured is

dX/X = (1/S) · √(nt)/(nt) = 1/(S√(nt))   (30.4)

To reduce this to as small a value as possible, S, n, t, or all three should be made as large as possible. In many cases, however, the time available for measurement is short. This is particularly true on high-speed production lines of sheet material, where only a few milliseconds may be available for the measurement. It can now be seen how measurement time, collimation, detector size, and absorber thickness may affect the error in the measurement. The shorter the measurement time, the greater the degree of collimation, the thicker the absorber, and the smaller the detector, the greater will be the source activity required to maintain a constant error. A larger source will be more expensive, and in addition its physical size may impose a limit on the usable activity. Bearing in mind that a source radiates equally in all directions, only a very small fraction can be directed by collimation for useful measurement; the rest is merely absorbed in the shielding necessary to protect the source.
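Equation (30.4) can be inverted to estimate the measurement time needed for a target precision. In the following Python sketch the count rate, sensitivity, and target coefficient of variation are assumed values.

# Hedged sketch: t from CV(X) = 1/(S*sqrt(n*t)), Equation (30.4).
n = 2.0e4             # detected events per second (assumed)
S = 0.5               # instrument sensitivity (assumed)
target_cv = 0.001     # desired 0.1 percent uncertainty in X

t = 1.0 / (n * (S * target_cv) ** 2)
print(f"Required measurement time: {t:.0f} s")

Halving the allowed time would require doubling the count rate, i.e., a source of roughly twice the activity, which illustrates the trade-off described above.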
30.1.3 Accuracy/Precision of Measurements

The precision or reproducibility of a measurement is defined in terms of the ability to repeat measurements of the same quantity. Precision is expressed quantitatively in terms of the standard deviation, σ, from the average value obtained by repeated measurements. In practice it is determined by
statistical variations in the rate of emission from the radioactive source, instrumental instabilities, and variations in measuring conditions. The accuracy of a measurement is an expression of the degree of correctness with which an actual measurement yields the true value of the quantity being measured. It is expressed quantitatively in terms of the deviation from the true value of the mean of repeated measurements. The accuracy of a measurement depends on the precision and also on the accuracy of calibration; if the calibration is exact, then in the limit, accuracy and precision are equal. When measuring a quantity such as thickness it is relatively easy to obtain a good calibration. In analyzing many types of samples, on the other hand, the true value is often difficult to obtain by conventional methods, and care may have to be taken in quoting the results. In general, therefore, a result is quoted along with the calculated error in the result and the confidence limits to which the error is known. Confidence limits of both one standard deviation, 1σ (68 percent of results lying within the quoted error), and two standard deviations, 2σ (95 percent of results lying within the quoted error), are used. In analytical instruments, when commenting on the smallest quantity or concentration which can be measured, the term “limit of detection” is preferred. This is defined as the concentration at which the measured value is equal to some multiple of the standard deviation of the measurement. In practice, the accuracy of radioisotope instruments used to measure the thickness of materials is generally within ±1 percent, except for very lightweight materials, when it is about ±2 percent. Coating thickness can usually be measured to about the same accuracy. Level gauges can be made sensitive to a movement of the interface of ±1 mm. Gauges used to measure the density of homogeneous liquids in closed containers generally operate to an accuracy of about ±0.1 percent, though some special instruments can reduce the error to ±0.01 percent. The accuracy of bulk density gauges is in the range ±0.5 to ±5 percent, depending on the application and the measuring conditions.
30.1.4 Measurements on Fluids in Containers

Nuclear methods may be used to make certain measurements on fluids flowing in pipes from 12.7 mm to 1 m in diameter. For plastics or thin-walled pipes up to 76 mm in diameter, the combined unit of source and detector shown in Figure 30.2(a) is used, while for larger pipes the system consisting of a separate holder and detector shown in Figure 30.2(b) is used. The gamma-ray source is housed in a shielded container with a safety shutter, so that the maximum dose rate is less than 7.5 μGy/h, and is mounted on one side of the pipe or tank. A measuring chamber containing argon at 20 atm is fitted on the other side of the pipe or tank. It is fitted with
Figure 30.2 Fluid-density measuring systems. Courtesy of Nuclear Enterprises Ltd. (a) Combined detector/source holder; (b) separate units for larger pipes.
a standardizing holder and has a detection sensitivity of ±0.1 percent. The system may be used to measure density over a range of 500–4,000 kg/m³ with a sensitivity of 0.5 kg/m³, specific gravity with an accuracy of ±0.0005, percentage solids to ±0.05 percent, and moisture content of slurries of a constant specific gravity to within ±0.25 percent. The principle of the measurement is that the degree of absorption of the gamma rays in the flowing fluid is measured by the ionization chamber, whose output is balanced against an adjustable source which is set by means of the calibrated control to the desired value for the material being measured. Deviations from this standard value are then shown on the calibrated meter mounted on the indicator front panel. Standardization of the system for the larger pipes is performed manually, and a subsidiary source is provided for this purpose. The selection of the type of source depends on (1) the application, (2) the wall thickness and diameter of the pipe, and (3) the sensitivity required. Sources in normal use are 137Cs (source life 30 yr), 241Am (460 yr), and 60Co (5 yr). The measuring head has a temperature range of –10 to +55°C and the indicator 5–40°C; the minimum response time is 0.1 s, adjustable up to 25 s.
30.2 Materials Analysis

Nuclear methods of analysis, particularly neutron activation analysis, offer the most sensitive methods available for most elements in nature. However, no one method is suitable for all elements, and it is necessary to select the technique to be used with due regard to the various factors involved, some of which may be listed as follows:

1. Element to be detected;
2. Quantities involved;
3. Accuracy of the quantitative analysis required;
4. Costs of the various methods available;
5. Availability of equipment to carry out the analysis to the statistical limits required;
6. Time required;
7. Matrix in which the element to be measured is located;
8. Feasibility of changing the matrix mentioned in (7) to a more amenable substance.

For example, an environmental material sample may be analyzed by many of the methods described, but the choice of method must be a compromise, depending on all the factors involved.
30.2.1 Activation Analysis

When a material is irradiated by neutrons, photons, alpha or beta particles, protons, etc., a reaction may take place in the material depending on a number of factors. The most important of these are:

1. The type of particle or photon used for the irradiation;
2. The energy of the irradiation;
3. The flux in the material;
4. The time of irradiation.

The most useful type of particle has been found to be the neutron, since its neutral charge allows it to penetrate the high field barriers surrounding most atoms, and relatively low energies are required. In fact, one of the most useful means of irradiation is the extremely low-energy neutrons of thermal energy (0.025 eV), which are produced abundantly in nuclear reactors. The interactions which occur cause some of the various elements present to become radioactive and, in their subsequent decay into neutral atoms again, to emit particles and radiation which are indicative of the elements present. Neutron activation analysis, as this is called, has become one of the most useful and sensitive methods of identifying certain elements in minute amounts without permanently damaging the specimen, as would be the case in most chemical methods of analysis. A detector system can be selected which responds uniquely to the radiation emitted as the excited atoms of the element of interest decay with emission of gamma rays or beta particles, while not responding to other types of radiation or to different energies from other elements which may be present in the sample. The decay
half-life of the radiation is also used to identify the element of interest, while the actual magnitude of the response at the particular energy involved is a direct measure of the amount of the element of interest in the sample. Quantitatively, the induced activity A present at the end of the irradiation, in becquerels (i.e., disintegrations per second), is given by
A = Nσφ[1 − exp(−0.693ti/T½)]   (30.5)
where N is the number of target atoms present, σ the cross-section (cm²), φ the irradiation flux (neutrons cm⁻² s⁻¹), ti the irradiation time, and T½ the half-life of the product nuclide. From this we may calculate N, the number of atoms of the element of interest present, after appropriate correction factors have been evaluated. Besides the activation of the element and measurement of its subsequent decay products, one may count directly the excitation products produced while the sample is being bombarded. This is called “prompt gamma-ray analysis,” and it has been used, for example, to analyze the surface of the moon. Most elements do not produce radioactivity when bombarded with electrons, beta particles, or gamma rays. However, most will emit characteristic X-rays when bombarded, and the emitted X-rays are characteristic of each element present. This is the basis of X-ray fluorescence analysis, discussed below. Electrons and protons have also been used to excite elements to emit characteristic X-rays or particles, but this technique requires operation in a very high-vacuum chamber. High-energy gamma rays have the property of exciting a few elements to emit neutrons—this occurs with beryllium when irradiated with gamma rays of energy greater than 1.67 MeV, and with deuterium for gamma rays of energy greater than 2.23 MeV. This forms the basis of a portable monitor for beryllium prospecting in the field, using an 124Sb source to excite the beryllium. Still higher-energy gamma rays from accelerators can excite many elements but require extremely expensive equipment.
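As a worked example of Equation (30.5), the following Python sketch estimates the activity induced in a manganese sample. The number of target atoms and the flux are assumed; the cross-section and product half-life are round published values for 55Mn and 56Mn.

# Hedged sketch of Equation (30.5) for thermal-neutron activation.
import math

N = 1.0e20             # number of target atoms in the sample (assumed)
sigma = 13.3e-24       # cross-section, cm^2 (about 13.3 barns for 55Mn)
phi = 1.0e12           # thermal flux, neutrons cm^-2 s^-1 (assumed)
half_life_s = 2.58 * 3600.0   # T1/2 of the product 56Mn, about 2.58 h
t_irr = 3600.0         # irradiation time, 1 hour

A = N * sigma * phi * (1.0 - math.exp(-0.693 * t_irr / half_life_s))
print(f"Induced activity: {A:.3e} Bq")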
30.2.2 X-ray Fluorescence Analysis
30.2.2.1 Dispersive X-ray Fluorescence Analysis

In dispersive X-ray fluorescence analysis, the energy spectrum of the characteristic X-rays emitted by the substance when irradiated with X-rays is determined by means of a dispersive X-ray spectrometer, which uses as its analyzing element the regular structure of a crystal through which the characteristic X-rays are passed. This property was discovered by Bragg and Bragg in 1913, who produced the first X-ray spectrum by crystal diffraction through a crystal of rock salt. Figure 30.3 shows what happens when an X-ray beam is diffracted through a crystal of rock salt. The Braggs showed that the X-rays are reflected from the crystal according to the Bragg relationship
nλ = 2a sin θ   (30.6)
where λ is the wavelength of the incident radiation, a the distance between lattice planes, and n the order of the reflection; θ is the angle of incidence and of reflection, which are equal. To measure the intensity distribution of an X-ray spectrum by this method, the incident beam has to be collimated and the detector placed at the corresponding position on the opposite side of the normal (see Figure 30.4). The system effectively selects all rays which are incident at the appropriate angle for Bragg reflection. If the angle of incidence is varied (this will normally involve rotating the crystal through a controllable angle and the detector by twice this amount), the detector will receive radiation at a different wavelength and a spectrum will be obtained. If the system is such that the angular range dθ is constant over the whole range of wavelength investigated, as in the geometry illustrated in Figure 30.4, we can write
n dλ = 2a d(sin θ) = 2a cos θ dθ   (30.7)
The intensity thus received will be proportional to cos θ and to dθ. After dividing by cos θ and correcting for the variation with angle of the reflection coefficient of the crystal and the variation with wavelength of the detector efficiency, the recorded signal will be proportional to the intensity per unit wavelength interval, and it is in this form that continuous X-ray spectra are traditionally plotted.
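The following minimal Python sketch shows how a goniometer scan could be converted to a wavelength spectrum using Equations (30.6) and (30.7); the lattice spacing and the (angle, counts) readings are illustrative assumptions, not data from any real spectrometer.

import math

def wavelength_nm(theta_deg, d_spacing_nm, order=1):
    """Bragg relation (30.6): n*lambda = 2a*sin(theta)."""
    return 2.0 * d_spacing_nm * math.sin(math.radians(theta_deg)) / order

def per_wavelength_intensity(counts, theta_deg):
    """Divide by cos(theta), per Equation (30.7), so equal angular steps
    map to intensity per unit wavelength interval."""
    return counts / math.cos(math.radians(theta_deg))

a_nacl = 0.2820  # nm, lattice-plane spacing of rock salt (illustrative)
scan = [(10.0, 120), (12.0, 150), (14.0, 210)]  # (theta deg, counts) - invented data
for theta, counts in scan:
    lam = wavelength_nm(theta, a_nacl)
    print(f"theta={theta:5.1f} deg  lambda={lam:.4f} nm  "
          f"I(lambda)={per_wavelength_intensity(counts, theta):.1f}")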
Figure 30.3 Diffraction of X-rays through crystal of rock salt.
Figure 30.4 Dispersive X-ray spectrometer.
30.2.2.2 X-ray Fluorescence Analysis (Non-dispersive)
When a substance is irradiated by a beam of X-rays or gamma rays it is found that the elements present fluoresce, giving out X-rays of energies peculiar to each element present. By selecting the energy of the incident X-rays, particular elements may be singled out. As an example, if a silver coin is irradiated with X-rays, the silver present emits X-rays of energies about 25.5 keV, and if other elements such as copper or zinc are also present, "characteristic" X-rays, as they are called, will be emitted with energies of 8.0 and 8.6 keV, respectively. A Si (Li) or Ge (Li) detector, cooled to the temperature of liquid nitrogen, will separate the various spectral lines, and a multichannel analyzer will allow their intensity to be evaluated electronically. However, it must be pointed out that, as the incident X-rays and the excited, emergent, characteristic X-rays have a very short path in metals, the technique essentially measures the elemental content of the metal surface. The exciting X-rays are selected so that their energy exceeds the "K-absorption edge" of the element of interest; elements of higher atomic number will then not be stimulated to emit their characteristic X-rays. Unfortunately, the range of exciting sources is limited, and Table 30.1 lists those currently used. Alternatively, special X-ray tubes with anodes of suitable elements have been used to provide X-rays for specific analyses, as well as intensities greater than are generally available from radioisotope sources.
TABLE 30.1 Exciting sources for X-ray fluorescence analysis

Isotope         Half-life   Principal photon energies (keV)     Emission (%)
241Am           433 yr      11.9–22.3                           ~40
                            59.5                                35.3
109Cd           453 d       2.63–3.80                           ~10
                            22.1, 25.0                          102.3
                            88.0                                3.6
57Co            270.5 d     6.40, 7.06                          ~55
                            14.4                                9.4
                            122.0                               85.2
                            136.5                               11.1
55Fe            2.7 yr      5.89, 6.49                          ~28
153Gd           241.5 d     41.3, 47.3                          ~110
                            69.7                                2.6
                            97.4                                30
                            103.2                               20
125I            60.0 d      27.4, 31.1                          138
                            35.5                                7
210Pb           22.3 yr     9.42–16.4                           ~21
                            46.5                                ~4
                            + Bremsstrahlung to 1.16 MeV
147Pm + target  2.623 yr    Characteristic X-rays of target     ~0.4
                            + Bremsstrahlung to 225 keV
238Pu           87.75 yr    11.6–21.7                           ~13
125mTe          119.7 d     27.4, 31.1                          ~50
                            159.0                               83.5
170Tm           128 d       52.0, 59.7                          ~5
                            84.3                                3.4
                            + Bremsstrahlung to 968 keV
Tritium (3H)    12.35 yr
  + Ti target               4.51, 4.93                          ~10⁻²
                            + Bremsstrahlung to 18.6 keV
  + Zr target               1.79–2.5                            ~10⁻²
                            + Bremsstrahlung to 18.6 keV
Gas proportional counters and NaI (Tl) scintillation counters have also been used in non-dispersive X-ray fluorescence analysis, but the high-resolution semiconductor detector has been the most important detector used in this work. While most systems are used in a fixed laboratory environment, due to the necessity of operating at liquid-nitrogen temperatures, several portable units are available commercially in which small insulated vessels containing liquid nitrogen give a period of up to 8 h of use before requiring to be refilled. The introduction of super-pure Ge detectors has permitted portable systems which operate for limited periods at liquid-nitrogen temperatures but which, as long as they remain in a vacuum enclosure, can be allowed to rise to room temperature without damage to the detector, as would occur with the earlier Ge (Li) detector, where the lithium would diffuse out of the crystal at ambient temperature. As Si (Li) detectors are really only useful for X-rays up to about 30 keV, the introduction of the Ge (HP) detector allows non-dispersive X-ray fluorescence analysis to be used at higher energies in non-laboratory conditions. In the early 1980s, with the availability of microprocessors as well as semiconductor detectors such as HgI2 (mercuric iodide), it became possible to build small, lightweight non-dispersive devices that could be operated on batteries and taken into the field. These devices operate at near-room temperature, thanks to the incorporation of Peltier cooling circuitry in their designs, and have made in-situ non-destructive elemental analysis possible. Typically they have been used for positive material identification, especially of specialty metal alloys, but they have also been used for coal-ash analysis, lead in paint, and other general elemental analysis.
30.2.3 Moisture Measurement: By Neutrons
If a beam of fast neutrons is passed through a substance, any hydrogen in the sample will cause the fast neutrons to be
slowed down to thermal energies, and these slow neutrons can be detected by means of a BF3- or 3He-filled gas-proportional counter system. As the major amount of hydrogen present in most materials is due to water content, and the slowing down is directly proportional to the hydrogen density, this offers a way of measuring the moisture content of a great number of materials. Some elements, however, such as cadmium, boron, the rare-earth elements, chlorine, and iron, can affect the measurement of water content, since they have high thermal-neutron capture probabilities. When these elements are present, this method of moisture measurement has to be used with caution. On the other hand, it provides a means of analyzing substances for these elements, provided the actual content of hydrogen and the other elements is kept constant.
Since the BF3 or 3He counter is insensitive to fast neutrons, it is possible to mount the radioactive fast-neutron source close to the thermal-neutron detector. Any thermal neutron produced by the slowing down of the fast neutrons which is scattered back into the counter will then be recorded. This type of equipment can be used to measure continuously the moisture content of granular materials in hoppers, bins, and similar vessels. The measuring head is mounted external to the vessel, and the radiation enters by a replaceable ceramic window (Figure 30.5). The transducer comprises a radioisotope source of fast neutrons and a slow-neutron detector assembly, mounted within a shielded measuring head. The source is mounted on a rotatable disc. It may thus be positioned by an electropneumatic actuator either in the center of the shield or adjacent to the slow-neutron detector and ceramic radiation "window" at one end of the measuring head, which is mounted externally on the vessel wall. Fast neutrons emitted from the source are slowed down, mainly by collisions with atoms of hydrogen in the moisture of material situated near the measuring head. The count rate in the detector increases with increasing hydrogen content and is used as an indication of moisture content.
Figure 30.5 Moisture meter for granular material in hoppers, surface mounting. Courtesy of Nuclear Enterprises Ltd.
The slow-neutron detector is a standard, commercially available detector and is accessible without dismantling the measuring head. The electronics, housed in a sealed case with the measuring head, consist of an amplifier, pulse-height analyzer, and a line-driver module, which feed preshaped 5 V pulses to the operator unit. The pulses from the head electronics unit are processed digitally in the operator's unit to give an analog indication of moisture content on a 100-mm meter mounted on the front panel, or an analog or digital signal to other equipment. The instrument has a range of 0–20 percent moisture and a precision of ±0.5 percent moisture, with a response time of 5–25 s. The sample volume is between 100 and 300 liters; the operating temperature of the detector is 5–70°C and that of the electronics 5–30°C. Figure 30.6 shows such a moisture gauge in use on a coke hopper. A similar arrangement of detector and source is used in borehole neutron moisture meters. The fast-neutron source is formed into an annulus around the usually cylindrical thermal-neutron detector, and the two components are mounted in a strong steel cylinder which can be let down a suitable borehole and will measure the distribution of water around the borehole as the instrument descends. This system is only slightly sensitive to the total soil density (provided no elements with a large neutron-absorption cross-section are present), so only a rough estimate of the total soil density is needed to compensate for its effect on the measured value of moisture content. The water content of soil is usually quoted as a percentage by weight, since the normal gravimetric method of determining water content measures the weight of water and the total weight of the soil sample. To convert the water content as measured by the neutron gauge
Figure 30.6 Coke moisture gauge in use on hoppers. Courtesy of Nuclear Enterprises Ltd.
measurement into percentage by weight, one must also know the total density of the soil from some independent measurement. In the borehole case this is usually performed by a physically similar borehole instrument, which uses the scattering of gamma rays from a source in the nose of the probe, around a lead plug, to a suitable gas-filled detector in the rear of the probe. The lead plug shields the detector from any direct radiation from the source. Figure 30.7 shows the response of the instrument. At zero density of material around the probe, the response of the detector is zero, since there is no material near the detector to scatter gamma rays into it. In practice, even in free air, the probe will show a small background due to scattering by air molecules. As the surrounding density increases, scattering into the detector begins to occur, and the response of the instrument increases linearly with density, but it reaches a maximum for the particular source and soil material. The response then decreases with further increases in density until, theoretically, at maximum density the response falls to zero. Since the response goes through a maximum with varying density, the probe parameters should be adjusted so that the density range of interest lies entirely on one side of the maximum. Soil-density gauges are generally designed to operate on the negative-slope portion of the response.
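A minimal sketch of this conversion, assuming the neutron gauge has already been calibrated to give water mass per unit volume and the gamma probe gives the wet bulk density; both readings below are invented for illustration.

def moisture_percent_by_weight(water_g_per_cm3, bulk_density_g_per_cm3):
    """Combine the neutron gauge (water mass per unit volume) with the
    gamma-scatter gauge (total wet bulk density) to get % by weight."""
    return 100.0 * water_g_per_cm3 / bulk_density_g_per_cm3

# Illustrative readings: 0.25 g/cm^3 of water in soil of wet bulk density 1.8 g/cm^3.
print(f"{moisture_percent_by_weight(0.25, 1.8):.1f} % by weight")  # about 13.9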
30.2.4 Measurement of the Sulfur Content of Liquid Hydrocarbons
Figure 30.7 Response of a scattered-gamma-ray gauge to soil density.
30.2.3.1 Calibration of Neutron-moisture Gauges
Early models of neutron gauges used to be calibrated by inserting them into concrete blocks of known density, but this could lead to serious error, since the response in concrete is quite different from that in soil. It has been suggested by Ballard and Gardner that, in order to eliminate the sensitivity of the gauge to the composition of the material to be tested, one should use non-compactive homogeneous materials of known composition to obtain experimental gauge responses. These data are then fitted by a least-squares technique to an equation which enables a correction to be obtained for a particular probe used in soil of a particular composition.
Dual Gauges
Improved gauges have been developed in which the detector is a scintillation counter with a scintillator of cerium-activated lithium glass. As such a detector is sensitive to both neutrons and gamma rays, the same probe may be used to detect both slowed-down neutrons and gamma rays from the source; the two can be distinguished because they give pulses of different shapes. This avoids the necessity of having two probes, one to measure neutrons and the other to measure scattered gammas, allowing the measurement of both moisture and density by pulse-shape analysis. Surface-neutron gauges are also available, in which both the radioactive source and the detector are mounted in a rectangular box which is simply placed on a flat soil surface. It is important to provide a smooth, flat surface for such measurements, as gaps cause appreciable errors in the response.
Sulfur occurs in many crude oils at a concentration of up to 5 percent by weight and persists to a lesser extent in the refined product. As legislation in many countries prohibits the burning of fuels with a high sulfur content to minimize pollution, and as sulfur compounds corrode engines and boilers and inhibit catalysts, it is essential to reduce the concentration to tolerable levels. Rapid measurement of sulfur content is therefore essential, and measurement of the absorption of appropriate X-rays provides a suitable online method. In general, the mass absorption coefficient of an element increases with atomic number (Figure 30.8) and decreases with shortening of the wavelength of the X-rays. In order to make an accurate measurement, the wavelength chosen should be such that the absorption is independent of changes in the carbon-hydrogen ratio of the hydrocarbon. When the X-rays used have an energy of 22 keV, the mass attenuation coefficients of carbon and hydrogen are equal. Thus, by using X-rays produced by allowing the radiation from the radioelement 241Am to produce fluorescent excitation in a silver target, giving X-rays with an energy of about 23 keV, the absorption is made independent of the carbon-hydrogen ratio. As this source has a half-life of 433 yr, no drift occurs owing to decay of the source. The X-rays are passed through a measuring cell through which the hydrocarbon flows and, as the absorption per unit weight of sulfur is many times greater than that of carbon and hydrogen, the fraction of the X-rays absorbed is a measure of the concentration of sulfur present. Unfortunately, the degree of absorption of X-rays is also affected by the density of the sample and by the concentration of trace elements of high atomic weight and of water. The intensity of the transmitted X-rays is measured by a high-resolution proportional counter, so the accuracy will be a function of the statistical variation in the count rate and the stability of the detector and associated electronics, which can introduce an error of ±0.01 percent sulfur. A water content of 650 ppm will also introduce an error of 0.01 percent sulfur. Compensation for density variations may be achieved by measuring the density with a non-nucleonic meter and electronically correcting the sulfur signal. Errors caused by impurities are not serious, since the water content can be reduced to below 500 ppm and the only serious contaminant, vanadium, seldom exceeds 50 ppm. The stainless-steel flow cell has standard flanges, is provided with high-pressure radiation windows, and is designed so that there are no stagnant volumes. The flow cell may be removed for cleaning without disturbing the source. Steam tracing or electrical heating can be arranged for samples likely to freeze. The output of the high-resolution proportional counter, capable of high count rates for statistical accuracy, is amplified and applied to a counter and digital-to-analog converter when required. Thus both digital
Figure 30.8 Online sulfur analyzer. Courtesy of Nuclear Enterprises Ltd. (a) Mass absorption coefficient; (b) arrangement of instrument.
Figure 30.9 Block diagrams of calcium-in-cement-raw-material measuring instrument. Courtesy of Nuclear Enterprises Ltd. (a) Dry powder form of instrument; (b) slurry form.
and analog outputs are available for display and control purposes. The meter has a range of up to 0–6 percent sulfur by weight and indicates with a precision of ±0.01 percent sulfur by weight or ±1 percent of the indicated value, whichever is the larger. It is independent of the carbon-hydrogen ratio from 6:1 to 10:1, and the integrating times are from 10 to 200 s. The flow cell is suitable for pressures up to 15 bar and temperatures up to 150°C, the temperature range for the electronics being −10 to +45°C. The arrangement of the instrument is shown in Figure 30.8.
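The sketch below shows the underlying exponential-absorption arithmetic in Python; the mass-attenuation coefficients, intensities, density, and path length are illustrative placeholders, not the calibrated constants of any actual analyzer, and a real instrument would also apply the density and water corrections described above.

import math

def sulfur_percent(i_transmitted, i_incident, density_g_cm3, path_cm,
                   mu_hc=0.20, mu_s=2.0):
    """Estimate % sulfur from X-ray transmission at ~22 keV.

    Assumes ln(I0/I) = rho * x * [mu_hc*(1 - w) + mu_s*w] and solves for
    the sulfur weight fraction w. mu_hc and mu_s are placeholder mass
    attenuation coefficients (cm^2/g), not calibrated values.
    """
    total_mu = math.log(i_incident / i_transmitted) / (density_g_cm3 * path_cm)
    w = (total_mu - mu_hc) / (mu_s - mu_hc)
    return 100.0 * w

# Invented intensities through a 1-cm cell of 0.85 g/cm^3 hydrocarbon.
print(f"{sulfur_percent(8.2e4, 1.0e5, 0.85, 1.0):.2f} % S")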
30.2.5 The Radioisotope Calcium Monitor
The calcium content of raw material used in cement manufacture may be measured online in either dry-powder or slurry form. The basis of the method is to measure the intensity of the characteristic K X-rays emitted by the flowing
sample, using a small 55Fe radioisotope source as the means of excitation. This source is chosen because of its efficient excitation of Ca X-rays in a region which is free from interference by iron. In the form of the instrument shown in Figure 30.9(a), used for dry solids, a sample of material in powder form is extracted from the main stream by a screw conveyor and fed into a hopper. In the powder-presentation unit the powder is extracted from the hopper and fed onto a continuously weighed sample presenter at a rate controlled so as to maintain a constant mass of sample per unit area on the presenter within very close limits. After measurement, the sample is returned to the process. A system is fitted to provide an alarm if the mass per unit area of the sample wanders outside preset limits, and aspiration is provided to eliminate airborne dust in the vicinity of the sample presenter and measuring head. Under these conditions it is possible to make precise, reproducible X-ray fluorescence measurements of elements from atomic number 19 upward without pelletizing.
The signal from the X-ray detector in the powder-presentation unit is transmitted via a head amplifier and a standard nucleonic counting chain to a remote display and control station. Analog outputs can be provided for control purposes. In the form used for slurries, shown in Figure 30.9(b), an additional measurement is made of the density, and hence the solids content, of the slurry. The density is measured from the absorption of a highly collimated 660 keV gamma-ray beam. At this energy the measurement is independent of changes in solids composition. The dry-powder instrument is calibrated by comparing the instrument readings with chemical analyses carried out under closely controlled conditions, with the maximum care taken to reduce sampling errors. This is best achieved by calibrating while the instrument is operating in closed loop with a series of homogeneous samples recirculated in turn. This gives a straight line relating percentage of calcium carbonate to the total X-ray count. With slurry, a line relating percentage of calcium carbonate to X-ray count at each dilution is obtained, producing a nomogram which enables a simple special-purpose computer to be used to obtain the measured-value indication or signal. In normal operation the sample flows continuously through the instrument, and the integrated reading obtained at the end of 2–5 min, representing about 2 kg of dry sample or 30 liters of slurry, is a measure of the composition of the sample. An indication of CaO content to within ±0.15 percent for the dry method and ±0.20 percent for slurries should be attainable by this method.
30.2.6 Wear and Abrasion
The measurement of the wear experienced by mechanical bearings, pistons in the cylinder block, or valves in internal-combustion engines is extremely tedious when performed by normal mechanical methods. However, wear in a particular component may easily be measured by having the component irradiated in a neutron flux in a nuclear reactor to produce a small amount of induced radioactivity. Thus the iron in, for example, piston rings which have been activated and fitted to the pistons of a standard engine will perform in exactly the same way as normal piston rings, but when wear takes place the active particles will be carried around by the lubrication system, and a suitable radiation detector will allow the wear to be measured, as well as the distribution of the particles in the lubrication system and the efficiency of the oil filter in removing such particles. To measure the wear in bearings, one or other of the bearing surfaces is made slightly radioactive, and the amount of activity passed to the lubricating system is a measure of the wear experienced.

30.2.7 Leak Detection
Leakage from pipes buried in the ground is a constant problem for municipal authorities, who may have to find leaks in water supplies, gas supplies, or sewage pipes very rapidly and with the minimum of inconvenience to people living in the area, as essential supplies may have to be cut off until the leak is found and the pipe made safe again. To find the position of large leaks in water-distribution pipes, two methods have been developed. The first uses an inflatable rubber ball, with a diameter nearly equal to that of the pipe, containing 100 or so MBq of 24Na, which is inserted into the pipe after the leaking section has been isolated from the rest of the system. The only flow is then towards the leak, and the ball is carried as far as the leak, where it stops. As 24Na emits high-energy gamma rays, its radiation can be observed at the surface through a considerable thickness of soil by means of a sensitive portable detector. Alternatively, radioactive tracer is introduced directly into the fluid in the pipe. After a suitable period the line of the pipe can be monitored with a sensitive portable detector, and the buildup of activity at the point of the leak can be determined. 24Na is a favored radioactive source for leak testing, especially of domestic water-supply or sewage leaks, since it has a short half-life (15 h), emits a 2.7 MeV gamma ray, and is soluble in water as 24NaCl. Leak tests can thus be carried out rapidly, and the activity will decay to safe limits in a very short time.
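Because the tracer decays exponentially, the waiting time for the activity to fall to any chosen level follows directly from the half-life; a small Python sketch, with the "negligible" activity figure chosen purely for illustration:

import math

def time_to_decay(a_initial_mbq, a_target_mbq, half_life_h=15.0):
    """Hours for a 24Na tracer (half-life ~15 h) to decay to a target activity."""
    return half_life_h * math.log(a_initial_mbq / a_target_mbq) / math.log(2.0)

# Illustrative: 100 MBq injected, 1 MBq treated as negligible for this example.
print(f"{time_to_decay(100.0, 1.0):.1f} h")  # about 100 h, i.e., a few days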
30.3 Mechanical measurements

30.3.1 Level Measurement
30.3.1.1 Using X- or Gamma Rays
Level measurements are usually made with the source and detector fixed in position on opposite sides of the outer wall of the container (Figure 30.10). Because many containers in the chemical engineering and oil-refining industries, where most level gauges are installed, have large dimensions, high-activity sources are required, and these have to be enclosed in thick lead shields with narrow collimators to reduce scattered
Figure 30.10 Level gauge (fixed).
radiation, which could be a hazard to people working in the vicinity of such gauges. Because of cost, Geiger counters are the most usual detectors used, though they are not as efficient as scintillation counters. The important criterion in the design of a level gauge is to select a radioactive source giving the optimum change in signal when the material or liquid to be monitored just obscures the beam from source to detector. The absorption of the beam by the wall of the container must be taken into account, as well as the absorption of the material being measured. A single source and single detector provide a single response, but by using two detectors with a single source (Figure 30.11) it is possible to provide three readings: low (both detectors operating), normal (one detector operating), and high (neither detector operating); see the sketch below. This system is used in papermaking plants to measure the level of the hot pulp. Another level gauge, giving a continuous indication of material or liquid level, has the detector and the source mounted on a servo-controlled platform which follows the level of the liquid or material in the container (Figure 30.12). This provides a continuous readout of level, and it is possible to use this signal to control the amount of material or liquid entering or leaving the container in accordance with some preprogrammed schedule. Another level gauge uses a radioactive source inside the container, enclosed in a float which rises and falls with the level of the liquid. An external detector can then observe the source inside the container, indicate the level, and initiate a refilling procedure to keep the level at a predetermined point.
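The three-state logic of the dual-detector gauge can be expressed in a few lines; this sketch simply maps the number of detectors still receiving the beam to the reading described above:

def level_state(detectors_operating: int) -> str:
    """Map the number of detectors still receiving the beam (0, 1, or 2)
    to the three-state level reading of the dual-detector gauge."""
    return {2: "low", 1: "normal", 0: "high"}[detectors_operating]

for n in (2, 1, 0):
    print(n, "detector(s) operating ->", level_state(n))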
Portable level gauges, consisting of a radioactive source and Geiger detector in a hand-held probe, with the electronics and counting display in a portable box, have been made to detect the level of liquid CO2 in high-pressure cylinders. This is a much simpler method than weighing the cylinder and comparing the result with its weight when originally received.
30.3.1.2 Using Neutrons
Some industrial materials, such as oils, water, and plastics, have a very low atomic number (Z), and by using a beam of fast neutrons from a source such as 241Am/Be or 252Cf, and a suitable thermal-neutron detector, such as cerium-activated lithium glass in a scintillation counter or a 10BF3- or 3He-filled gas counter, it is possible to measure the level of such materials (Figure 30.13). Fast neutrons from the source are moderated, or slowed down, by the low-Z material, and some are scattered back into the detector. By mounting both the source of fast neutrons and the slow-neutron detector at one side of the vessel, the combination may be used to follow a varying level using a servo-controlled mechanism, or in a fixed position to control the level in the container at a preset position. This device is also usable as a portable detector system to find blockages in pipes, valves, etc., which often occur in plastics-manufacturing plants.
30.3.2 Measurement of Flow
There are several methods of measuring flow using radioactive sources, as follows.
Figure 30.11 Dual detector level gauge.

30.3.2.1 Dilution Method
This involves injection of a liquid containing radioactivity into the flow line at a known constant rate; samples are taken further down the line, where it is known that lateral mixing has been completed. The ratio of the concentration of the radioactive liquid injected into the line to that of the samples allows the flow rate to be computed.
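A minimal sketch of the dilution arithmetic, assuming complete mixing and a sample concentration much smaller than the injected concentration (all numbers invented):

def flow_rate_dilution(injection_rate_l_s, c_injected, c_sample):
    """Dilution method: steady injection at rate q of tracer with
    concentration C1 into a flow Q gives, after full mixing, a sample
    concentration C2, so Q ~ q * C1 / C2 (for C2 << C1)."""
    return injection_rate_l_s * c_injected / c_sample

# Illustrative: 0.02 l/s of tracer at 5e6 Bq/l diluted to 200 Bq/l downstream.
print(f"Q = {flow_rate_dilution(0.02, 5e6, 200.0):.0f} l/s")  # 500 l/s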
Figure 30.12 Level gauge (continuous) with automatic control.
Figure 30.13 Level measurement by moderation of fast neutrons.
30.3.2.2 The “Plug” Method This involves injecting a radioactive liquid into the flow line in a single pulse. By measuring the time this “plug” of radioactive liquid takes to pass two positions a known distance apart, the flow can be calculated. A variation of the “plug” method again uses a single pulse of radioactive liquid injected into the stream, but the measurement consists of taking a sample at a constant rate from the line at a position beyond which full mixing has been completed. Here the flow rate can be calculated by measuring the average concentration of the continuous sample over a known time.
30.3.3 Mass and Thickness
Since the quantitative reaction of particles and photons depends essentially on the concentration and mass of the particles with which they are reacting, it is to be expected that nuclear techniques can provide means for measuring such quantities as mass. We have already referred to the measurement of the density of the material near a borehole. We now describe some other techniques having industrial uses.
30.3.3.1 Measurement of Mass, Mass Per Unit Area, and Thickness
The techniques employed in these measurements are basically the same. The radiation from a gamma-ray source falls on the material, and the transmitted radiation is measured by a suitable detector. In the nucleonic belt weigher shown in Figure 30.14, designed to measure the mass flow rate of granular material such as iron ore, limestone, coke, cement, fertilizers, etc., the absorption across the total width of the
Figure 30.14 Nucleonic belt weigher. Courtesy of Nuclear Enterprises Ltd.
belt is measured. The signal representing the total radiation falling on the detector is processed with a signal representing the belt speed by a solid-state electronic module and displayed as a mass flow rate and a total mass. The complete equipment comprises a C-frame assembly housing the source, consisting of 137Cs enclosed in a welded steel capsule mounted in a shielding container with a radiation shutter, and the detector, a scintillation counter whose sensitive length matches the belt width, housed with its preamplifier in a cylindrical flameproof enclosure suitable for Group IIA and IIB gases. A calibration plate is incorporated with the source to permit a spot check at a suitable point within the span. In addition, there is a dust- and moisture-proof housing for the electronics, which may be mounted locally or up to 300 m from the detector. The precision of the measurement is better than ±1 percent, and the operating temperature of the detector and electronics is −10 to +40°C. The detector and preamplifier may be serviced by unclassified staff, as the maximum dose rate is less than 7.5 nGy/h. Similar equipment may be used to measure mass per unit area by restricting the area over which the radiation falls; if the thickness is constant and known, the reading will be a measure of the density.
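The processing reduces to multiplying the calibrated mass per unit length by the belt speed and integrating; a sketch with invented readings:

def mass_flow_rate(mass_per_unit_length_kg_m, belt_speed_m_s):
    """Nucleonic belt weigher: the transmission reading is calibrated to
    mass per unit length of the bed; multiplying by belt speed gives
    the mass flow rate."""
    return mass_per_unit_length_kg_m * belt_speed_m_s

def totalize(samples):
    """Integrate (mass/length, speed, dt) samples into a total mass."""
    return sum(m * v * dt for m, v, dt in samples)

# Illustrative: 30 kg/m of ore on a belt moving at 1.5 m/s, logged each second.
print(mass_flow_rate(30.0, 1.5), "kg/s")
print(totalize([(30.0, 1.5, 1.0)] * 60), "kg in one minute")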
30.3.3.2 Measurement of Coating Thickness
In industry a wide variety of processes occur where it is necessary to measure, and sometimes automatically control, the thickness of a coating applied to a base material produced in strip form. Examples of such processes are the deposition of tin, zinc, or lacquers on steel, or of adhesives, wax, clay, bitumen, or plastics on paper, among many others. By nucleonic methods, measurement to an accuracy of ±1 percent of coating thickness can be made in a wide variety of circumstances by rugged equipment capable of high reliability. Nucleonic coating-thickness gauges are based on the interaction of the radiation emitted from a radioisotope source with the material to be measured. They consist basically of the radioisotope source in a radiation shield and a radiation detector contained in a measuring head, plus an electronic console. When the radiation emitted from the source is incident on the subject material, part of this radiation is scattered, part is absorbed, and the rest passes through the material. A part of the absorbed radiation excites characteristic fluorescent X-rays in the coating and/or backing. Depending on the measurement required, a system is used in which the detector measures the intensity of the scattered, transmitted, or fluorescent radiation. The intensity of radiation monitored by the detector is the measure of the thickness (mass per unit area) of the coating. The electronic console contains units which process the detector signal and indicate total coating thickness and/or deviation from the target thickness. The measuring head may be stationary
or programmed to scan across the material. Depending on the type and thickness of the coating and base materials, and on machine details, one of four gauge types is selected: differential beta transmission, beta backscatter, X-ray fluorescence, or preferential absorption.

Differential Beta-transmission Gauge (Figure 30.15)
The differential beta-transmission gauge is used to measure coatings applied to base materials in sheet form when the coating has a total weight of not less than about one-tenth of the weight of the base material, when both sides of the base and coated material are accessible, and when the composition of coating and base is fairly similar. Here the thickness (mass per unit area) of the coating is monitored by measuring first the thickness of the base material before the coating is applied, followed by the total thickness of the material with its coating, and then subtracting the former from the latter. The difference provides the coating thickness. The readings are obtained by passing the uncoated material through one measuring head and the coated material through the other, the coating being applied between the two positions. The intensity of radiation transmitted through the material is a measure of total thickness. Separate meters record the measurement determined by each head, and a third meter displays the difference between the two readings, which corresponds to the coating thickness (see the sketch following this section). Typical applications of this gauge are the measurement of wax and plastics coatings applied to paper and aluminum sheet or foil, or of abrasives applied to paper or cloth.

Beta-backscatter Gauge (Figure 30.16)
The beta-backscatter gauge is used to measure coating thickness when the process is such that the material is accessible from one side only and when the coating and backing material are of substantially different atomic number. The radioisotope source and the detector are housed in the same enclosure. Where radiation is directed, for example, onto an uncoated calender roll, it will be backscattered and measurable by the detector. Once a coating has been applied to the roll, the intensity of the backscattered radiation returning to the detector will change. This change is a measure of the thickness of the coating. Typical applications of this gauge are the measurement of rubber and adhesives on calenders, paper on rollers, or lacquer, paint, or plastics coatings applied to sheet steel.

Measurement of Coating and Backing by X-ray Fluorescence
X-ray fluorescence techniques, employing radioisotope sources to excite the characteristic fluorescent radiation, are normally used to measure exceptionally thin coatings. The coating-fluorescence gauge monitors the increase in intensity of an X-ray excited in the coating as the coating thickness is increased. The backing-fluorescence gauge excites an X-ray in the backing or base material and measures the decrease in intensity due to attenuation in the coating as the coating thickness is increased. The intensity of fluorescent radiation
Figure 30.15 Differential beta-transmission gauge. Courtesy Nuclear Enterprises Ltd. S1: first source; D1: first detector; S2: second source; D2: second detector; B: base material; C: coating; M: differential measurement indicator.
Figure 30.16 Beta-backscatter gauge. Courtesy of Nuclear Enterprises Ltd.
is normally measured with an ionization chamber, but a proportional or scintillation counter may sometimes be used. By the use of compact geometry, high-efficiency detectors, and a fail-safe radiation shutter, the dose rates in the vicinity of the measuring head are kept well below the maximum permitted levels, ensuring safety for operators and maintenance staff. Figure 30.17(a) illustrates the principle of the coating-fluorescence gauge, which monitors the increase in intensity of an X-ray excited in the coating as the coating thickness is increased. The instrument is used to measure tin, zinc, aluminum, and chromium coatings applied to sheet steel, or titanium coatings applied to paper or plastics sheet.

The Preferential Absorption Gauge (Figure 30.18)
This gauge is used when the coating material has a higher mean atomic number than the base material. The gauge employs low-energy X-rays from a sealed radioisotope source, which are absorbed to a much greater extent by materials with a high atomic number, such as chlorine, than by materials such as paper or textiles, which have a low atomic number. It is thus possible to monitor variations in coating thickness by measuring the degree of preferential X-ray absorption by the coating, using a single measuring head. The instrument is used to measure coatings containing clay, titanium, halogens, iron, or other substances with a high atomic number which have been applied to plastics, paper, or textiles.
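As a sketch of the differential beta-transmission principle, the fragment below inverts an assumed exponential transmission law for each head and subtracts the two mass-per-unit-area values; the absorption coefficient and count rates are illustrative assumptions, not constants of any commercial gauge:

import math

def mass_per_unit_area(i_transmitted, i_incident, mu_mass_cm2_g):
    """Invert beta transmission I = I0*exp(-mu*m) for mass per unit area m."""
    return math.log(i_incident / i_transmitted) / mu_mass_cm2_g

def coating_weight(base_reading, total_reading, i0, mu=5.0):
    """Differential gauge: coating = (base + coating) - base.
    mu is an illustrative beta mass-absorption coefficient, cm^2/g."""
    return (mass_per_unit_area(total_reading, i0, mu)
            - mass_per_unit_area(base_reading, i0, mu))

# Invented count rates from the two measuring heads, same incident intensity.
print(f"{coating_weight(6.0e4, 5.2e4, 1.0e5) * 1000:.1f} mg/cm^2")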
Figure 30.17 X-ray fluorescence gauge. Courtesy of Nuclear Enterprises Ltd. (a) Coating-fluorescence gauge, which monitors the increase in intensity of X-rays excited in the coating as its thickness increases; (b) backing-fluorescence gauge, which monitors the decrease in intensity of radiation excited in the backing material as the coating thickness increases.

Figure 30.18 Preferential absorption gauge. Courtesy Nuclear Enterprises Ltd.

30.4 Miscellaneous measurements

30.4.1 Field-survey Instruments
In prospecting for uranium, portable instruments are used (1) in aircraft, (2) in trucks or vans, (3) hand-held, and (4) for undersea surveys. Uranium is frequently found in the same geological formations as oil, and uranium surveys have been used to supplement other, more conventional methods of surveying, such as seismic analysis. The special case of surveying for beryllium-bearing rocks was discussed in Section 30.2.1. As aircraft can lift large loads, the detectors used in such surveys have tended to be relatively large groups of NaI (Tl) detectors. For example, one aircraft carried four NaI (Tl) assemblies, each 29.2 cm diameter and 10 cm thick, feeding into a four- or five-channel spectrometer which separately monitored potassium, uranium, thorium, and the background. Simultaneously, a suitable position-finding system such as Loran-C is in operation, so that the airborne plot of the radioactivities, as the aircraft flies over a prescribed course, is printed together with the position of the aircraft on the chart recorder. In this way large areas of land or sea which may contain uranium can be covered rapidly, and ground-based survey teams with suitable instruments can then survey the actual areas pinpointed in the aerial survey as possible sources of uranium.

30.4.1.1 Land-based Radiometrical Surveys
Just as aircraft can carry suitable detector systems and computing equipment, so can land-based vehicles, which can be taken to the areas giving high uranium indications. While similar electronics can usually be operated from a motor vehicle, the detectors will have to be smaller, especially if the terrain is very rugged, when manually portable survey monitors will be called for. These too can now incorporate a small computer, which can perform the necessary analyses of the signals received from potassium, uranium, thorium, and background.

30.4.1.2 Undersea Surveys
Measurement of the natural gamma radiation from rocks and sediments can be carried out using a towed seabed gamma-ray spectrometer. The spectrometer, developed by UKAEA Harwell in collaboration with the Institute of Geological Sciences, has been used over the last ten years and has traversed more than 10,000 km in surveys of the United Kingdom continental shelf. It consists of a NaI (Tl) crystal-photomultiplier detector assembly, containing a crystal 76 mm diameter × 76 mm or 127 mm long, with an EMI type 9758 photomultiplier, together with a preamplifier and high-voltage generator, all potted in silicone rubber. The unit is mounted in a stainless-steel cylinder, which in turn is mounted in a 30-m long flexible PVC hose 173 mm in diameter. The hose is towed by a cable from the ship and also contains suitable ballast, in the form of steel chain, to allow the probe to be dragged over the surface of the seabed without becoming entangled in wrecks or rock outcrops. The electronics on board the ship provide four channels to allow potassium, uranium, and thorium, as well as the total gamma radioactivity, to be measured and recorded on suitable chart recorders and teletypes, and provision is also made to feed the output to a computer-based multichannel analyzer.
30.4.2 Dating of Archaeological or Geological Specimens

30.4.2.1 Radiocarbon Dating by Gas-proportional or Liquid-scintillation Counting
The technique of radiocarbon dating was discovered by W. F. Libby and his associates in 1947, when they were investigating the radioactivity produced by the interaction of cosmic rays from outer space with air molecules. They discovered that interactions with nitrogen in the atmosphere produce radioactive 14C, which is quickly transformed into 14CO2, forming about 1 percent of the total CO2 in the world. As cosmic rays have been bombarding the earth
at a steady rate over millions of years, forming some two atoms of 14C per square centimeter of the earth's surface per second, an equilibrium should have been reached, since the half-life (the time for a radioactive substance to decay to half its original value) of 14C is some 5,000 years. As CO2 enters all living matter, the distribution should be uniform. However, when the human being, animal, tree, or plant dies, CO2 transfer ceases, and the carbon already present in the now-dead object is fixed. The 14C in this carbon will therefore start to decay with a half-life of some 5,000 years, so that measurement of the 14C still present in the sample allows one to determine the time elapsed since the death of the person, animal, tree, or plant. We expect to find two disintegrations per second for every 8 g of carbon in living beings, dissolved in sea water, or in atmospheric CO2, for the total carbon in these three categories adds up to about 8 g per square centimeter of the earth's surface (7.5 in the oceans, 0.125 in the air, 0.25 in life forms, and perhaps 0.125 in humus).
There are several problems associated with radiocarbon dating which must be overcome before one can arrive at an estimated age for a particular sample. First, the sample must be treated so as to release the 14C in a suitable form for counting. The methods used vary, depending on the final form in which the sample is required and on whether it is to be counted in a gas counter (Geiger or proportional) or in a liquid-scintillation counter. One method is to transform the carbon in the sample by combustion into a gas suitable for counting in a gas-proportional counter. This can be carbon dioxide (CO2), methane (CH4), or acetylene (C2H2). The original method used by Libby, in which the carbon sample was deposited in a thin layer inside the gas counter, has been superseded by the gas-combustion method. In this the sample is consumed by heating in a tube furnace or, in an improved way, in an oxygen bomb. The gas can be counted directly in a gas-proportional counter, after suitable purification, as CO2 or CH4, or it can be transformed into a liquid form such as benzene, which can be mixed with a liquid scintillator and measured in a liquid-scintillation counter.
Counting Systems
When one considers that there are, at most, only two 14C disintegrations per second from each 8 g of carbon, producing two soft beta particles (Emax = 0.156 MeV), one can appreciate that the counting is, indeed, very "low-level." The natural counting rate (unshielded) of a typical gas-proportional counter 15 cm diameter × 60 cm long would be some 75.6 counts per second, whereas the signal due to 1 atm of live modern CO2 or CH4 filling the counter would be only 0.75 count per second. In order to achieve a standard deviation of 1 percent in the measurement, 10,000 counts would have to be recorded, and at the rate of 0.75 count per second the measurement would take 3.7 h. It is immediately apparent that the natural background must be drastically reduced and, if possible, the sample size increased to improve the counting characteristics (Watt and Ramsden, 1964).
Background is due to many causes, some of the most important being:
1. Environmental radioactivity from walls, air, rocks, etc.;
2. Radioactivities present in the shield itself;
3. Radioactivities in the materials used in the manufacture of the counters and associated devices inside the shield;
4. Cosmic rays;
5. Radioactive contamination of the gas or liquid scintillator itself;
6. Spurious electronic pulses, spikes due to improper operation, or pick-up of electromagnetic noise from the electricity mains supply, etc.
Calculation of a Radiocarbon Date
Since the measurement is of the decay of 14C, we have the relation
I = I0 exp(−λt)   (30.8)
where I is the activity of the sample when measured, I0 the original activity of the sample (as reflected by a modern standard), λ the decay constant (= 0.693/T½, where T½ is the half-life), and t the time elapsed. If T½ = 5,568 yr (the latest best value found for the half-life of 14C is 5,730 yr, but internationally it has been agreed that all dates are still referred to 5,568 yr, to avoid the confusion which would arise if the volumes of published dates required revision), then Equation (30.8) may be rewritten as
t = 8033 loge[(S0 − B)/(Ss − B)]   (30.9)
where Ss is the count rate of the sample, S0 the count rate of the modern sample, and B the count rate of dead carbon. A modern carbon standard of oxalic acid, 95 percent of whose activity is equivalent to that of 1890 wood, is used universally and is available from the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. Anthracite coal, with an estimated age of 2 × 10⁹ yr, can be used to provide the dead-carbon background. Corrections must also be made for isotopic fractionation, which can occur both in nature and during the various chemical procedures used in preparing the sample.
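Equation (30.9) translates directly into code; the count rates below are invented but chosen so that the sample has exactly half the net activity of the modern standard, giving one conventional half-life:

import math

def radiocarbon_age_years(s_sample_cpm, s_modern_cpm, background_cpm):
    """Equation (30.9): t = 8033 * ln((S0 - B)/(Ss - B)), using the
    conventional 5,568-yr half-life (8033 = 5568/ln 2)."""
    return 8033.0 * math.log((s_modern_cpm - background_cpm)
                             / (s_sample_cpm - background_cpm))

# Illustrative count rates: modern 11.7 cpm, sample 8.35 cpm, background 5 cpm.
print(f"{radiocarbon_age_years(8.35, 11.7, 5.0):.0f} yr")  # 5568 yr, one half-life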
Statistics of Carbon Dating
The standard deviation σ of the source count rate when corrected for background is given by

σ = √[S/(T − tb) + B/tb]   (30.10)

where S is the gross count rate of sample plus background, B the background count rate counted for a time tb, and T the total time available. In carbon dating S ≃ 2B, and the counting periods for sample and background are made equal, usually of the order of 48 h. Thus, if ts = tb = T/2 = t and S = D + B, we have

σD = √[(D + 2B)/t]   (30.11)
The maximum age which can be determined by any specific system depends on the minimum sample activity which can be detected. If the "2σ criterion" is used, the minimum sample counting rate detectable is equal to twice the standard deviation, and the probability that the true value of D lies within the region ±2σD is 95.5 percent. Some laboratories prefer to use the "4σ criterion," which gives a 99.99 percent probability that the true value is within the interval ±4σD. If Dmin = 2σD, then

Dmin = 2√[(Dmin + 2B)/t]   (30.12)
and the maximum dating age Tmax which can be achieved with the system can be estimated as follows. From Equation (30.9),

Tmax = (T½/loge 2) loge(D0/Dmin) = (T½/loge 2) loge[D0 √t / (2√(Dmin + 2B))]   (30.13)
As Dmin ≪ 2B, the equation can be simplified to

Tmax = (T½/loge 2) loge[D0 √(t/8B)]   (30.14)
where D0 is the activity of the modern carbon sample, corrected for background. The ratio D0/√B is considered the factor of merit for a system. For a typical system such as that of Libby (1955), t = 48 h, T½ = 5,568 yr, D0 = 6.7 cpm, and B = 5 cpm, so D0/√B ≈ 3. Hence

Tmax = (5568/loge 2) loge[3 √(48 × 60/8)] = 8034 × 4.038 = 32,442 yr
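A short sketch reproducing the estimate of Equation (30.14) with Libby's figures (the counting time is expressed in minutes to match count rates in cpm):

import math

def t_max_years(d0_cpm, b_cpm, counting_time_min, half_life_yr=5568.0):
    """Equation (30.14): maximum dateable age on the 2-sigma criterion,
    valid when D_min << 2B."""
    return (half_life_yr / math.log(2.0)) * math.log(
        d0_cpm * math.sqrt(counting_time_min / (8.0 * b_cpm)))

# Libby's figures: D0 = 6.7 cpm, B = 5 cpm, t = 48 h counted in minutes.
print(f"{t_max_years(6.7, 5.0, 48 * 60):.0f} yr")  # close to the 32,442 yr quoted above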
Calibration of the Radiocarbon Time Scale
A number of corrections have to be applied to dates computed from radiocarbon decay measurements to arrive at a "true" age. A comparison of ages obtained by radiocarbon measurement with the ages of historically dated wood initially, in 1949, showed good agreement. When the radiocarbon-dating system was improved by the use of CO2 proportional counters, in which the sample is introduced directly into the counter, closer inspection using many more samples showed discrepancies in the dates calculated by the various methods. Some of these discrepancies can be accounted for, such as the effect on the atmosphere of all the burning of wood and coal since the nineteenth century; Suess called this the "fossil-fuel effect." Alternative methods of dating, such as dendrochronology (the counting of tree rings), thermoluminescent dating, historical dating, etc., have demonstrated that variations do occur in the curve relating radiocarbon dates to dates obtained by other methods. Ottaway (1983) describes in more detail the problems and the present state of the art.
30.4.3 Static Elimination
Although not strictly instrumentation, an interesting application of radioactive sources is the elimination of static electricity. In a number of manufacturing processes static electricity is produced, generally by friction between, for example, a sheet of paper and the rollers used to pass it through a printing process. This can cause tearing at the high speeds found in printing presses. In the weaving industry, when a loom is left standing overnight, the woven materials and the warp threads left in the loom remain charged for a long period. Dust particles become attracted to the cloth, producing the so-called "fog-marking" and thereby reducing the value of the cloth. In rubber manufacture for cable coverings, automobile tires, etc., the rubber has to pass through rollers, where the static electricity generated causes the material to remain stuck to the rollers instead of moving to the next processing stage. All these static problems can be overcome, or at least reduced, by mounting a suitable radioactive source close to the place where static is produced, so that ions of the appropriate sign are produced in quantities sufficient to neutralize the charges built up by friction. A wide variety of sources are now available in many shapes to allow them to be attached to machines to give optimum ionization of the air at the critical locations. Long strip sources of tritium, 90Sr/90Y, and 241Am are the most popular. The importance of preventing static discharges has been highlighted recently in two fields. The first is the oil industry, where gas-filled oil tankers have exploded because of static discharges in the empty tanks. The second is the microchip manufacturing industry, where a static discharge can destroy a complete integrated circuit.
References
Clayton, C. G., and Cameron, J. F., Radioisotope Instruments, Vol. 1, Pergamon Press, Oxford (1971) (Vol. 2 was not published). This has a very extensive bibliography for further study.
Gardner, R. P., and Ely, R. L., Jr, Radioisotope Measurement Applications in Engineering, Van Nostrand, New York (1967).
Hubbell, J. H., Photon Cross-sections, Attenuation Coefficients, and Energy Absorption Coefficients from 10 keV to 100 GeV, NSRDS-NBS 29 (1969).
Libby, W. F., Radiocarbon Dating, University of Chicago Press (1955).
Ottaway, B. S. (ed.), "Archaeology, dendrochronology and the radiocarbon calibration curve," Edinburgh University Occasional Paper No. 9 (1983).
Shumilovskii, N. N., and Mel'ttser, L. V., Radioactive Isotopes in Instrumentation and Control, Pergamon Press, Oxford (1964).
Sim, D. F., Summary of the Law Relating to Atomic Energy and Radioactive Substances, UK Atomic Energy Authority (yearly). Includes (a) Radioactive Substances Act 1960; (b) Factories Act—The Ionizing Radiations (Sealed Sources) Regulations 1969 (HMSO Stat. Inst. 1969, No. 808); (c) Factories Act—The Ionizing Radiations Regulations, HMSO (1985).
Watt, D. E., and Ramsden, D., High Sensitivity Counting Techniques, Pergamon Press, Oxford (1964).
Chapter 31
Non-Destructive Testing
Scottish School of Non-Destructive Testing
31.1 Introduction
The driving force for improvements and developments in non-destructive testing instrumentation is the continually increasing need to demonstrate the integrity and reliability of engineering materials, products, and plant. Efficient materials manufacture, the assurance of product quality, and re-assurance of plant at regular intervals during use represent the main need for non-destructive testing (NDT). This "state-of-health" knowledge is necessary for both economic and safety reasons. Indeed, in the UK the latter reasons have been strengthened by legislation such as the Health and Safety at Work Act 1974, and in the United States by the Occupational Safety and Health Administration (OSHA).
Failures in engineering components generally result from a combination of conditions, the main three being inadequate design, incorrect use, or the presence of defects in materials. The use of non-destructive testing seeks to eliminate the failures caused predominantly by defects. During manufacture these defects may, for example, be shrinkage and porosity in castings, laps and folds in forgings, laminations in plate material, and lack of penetration and cracks in weldments. Alternatively, with increasing complexity of materials and conditions of service, less obvious factors may require control through NDT; for example, composition, microstructure, and homogeneity.
Non-destructive testing is not confined to manufacture. The designer and user may find application to on-site testing of bridges, pipelines in the oil and gas industries, pressure vessels in the power-generation industry, and in-service testing of nuclear plant, aircraft, and refinery installations. Defects at this stage may be deterioration in plant due to fatigue and corrosion. The purpose of non-destructive testing during service is to look for deterioration in plant to ensure that adequate warning is given of the need to repair or replace. Periodic checks also give confidence that “all is well.” In these ways, therefore, non-destructive testing plays an important role in the manufacture and use of materials. Moreover, as designs become more adventurous and as new materials are used, there is less justification for relying on past experience. Accordingly, non-destructive testing has an increasingly important role. Methods of non-destructive testing are normally categorized in terms of whether their suitability is primarily for the examination of the surface features of materials (Figure 31.1) or the internal features of materials (Figure 31.2). Closely allied to this is the sensitivity of each method, since different situations invariably create different levels of quality required. Consequently the range of applications is diverse. In the following account the most widely used methods of nondestructive testing are reviewed, together with several current developments.
Figure 31.1 NDT methods for surface inspection.
Figure 31.2 NDT methods for sub-surface inspection.
31.2 Visual examination
For many types of components, integrity is verified principally through visual inspection. Indeed, even for components that require further inspection using ultrasonics or radiography, visual inspection still constitutes an important aspect of practical quality control. Visual inspection is the most extensively used of any method. It is relatively easy to apply and can have one or more of the following advantages:
Figure 31.3 Gauges for visual inspection. Courtesy the Welding Institute.
1. Low cost;
2. Can be applied while work is in progress;
3. Allows early correction of faults;
4. Gives indication of incorrect procedures;
5. Gives early warning of faults developing when the item is in use.
Equipment may range from that suitable for determining dimensional non-conformity, such as the Welding Institute gauges (Figure 31.3), to illuminated magnifiers (Figure 31.4) and the more sophisticated fiberscope (Figure 31.5). The instrument shown in Figure 31.5 is a high-resolution flexible fiberscope with end-tip and focus control. Flexible lengths from 1 m to 5 m are available for viewing inaccessible areas in boilers, heat exchangers, castings, turbines, interior welds, and other equipment where periodic or troubleshooting inspection is essential.
Figure 31.4 Illuminated magnifiers for visual inspection. Courtesy P. W. Allen & Co.
31.3 Surface-inspection methods
The inspection of surfaces for defects at or close to the surface presents great scope for a variety of inspection techniques. With internal-flaw detection one is often limited to radiographic and ultrasonic techniques, whereas with surface-flaw detection visual and other electromagnetic methods, such as magnetic particle, potential drop, and eddy current, become available.
31.3.1 Visual Techniques
In many instances defects are visible to the eye on the surface of components. However, for the purposes of recording or
Figure 31.5 High-resolution flexible fiberscope. Courtesy P. W. Allen & Co.
gaining access to difficult locations, photographic and photomicrographic methods can be very useful. In hazardous environments, as encountered in the nuclear and offshore fields, remote television cameras coupled to video recorders allow inspection results to be assessed after the test. When coupled to remote transport systems, these cameras can be used for pipeline inspection, the cameras themselves being miniaturized for very narrow pipe sections. When surface-breaking defects are not immediately apparent, their presence may be enhanced by the use of dye penetrants. A penetrating dye-loaded liquid is applied to the material surface where, owing to its surface tension and wetting properties, a strong capillary effect exists, causing the liquid to penetrate into fine openings on the surface. After a short time (about 10 minutes), the surface is cleaned and an absorbing powder applied, which blots the dye-penetrant liquid, producing a stain around the defects. Since the dye is either bright red or fluorescent under ultraviolet light, small defects become readily visible. The penetrant process itself can be made highly automated for large-scale production but still requires trained inspectors for the final assessment. To achieve fully automated inspection, scanned ultraviolet lasers, which excite the fluorescent dye and are coupled to photodetectors to receive the visible light from defect indications, are under development.
31.3.2 Magnetic Flux Methods

When the material under test is ferromagnetic, the magnetic properties may be exploited to provide testing methods based on the localized escape of flux around defects in magnetized material. For example, when a magnetic flux is present in a material such as iron, below magnetic saturation, the flux will tend to confine itself within the material surface. This is due to the continuity of the tangential component of the magnetic field strength, H, across the magnetic boundary. Since the permeability of iron is high, the external flux density, Bext, is small (Figure 31.6(a)). Around a defect, the
Figure 31.6 Principle of magnetic flux test.
presence of a normal component of B incident on the defect will provide continuity of the flux to the air, and a localized flux escape will be apparent (Figure 31.6(b)). If only a tangential component is present, no flux leak occurs, maximum leakage conditions being obtained when B is normal to the defect.
31.3.2.1 Magnetization Methods

To detect flux leakages, material magnetization levels must be substantial (values in excess of 0.72 tesla for the magnetic flux density). This, in turn, demands high current levels. Applying Ampere's current law to a 25-mm diameter circular bar of steel having a relative permeability of 240, the current required to achieve a magnetic field strength of 2400 A/m at the surface, giving the required flux value, is 188 A peak. Such current levels are applied either as ac current from a step-down transformer whose output is shorted by the specimen or as half-wave rectified current produced by diode rectification. Differences in the type of current used become apparent when assessing the "skin depth" of the magnetic field. For ac current in a magnetic conductor, magnetic field penetration, even at 50 Hz, is limited; the dc component present in the half-wave rectified current produces a greater depth of penetration. Several methods of achieving the desired flux density levels at the material surface are available. They include the use of threading bars, coils, and electromagnets (Figure 31.7).
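The arithmetic behind these figures is easy to check. The sketch below is a minimal illustration, not part of any standard procedure: it applies Ampere's law at the surface of a round bar together with B = μ0μrH, using only the material values quoted above; the function name is ours.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def magnetization_current(diameter_m, mu_r, b_target_t):
    """Current through a threading bar needed to reach flux density B at the
    surface of a round bar: H = B / (mu_0 * mu_r), and by Ampere's law the
    enclosed current is I = H * (pi * d), H times the bar circumference."""
    h_surface = b_target_t / (MU_0 * mu_r)
    return h_surface, h_surface * math.pi * diameter_m

# The 25 mm steel bar from the text: relative permeability 240, B = 0.72 T
h, i = magnetization_current(0.025, 240, 0.72)
print(f"H = {h:.0f} A/m, I = {i:.0f} A")  # about 2400 A/m and 188 A, as quoted
```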
31.3.2.2 Flux-Leakage Detection

One of the most effective detection systems is the application of finely divided ferric-oxide particles, which form an indication by their accumulation around the flux leakage. The addition of a fluorescent dye to the particles enables the indication to be easily seen under ultraviolet light. Such a system, however, does not support recording of defect information except through photographic and replication techniques such as strippable magnetic paint and rubber. Alternative flux-detection
Figure 31.7 Ways of inducing flux.
techniques are becoming available, such as the application of modified magnetic recording tape which is wrapped around the material before magnetization. After the test, the tape can be unwound and played through a tape unit, which detects the presence of the recorded flux leakages. For more dynamic situations, such as in the online testing of tube and bar materials, a faster detection technique for the recording of indications is required. A small detector head, comprising a highly permeable yoke on which a number of turns of wire are wrapped, can be used to detect small flux leakages by magnetic induction as the flux leak passes under the detector (Figure 31.8). The material motion can be linked to a chart recorder showing impulses as they occur.
31.3.3 Potential Drop Techniques

The measurement of material resistance can be related to measurements of the depth of surface-breaking cracks. A four-point probe head (Figure 31.9) is applied to a surface and current passed between the outer probes. The potential drop across the crack is measured by the two inner probes and, as the crack depth increases, the longer current path causes an increasing potential drop. By varying the probe spacing, maximum sensitivity to changes in crack depth can be obtained. In addition, the application of ac current of varying frequency permits the depth of current penetration beneath the surface to be varied due to the "skin effect" (see below).
31.3.4 Eddy-Current Testing

A powerful method of assessing both the material properties and the presence of defects is the eddy-current technique. A time-changing magnetic field is used to induce weak electrical currents in the test material, these currents being sensitive to changes in both material conductivity and permeability. In turn, the intrinsic value of the conductivity depends mainly on the material composition but is influenced by changes in structure due to crystal imperfections (voids or interstitial atoms), stress conditions, or work hardening dependent upon the state of dislocations in the material. Additionally, the presence of discontinuities will disturb the eddy-current flow patterns, giving detectable changes. The usual eddy-current testing system comprises a coil which, due to the applied current, produces an ac magnetic field within the material. This field, in turn, excites the eddy currents, which produce their own field, thus altering that of the coil (Figure 31.10). This is reflected in the impedance of the coil, whose resistive component is related to eddy-current losses and whose inductance depends on the magnetic circuit conditions. Thus, conductivity changes will be reflected in changes in coil resistance, whilst changes in permeability or in the presentation of the coil to the surface will affect the coil inductance. The frequency of excitation represents an important test parameter, due to the "skin effect" varying the depth of current penetration beneath the surface. From the skin-depth formula

δ = 1/√(πfμσ)
Figure 31.8 Detection of flux leakage. Δl: flux-leakage width; Δφ: flux-leakage magnitude; V: induced voltage.
where f is the frequency, μ the permeability, and σ the conductivity, it can be seen that in ferromagnetic material the skin depth, δ, is less than in non-magnetic materials by the large factor of the square root of the permeability. By selection of the appropriate frequency, usually in the range 100 kHz to 10 MHz, the detection of discontinuities and other subsurface features can be varied. The higher the frequency, the less the depth of penetration. In addition, in ferromagnetic material the ability of an ac magnetic field to bias the material into saturation results in an incremental permeability close to that of the non-magnetic material.
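As an illustration of how strongly the test frequency and the material control penetration, the following sketch evaluates the skin-depth formula directly. The material constants are assumed, typical textbook values rather than figures from this chapter.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def skin_depth(freq_hz, mu_r, sigma):
    """Skin depth delta = 1 / sqrt(pi * f * mu * sigma), in metres."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU_0 * mu_r * sigma)

# Assumed values: aluminium mu_r ~ 1, sigma ~ 3.5e7 S/m;
# mild steel mu_r ~ 100, sigma ~ 6e6 S/m.
for name, mu_r, sigma in (("aluminium", 1, 3.5e7), ("mild steel", 100, 6e6)):
    for f in (100e3, 1e6, 10e6):
        d_um = skin_depth(f, mu_r, sigma) * 1e6
        print(f"{name:10s} {f/1e6:5.2f} MHz: delta = {d_um:6.1f} um")
```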
Figure 31.9 Probe for potential drop technique.
Figure 31.10 Principle of eddy-current testing. Zcoil = (R0 + Re) + jXL; Re represents the additional losses due to eddy-current flow.
The eddy-current method therefore represents a very general testing technique for conducting materials in which changes in conductivity, permeability, and surface geometry can be measured.
31.3.4.1 Eddy-Current Instrumentation

In eddy-current testing the coils are incorporated into a balanced-bridge configuration to provide maximum detection of small changes in coil impedance reflecting the material changes. The simplest type of detector is that which measures the magnitude of bridge imbalance. Such units are used for material comparison of known against unknown and for simple crack detection (Figure 31.11). A more versatile type of unit is one in which the magnitude and phase of the coil-impedance change are measured (Figure 31.12), since changes in inductance will be 90° out of phase with those from changes in conductivity. Such units as the vector display take the bridge imbalance voltage V0 e^(j(ωt + φ)) and pass it through two quadrature phase detectors. The 0° reference detector produces a voltage proportional to V0 cos φ, whilst the 90° detector gives a voltage of V0 sin φ. The vector point displayed on an X–Y storage oscilloscope represents the magnitude and phase of the voltage imbalance and hence the impedance change. To allow positioning of the vector anywhere around the screen, a vector rotator is incorporated using sine and cosine potentiometers. These implement the equations
Figure 31.11 Simple type of eddy-current detector.
Figure 31.12 Eddy-current detection with phase discrimination.
Vx′ = Vx cos φ − Vy sin φ
Vy′ = Vx sin φ + Vy cos φ

where Vx′ and Vy′ are the rotated X and Y oscilloscope voltages and Vx and Vy are the phase-detector outputs (a numerical sketch of this rotation appears at the end of this subsection). In setting up such a unit, the movement of the spot during lift-off of the probe from the surface identifies the magnetic circuit or permeability axis, whereas defect deflection and conductivity changes will introduce a component primarily along an axis at right angles to the permeability axis (Figure 31.13). Additional vector processing can take place to remove the response from a known geometrical feature. Compensation probes can produce a signal from the feature, such as support members, this signal being subtracted from that of the defect plus feature. Such cancellation can reveal small defect responses which are being masked by the larger geometrical response. Also, a number of different frequencies can be applied simultaneously (multi-frequency testing) to give depth information resulting from the response at each frequency. The third type of testing situation, for large-scale inspection of continuous material, comprises a testing head through which the material passes. A detector system utilizing the phase-detection units discussed previously is set up to respond to known defect orientations. When these defect signals
Figure 31.13 CRT display for eddy-current detection.
exceed a predetermined threshold, they are recorded along with the tube position on a strip chart, or the tube itself is marked with the defect position.
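A minimal numerical sketch of the vector rotator follows; the sample voltages and rotation angle are assumed purely for illustration.

```python
import math

def rotate(vx, vy, phi_deg):
    """Rotate the phase-detector outputs (Vx, Vy) through phi, as the sine and
    cosine potentiometers do:
        Vx' = Vx cos(phi) - Vy sin(phi)
        Vy' = Vx sin(phi) + Vy cos(phi)"""
    phi = math.radians(phi_deg)
    return (vx * math.cos(phi) - vy * math.sin(phi),
            vx * math.sin(phi) + vy * math.cos(phi))

# Assumed lift-off response at 36 degrees; rotating by -36 degrees aligns it
# with the horizontal (permeability) axis, leaving defect signals to deflect
# mainly along the vertical axis of the storage oscilloscope.
vx = 0.50 * math.cos(math.radians(36))
vy = 0.50 * math.sin(math.radians(36))
print(rotate(vx, vy, -36))  # -> (0.5, ~0.0)
```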
31.4 Ultrasonics

31.4.1 General Principles of Ultrasonics

Ultrasonics, when applied to the non-destructive testing of an engineering component, relies on a probing beam of energy directed into the component interacting in an interpretable way with the component's structural features. If a flaw is present within the metal, the progression of the beam of energy is locally modified, and the modification is detected and conveniently displayed to enable the flaw to be diagnosed. The diagnosis largely depends on a knowledge of the nature of the probing energy beam, its interaction with the structural features of the component under test, and the manufacturing history of the component.
The ultrasonic energy is directed into the material under test in the form of mechanical waves or vibrations of very high frequency. Although its frequency may be anything in excess of the upper limit of audibility of the human ear, or 20 kHz, ultrasonic non-destructive testing frequencies normally lie in the range 0.5–10 MHz. The equation

λ = V/f

where λ is the wavelength, V the velocity, and f the frequency, highlights this by relating wavelength and frequency to the velocity in the material. The wavelength determines the defect sensitivity, in that any defect dimensionally less than half the wavelength will not be detected. Consequently the ability to detect small defects increases with decreasing wavelength of vibration and, since the velocity of sound is characteristic of a particular material, increasing the frequency of vibration will provide the possibility of increased sensitivity. Frequency selection is thus a significant variable in the ability to detect small flaws. The nature of ultrasonic waves is such that propagation involves particle motion in the medium through which they travel. The propagation may be by way of volume change, the compression wave form, or by a distortion process, the shear wave form. The speed of propagation thus depends on the elastic properties and the density of the particular medium.
Compression wave velocity:

Vc = √{E(1 − ν) / [ρ(1 + ν)(1 − 2ν)]}

Shear wave velocity:

Vs = √{E / [2ρ(1 + ν)]} = √(G/ρ)

where E is the modulus of elasticity, ρ the density, ν Poisson's ratio, and G the modulus of shear. Other properties of ultrasonic waves relate to the results of ultrasound meeting an interface, i.e., a boundary wall between different media. When this occurs, some of the wave is reflected, the amount depending on the acoustic properties of the two media and the direction governed by the same laws as for light waves. If the ultrasound meets a boundary at an angle, the part of the wave that is not reflected is refracted, suffering a change of direction for its progression through the second medium. Energy may be lost or attenuated during the propagation of the ultrasound due to energy absorption within the medium and to scatter, which results from interaction of the waves with microstructural features of size comparable with the wavelength. This is an important factor, as it counteracts the sensitivity to flaw location on the basis of frequency selection. Hence high frequency gives sensitivity to small flaws but may be limited by scatter and absorption to short-range detection.
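These relationships are straightforward to evaluate. The sketch below computes both velocities, and the corresponding wavelengths at typical test frequencies, for assumed steel constants (E = 210 GPa, ρ = 7850 kg/m³, ν = 0.29, values not taken from this chapter); the shear velocity comes out at roughly five-ninths of the compression velocity, as noted below.

```python
import math

def wave_velocities(e_pa, rho, nu):
    """Compression and shear velocities from E, density, and Poisson's ratio."""
    vc = math.sqrt(e_pa * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))
    vs = math.sqrt(e_pa / (2 * rho * (1 + nu)))  # = sqrt(G / rho)
    return vc, vs

vc, vs = wave_velocities(210e9, 7850, 0.29)   # assumed steel constants
print(f"Vc = {vc:.0f} m/s, Vs = {vs:.0f} m/s, ratio = {vs/vc:.2f}")
for f in (2e6, 5e6):  # typical test frequencies
    print(f"{f/1e6:.0f} MHz: lambda_c = {vc/f*1e3:.2f} mm, "
          f"lambda_s = {vs/f*1e3:.2f} mm (min detectable defect ~ lambda/2)")
```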
The compression or longitudinal wave is the most common mode of propagation in ultrasonics. In this form, particle displacement at each point in a material is parallel to the direction of propagation. The propagating wavefront progresses by a series of alternate compressions and rarefactions, the total distance occupied by one compression and one rarefaction being the wavelength. Also commonly used are shear or transverse waves, which are characterized by the particle displacement at each point in a material being at right angles to the direction of propagation. In comparing these wave motions it should be appreciated that, for a given material, shear waves have a velocity approximately five-ninths of that of compressional waves. It follows that for any frequency, the lower velocity of shear waves corresponds to a shorter wavelength. Hence, for a given frequency, the minimum size of defect detectable will be less in the case of shear waves. Other forms of shear motion may be produced. Where there is a free surface, a Rayleigh or surface wave may be generated. This type of shear wave propagates on the surface of a body with effective penetration of less than a wavelength. In thin sections bounded by two free surfaces a Lamb wave may be produced. This is a form of compressional wave which propagates in sheet material, its velocity depending not only on the elastic constants of the material but also on plate thickness and frequency. Such waveforms can be used in ultrasonic testing. A wave of a given mode of propagation may generate or transform to waves of other modes of propagation at refraction or reflection, and this may give rise to practical difficulties in the correct interpretation of test signals from the material. Ultrasonic waves are generated in a transducer mounted on a probe. The transducer material has the property of expanding and contracting under an alternating electrical field due to the piezoelectric effect. It can thus transform electrical oscillations into mechanical vibrations and vice versa. Since the probe is outside the specimen to be tested, it is necessary to provide a coupling agent between probe and specimen. The couplant, a liquid or pliable solid, is interposed between probe surface and specimen surface and assists in the passage of ultrasonic energy. The probe may be used to transmit energy as a transmitter, receive energy as a receiver, or transmit and receive as a transceiver. A characteristic of the transceiver or single-crystal probe is the dead zone, where defects cannot be resolved with any accuracy due to the transmission-echo width. Information on the passage of ultrasonic energy in the specimen under test is provided by way of the transducer, in the form of electrical impulses which are displayed on a cathode ray tube screen. The most commonly used presentation of the information is the A-scan, where the horizontal base line represents distance or time intervals and the vertical axis gives signal amplitude or intensities of transmitted or reflected signals. The basic methods of examination are transmission, pulse-echo, and resonance. In the transmission method an
ultrasonic beam of energy is passed through a specimen and investigated by placing an ultrasonic transmitter on one face and a receiver on the other. The presence of internal flaws is indicated by a reduction in the amplitude of the received signal or a loss of signal. No indication of defect depth is provided. Although it is possible with the pulse-echo method to use separate probes, it is more common to have the transmitter and receiver housed within one probe, a transceiver. Hence access to one surface only is necessary. This method relies on energy reflected from within the material finding its way back to the probe. Information is provided on the time taken by the pulse to travel from the transmitter to an obstacle, backwall, or flaw and return to the receiver. Time is proportional to the distance of the ultrasonic beam path; hence the cathode ray tube may be calibrated to
enable accurate thickness measurement or defect location to be obtained (Figure 31.14(a)). For a defect suitably orientated to the beam direction, an assessment of defect size can be made from the amplitude of the reflected signal. Compression probes are used in this test method to transmit compressional waves into the material normal to the entry surface. On the other hand, shear probes are used to transmit shear waves where it is desirable to introduce the energy into the material at an angle, and a reference called the skip distance may be used. This is the distance measured over the surface of the body between the probe index, or beam exit point of the probe, and the point where the beam axis impinges on the surface after following a double traverse path. Accurate defect location is possible using the skip distance and the beam path length (Figures 31.14(b) and (c)).

Figure 31.14 Displays presented by different ultrasonic probes and defects. (a) Distance (time) of travel–compression-wave examination; (b) skip distance–shear-wave examination; (c) beam path length (distance of travel)–shear-wave examination.
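Thickness measurement and defect location from the calibrated timebase amount to a one-line calculation: the echo transit time covers the path to the reflector and back. A minimal sketch, with an assumed velocity for compression waves in steel:

```python
def reflector_depth(velocity_m_s, transit_time_s):
    """Pulse-echo ranging: the pulse travels out and back, so
    depth = velocity * transit time / 2."""
    return velocity_m_s * transit_time_s / 2.0

# Assumed: compression waves in steel (~5920 m/s), echo received after 20 us
print(f"{reflector_depth(5920, 20e-6) * 1e3:.1f} mm")  # ~59.2 mm
```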
A condition of resonance exists when the thickness of a component is exactly equal to half the wavelength of the incident ultrasonic energy, i.e., the component will vibrate at its natural frequency. The condition causes an increase in the amplitude of the received pulse, which can readily be identified. Resonance can also be obtained if the thickness is a multiple of the half-wavelength. The resonance method consequently involves varying the frequency of the ultrasonic waves to excite a maximum amplitude of vibration in a body, or part of a body, generally for the purpose of determining thickness from one side only.
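Since resonance occurs whenever the thickness equals a whole number of half-wavelengths, thickness follows directly from any identified resonant frequency. A small sketch under assumed values (compression waves in steel):

```python
def resonant_frequencies(thickness_m, velocity_m_s, n_max=5):
    """Resonance when thickness = n * wavelength / 2, i.e. f_n = n * V / (2 t)."""
    return [n * velocity_m_s / (2 * thickness_m) for n in range(1, n_max + 1)]

def thickness_from_resonance(f_hz, velocity_m_s, n=1):
    """Invert the same relation once a resonance of known order is found."""
    return n * velocity_m_s / (2 * f_hz)

# Assumed 10 mm steel plate, V ~ 5920 m/s: fundamental at ~0.296 MHz
print([f"{f/1e6:.3f} MHz" for f in resonant_frequencies(0.010, 5920)])
print(f"{thickness_from_resonance(296e3, 5920) * 1e3:.1f} mm")
```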
31.4.2 The Ultrasonic Test Equipment Controls and Visual Presentation

In most ultrasonic sets, the signal is displayed on a cathode ray tube (CRT). Operation of the instrument is similar in both the through-transmission and pulse-echo techniques, and block diagrams of the test equipment are shown in Figures 31.15(a) and (b). The master timer controls the rate of generation of the pulses, or pulse repetition frequency (PRF), and supplies the timebase circuit giving the base line, whilst the pulse generator controls the output of the pulses, or pulse energy, which is transmitted to the probe. At the probe, electrical pulses are converted via the transducer into mechanical vibrations at the chosen frequency and directed into the test material. The
amount of energy passing into the specimen is very small. On the sound beam returning to the probe, the mechanical vibrations are reconverted at the transducer into electrical oscillations; this is the piezoelectric effect. In the main, transmitter and receiver probes are combined. At the CRT, the timebase amplifier controls the rate of sweep of the beam across the face of the tube which, by virtue of the relationship between the distance traveled by ultrasonic waves in unit time, i.e., velocity, can be used as a distance or depth scale when locating defects or measuring thickness. Signals coming from the receiver probe to the signal amplifier are magnified, the amplifier working in conjunction with an incorporated attenuator control, and the amplified signal is applied to the vertical axis. Visual display is by CRT, with the transmitted and received signals in their proper time sequence and with indications of relative amplitude. A-, B-, and C-scan presentations can be explained in terms of the CRT display (Figure 31.16). Interposed between the cathode and anode in the CRT is a grid which is used to limit the electron flow and hence control ultimate screen brightness as it is made more positive or negative. There is also a deflector system immediately following the electron gun which can deflect the beam horizontally and vertically over the screen of the tube. In A-scan presentation, the X and Y deflector plates carry the timebase and the amplified response. However, in B-scan the received and amplified echoes from defects and from the front and rear surfaces of the test plate are applied, not to the Y deflector plate as in normal A-scan, but to the grid of the CRT in order to increase the
Figure 31.15 Block diagram of ultrasonic flaw detectors using (a) through transmission and (b) pulse-echo techniques.
Figure 31.16 A-, B-, and C-scan presentations. Courtesy the Welding Institute.
brightness of the trace. If a signal proportional to the movement of the probe along the specimen is applied to the X deflector plates, the whole CRT display can be made to represent a cross section of the material over a given length of the sample (Figure 31.16(b)). A further extension to this type of presentation is used in C-scan, where both X and Y deflections on the tube follow the corresponding coordinates
of probe traverse. Flaw echoes occurring within the material are gated and used to modulate the grid of the CRT and produce brightened areas in the regions occupied by the defects. A picture is obtained very like a radiograph but with greater sensitivity (Figure 31.16(c)). Figure 31.17 is a block diagram for equipment that can give an A- or B-scan presentation, and Figure 31.18 does the same for C-scan.
Figure 31.17 A- and B-scan equipment. Courtesy the Welding Institute.
Figure 31.18 C-scan equipment. Courtesy the Welding Institute.
31.4.3 Probe Construction

In ultrasonic probe construction the piezoelectric crystal is attached to a non-piezoelectric front piece or shoe into which an acoustic compression wave is emitted (Figure 31.19(a)). The other side of the crystal is attached to a material which absorbs energy emitted in the backward direction. These emitted waves correspond to an energy loss from the crystal and hence increase the crystal damping. Ideally, one would hope that the damping obtained from the emitted acoustic wave in the forward direction would be sufficient to give the correct probe performance. Probes may generate either compression waves or angled shear waves, using either single or twin piezoelectric crystals. In the single-crystal compression probe the zone of intensity variation is not confined within the Perspex wear shoe. However, twin-crystal probes (Figure 31.19(b)) are mainly used, since the dead zone or zone of non-resolution of the single form may be extremely long. In comparison, angle probes work on the principle of mode conversion at the boundary of the probe and workpiece. An angled beam of longitudinal waves is generated by the transducer and strikes the surface of the specimen under test at an angle. If the angle is chosen correctly, only an angled shear wave is transmitted into the workpiece. Again, a twin-crystal form is available as for compression probes, as shown in Figure 31.19(b). Special probes of the focused, variable-angle, crystal mosaic, or angled compression wave type are also available
Figure 31.19 Single (a) and twin-crystal (b) probe construction.
(Figure 31.20). By placing an acoustic lens in front of a transducer crystal it is possible to focus the emitted beam in a similar manner to an optical lens. If the lens is given a cylindrical curvature, it is possible to arrange that, when testing objects in immersion testing, the sound beam enters normal to a cylindrical surface. Variable-angle probes can be adjusted to give a varying angle of emitted sound wave, these being of value when testing at various angles on the same workpiece. In the mosaic type of probe a number of crystals are laid side by side. The crystals are excited in phase such that the mosaic acts as a probe of large crystal size equal to that of the mosaic. Finally, probes of the angled compression type have been used in the testing of austenitic materials where, due to large grain structure, shear waves are rapidly attenuated. Such probes can be made by angling the incident compressional beam until it is smaller than the initial angle for emission of shear waves only. Although a shear wave also occurs in the test piece it is rapidly lost due to attenuation.
Figure 31.20 Special probe construction. (a) Focused probe; (b) variable-angle probe; (c) mosaic probe; (d) angled compression probe.
31.4.4 Ultrasonic Spectroscopy Techniques

Ultrasonic spectroscopy is a technique used to analyze the frequency components of ultrasonic signals. The origins of spectroscopy date back to Newton, who showed that white light contained a number of different colors. Each color corresponds to a different frequency; hence white light may be regarded as the sum of a number of different radiation frequencies, i.e., it contains a spectrum of frequencies. Short pulses of ultrasonic energy have characteristics similar to white light, i.e., they carry a spectrum of frequencies. The passage of white light through a material and subsequent examination of its spectrum can yield useful information regarding the atomic composition of the material. Likewise, the passage of ultrasound through a material and subsequent examination of the spectrum can yield information about defect geometry, thickness, transducer frequency response, and differences in microstructure. The difference in the type of information is related to the fact that ultrasonic signals are elastic instead of electromagnetic waves. Interpretation of ultrasonic spectra requires a knowledge of how the ultrasonic energy is introduced into the specimen. Two main factors determine this:

1. The frequency response characteristic of the transducer;
2. The output spectrum of the generator that excites the transducer.

The transducer response is important. Depending on the technique, one or two transducers may be used. The response depends on the mechanical behavior; the elastic constants and the thickness normal to the disc surface are important. In addition, damping caused by a wear plate attached to the front of the transducer and transducer–specimen coupling conditions can be significant factors. Crystals which have a low Q value give a broad, flat response satisfactory for ultrasonic spectroscopy but suffer from a sharp fall-off in amplitude response, i.e., sensitivity. Results indicate that crystal response is flat at frequencies well below the fundamental resonant frequency, fo, suggesting that higher fo values would be an advantage. However, there is a limit to the thickness of crystal material which remains robust enough to be practical. In general, some attempt is made to equalize the transducer response's effect, for instance, by changing the amplitude during frequency modulation. A straightforward procedure can be adopted of varying the frequency continuously and noting the response. Using such frequency-modulation techniques (Figure 31.21) requires one probe for transmission and one for reception. If a reflection test procedure is used, then both transducers may be mounted in one probe. Figure 31.21 shows a block outline of the system. The display on the CRT is then an amplitude-versus-frequency plot. Some modifications to the system include:

1. A detector and suitable filter between the wide-band amplifier and oscilloscope to allow the envelope function to be displayed; and
2. Substitution of an electronically tuned rf amplifier to suppress spurious signals.

Alternatively, a range of frequencies may be introduced, using the spectra implicit in pulses. The output spectra for four types of output signal are shown in Figure 31.22. The types of output are:

1. Single dc pulse with rectangular shape;
2. Oscillating pulse with rectangular envelope and carrier frequency fo;
3. dc pulse with an exponential rise and decay;
4. Oscillating pulse with exponential rise and decay and carrier frequency fo.

From these results it can be seen that the main lobe of the spectrum contains most of the spectral energy, and its width
Figure 31.21 Frequency-modulation spectroscope.
Figure 31.22 Pulse shape and associated spectra.
is inversely proportional to the pulse duration. In order to obtain a broad ultrasonic spectrum of large amplitude the excitation pulse must have as large an amplitude and as short a duration as possible. In practice, a compromise is required, since there is a limit to the breakdown strength of the transducer and the output voltage of the pulse generator. Electronic equipment used in pulse spectroscopy (Figure 31.23) differs considerably from that for frequency modulation, including a time gate for selecting the ultrasonic signals to be analyzed, and so allowing observations in the time and frequency domains. A pulse frequency-modulation technique can also be used which combines both the previous procedures (Figure 31.24). The time gate opens and closes periodically at a rate considerably higher than the frequency sweep rate, hence breaking the frequency sweep signals into a series of pulses. This technique has been used principally for determining transducer response characteristics.
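The inverse relation between pulse duration and main-lobe width is easy to check numerically. The following sketch, with assumed sampling rate and pulse durations, transforms a rectangular dc pulse and locates the first spectral null, which theory places near f = 1/duration:

```python
import numpy as np

def rect_pulse_spectrum(duration_s, fs=100e6, n=4096):
    """Magnitude spectrum of a rectangular dc pulse sampled at rate fs."""
    t = np.arange(n) / fs
    pulse = (t < duration_s).astype(float)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, np.abs(np.fft.rfft(pulse))

for dur in (0.2e-6, 1.0e-6):  # assumed pulse durations
    f, mag = rect_pulse_spectrum(dur)
    # First point where the sinc-shaped spectrum has fallen essentially to zero
    null = f[np.argmax(mag < 0.01 * mag.max())]
    print(f"{dur*1e6:.1f} us pulse: first null ~ {null/1e6:.2f} MHz "
          f"(1/duration = {1/dur/1e6:.2f} MHz)")
```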
Figure 31.23 Pulse-echo spectroscope.
31.4.5 Applications of Ultrasonic Spectroscopy

31.4.5.1 Transducer Response

It is known that the results of conventional ultrasonic inspection can vary considerably from transducer to transducer, even if element size and resonant frequency remain the same. To avoid this difficulty it is necessary to control both the response characteristics and the beam profile during fabrication. To determine the frequency response, both the pulse and pulsed frequency-modulation techniques are used. The test is carried out by analyzing the first backwall echo from a specimen whose ultrasonic attenuation has a negligible dependence on frequency and which is relatively thin, to avoid errors due to divergence of the ultrasonic beam. Analysis of the results yields response characteristics, typical examples of which are shown in Figure 31.25.
31.4.5.2 Microstructure

The attenuation of an ultrasonic signal as it passes through an amorphous or polycrystalline material will depend on the microstructure of the material and the frequency content of the ultrasonic signal. Differences in microstructure can therefore be detected by examining the ultrasonic attenuation spectra. The attenuation of ultrasound in polycrystalline materials with randomly oriented crystallites is caused mainly by scattering. The elastic properties of the single crystal and the average grain size determine the frequency dependence of the ultrasonic attenuation. Attenuation in glass, plastics, and other amorphous materials is characteristic of the material, although caused by mechanisms other than grain-boundary scattering.
Figure 31.24 Pulsed frequency-modulation spectroscope.
Figure 31.25 Frequency response of various piezoelectric transducers.
31.4.5.3 Analyzing Defect Geometry

The assessment of the shape of a defect is often made by measuring changes in the ultrasonic echo height. Interpretation on this basis alone may give results that are less than accurate, since factors such as orientation, geometry, and acoustic impedance could affect the echo size at least as much as the defect size. One technique that allows further investigation is to vary the wavelength of the ultrasonic beam by changing the frequency, hence leading to ultrasonic spectroscopy. The pulse-echo spectroscope is ideally suited for this technique. Examination of test specimens has shown that
the use of defect echo heights for the purpose of size assessment is not advisable if the spectral “signatures” of the defect echoes show significant differences.
31.4.6 Other Ways of Presenting Information from Ultrasonics

Many other techniques of processing ultrasonic information are available; in the following, the general principles of two main groups, (1) ultrasonic visualization and (2) ultrasonic imaging, will be outlined. More detailed information may be found in the references given at the end of this chapter. Ultrasonic visualization makes use of two main techniques: (1) Schlieren methods and (2) photoelastic methods. A third method combining both of these has been described by D. M. Marsh in Research Techniques in Non-destructive Testing (1973). Schlieren methods depend on detecting the deviation of light caused by the refractive-index gradients accompanying ultrasonic waves. In most cases this technique has been used in liquids. Photoelastic visualization reveals the stresses in an ultrasonic wave using crossed Polaroids to detect stress birefringence of the medium. Its main uses have been for particularly intense fields, such as those in solids in physical contact with the transducer. Many other methods have been tried. The principle of the Schlieren system is shown in Figure 31.26. In the absence of ultrasonics, all light is intercepted by E. Light diffracted by ultrasound, and hence passing outside E, forms an image in the dark field. Considerable care is required to make the system effective. Lenses or mirrors may be used in the system as focusing devices; mirrors are free from chromatic aberration and can be accurately produced even for large apertures. Problems may exist with layout and off-axis errors. Lens systems avoid these problems but can be expensive for large apertures. The basic principles of photoelastic visualization (Figure 31.27) are the same as those used in photoelastic stress analysis (see Chapter 4, Section 4.9). Visualization works well in situations where continuous ultrasound is being used. If a pulsed system is used, then collimation of the light beam crossing the ultrasound beam and a pulsed light source are required. The principal advantages of a photoelastic system are its compactness and its lack of protective covering, in contrast to the Schlieren system. Photoelastic systems are also cheaper. However, the Schlieren system's major advantage lies in its ability to visualize in fluids as well as in solids. The sensitivity of photoelastic techniques to longitudinal waves is half that for shear waves. Measurements with Schlieren systems have shown that the sensitivity of ultrasound visualization systems is particularly affected by misalignment of the beam. For example, misalignment of the beam in water is
Figure 31.26 Schlieren visualization.
Figure 31.27 Diagrams of photoelastic visualization methods. (a) Basic anisotropic system; (b) circular polarized system for isotropic response; (c) practical large-aperture system using Fresnel lenses.
magnified by refraction in the solid. It is therefore essential to ensure firm securing of transducers by clamps capable of fine adjustment. An alternative approach to visualization of ultrasonic waves is the formation of an optical image from ultrasonic radiation. Many systems exist in this field and, once again, a few selected systems will be outlined. Ultrasonic cameras of various types have been in existence for many years. In general, their use has been limited to the laboratory due to expense and limitations imposed by design. Most successful cameras have used quartz plates forming the end of an evacuated tube for electron-beam scattering, and the size of these tubes is limited by the need to withstand atmospheric pressure. This can be partly overcome by using thick plates and a harmonic, although there is a loss in sensitivity. A new approach to ultrasonic imaging is the use of liquid crystals to form color images. All these systems suffer from the need to have the object immersed in water, where the critical angle is such that radiation can only be received over a small angle.
Increasingly, acoustic holography is being introduced into ultrasonic flaw detection and characterization. A hologram is constructed from the ultrasonic radiation and is used to provide an optical picture of any defects. This type of system is theoretically complex and has practical problems associated with stability, as does any holographic system, but there are an increasing number of examples of its use (for example, medical and seismic analysis). Pasakomy has shown that real-time holographic systems may be used to examine welds in pressure vessels. Acoustic holography with scanned hologram systems has been used to examine double-V notch welds. Reactor pressure vessels were examined using the same technique, and flaw detection was within the limits of error.
31.4.7 Automated Ultrasonic Testing

The use of microcomputer technology to provide semi-automatic and automatic testing systems has increased, and several pipe testers are available which are controlled by microprocessors. A typical system would have a microprocessor programmed to record defect signals considered to represent flaws. Depending on the installation, it could then mark these flaws, record them, or transmit them via a link to a main computer. Testing of components online has been developed further by British Steel and by staff at SSNDT. Allied to the development of automatic testing is the development of data-handling systems to allow flaw detection to be automatic. Since the volume of data to be analyzed is large, the use of computers is essential. One danger associated with this development is that of over-inspection. The number of data values collected should be no more than is necessary to meet the specification for flaw detection.
31.4.8 Acoustic Emission

Acoustic emission is the release of energy as a solid material undergoes fracture or deformation. In non-destructive testing two features of acoustic emission are important: its ability to allow remote detection of crack formation or movement at the time it occurs, and its ability to do this continuously. The energy released when a component undergoes stress leads to two types of acoustic spectra: continuous or burst type. Burst-type spectra are usually associated with the leading edge of a crack being extended, i.e., crack growth. Analysis of burst spectra is carried out to identify material discontinuities. Continuous spectra are usually associated with yield rather than fracture mechanisms. Typical mechanisms which release acoustic energy are: (1) crack growth, (2) dislocation avalanching at the leading edge of discontinuities, and (3) discontinuity surfaces rubbing.
Acoustic spectra are generated when a test specimen or component is stressed. Initial stressing will produce acoustic emissions from all discontinuities, including minor yield in areas of high local stress. The most comprehensive assessment of a structure is achieved in a single stress application; however, cyclic stressing may be used for structures that may not have their operating stresses exceeded. In large structures which have undergone a period of continuous stress a short period of stress relaxation will allow partial recovery of acoustic activity. Acoustic emission inspection systems have three main functions: (1) signal detection, (2) signal conditioning, and (3) analysis of the processed signals. It is known that the acoustic emission signal in steels is a very short pulse. Other studies have shown that a detecting system sensitive mainly to one mode gives a better detected signal. Transducer patterns may require some thought, depending on the geometry of the item under test. Typical transducer materials are PZT-5A, sensitive to Rayleigh waves and shear waves, and lithium niobate in an extended temperature range. Signal conditioning basically takes the transducer output and produces a signal which the analysis system can process. Typically a signal conditioning system will contain a low-noise pre-amplifier and a filter amplification system. The analysis of acoustic emission spectra depends on being able to eliminate non-recurring emissions and to process further statistically significant and predominant signals. A typical system is shown in Figure 31.28. The main operating functions are: (1) a real-time display and (2) a source analysis computer. The real-time display gives the operator an indication of all emissions while significant emissions are processed by the analysis system. A permanent record of significant defects can also be produced. In plants where turbulent fluid flow and cavitation are present, acoustic emission detection is best carried out in the low-megahertz frequency range to avoid noise interference. Acoustic emission detection has been successfully applied to pipe rupture studies, monitoring of known flaws in pressure vessels, flaw formation in welds, stress corrosion cracking, and fatigue failure.
31.5 Radiography

Radiography has long been an essential tool for inspection and development work in foundries. In addition, it is widely used in the pressure vessel, pipeline, offshore drilling platform, and many other industries for checking the integrity of welded components at the in-process, completed, or in-service stages. The technique also finds application in the aerospace industry. The method relies on the ability of high-energy, short-wavelength sources of electromagnetic radiation such as
Figure 31.28 System for acoustic emission detection.
X-rays, gamma rays (and neutron sources) to penetrate solid materials. By placing a suitable recording medium, usually photographic film, on the side of the specimen remote from the radiation source, and with suitable adjustment of technique, a shadowgraph or two-dimensional image of the surface and internal features of the specimen can be obtained. Thus radiography is one of the few non-destructive testing methods suitable for the detection of internal flaws, and it has the added advantage that a permanent record of these features is directly produced. X- and gamma rays are parts of the electromagnetic spectrum (Figure 31.29). Members of this family are connected by the relationship velocity equals frequency times wavelength, and have a common velocity of 3 × 10⁸ m/s. The shorter the wavelength, the higher the degree of penetration of matter. Equipment associated with them is also discussed in Chapter 22.
31.5.1 Gamma Rays

The gamma-ray sources used in industrial radiography are artificially produced radioactive isotopes. Though many radioactive isotopes are available, only a few are suitable for radiography. Some of the properties of the commonly used isotopes are shown in Table 31.1. Since these sources emit radiation continuously, they must be housed in a protective container made of a heavy metal such as lead
Figure 31.29 Electromagnetic spectrum.
or tungsten. When the container is opened in order to take the radiograph it is preferable that this be done by remote control in order to reduce the risk of exposing the operator to the harmful rays. Such a gamma-ray source container is shown in Figure 31.30.
Table 31.1 Isotopes commonly used in gamma radiography

Source        Effective equiv.   Half-life   Specific emission   Specific activity   Used for steel
              energy (MeV)                   (R/h/Ci)            (Ci/g)              thickness (mm) up to
Thulium-170   0.08               117 days    0                   0.0025              9
Iridium-192   0.4                75 days     0.48                25                  75
Cesium-137    0.6                30 yr       0.35                25                  80
Cobalt-60     1.1; 1.3           5.3 yr      1.3                 120                 140
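Because each source decays with its characteristic half-life, exposure times must be extended as the source ages. The sketch below applies the standard decay law; the source strength and exposure figures are assumed for illustration only.

```python
def activity(a0_ci, half_life_days, elapsed_days):
    """Radioactive decay: A = A0 * 0.5 ** (t / T_half)."""
    return a0_ci * 0.5 ** (elapsed_days / half_life_days)

def corrected_exposure(t0_min, a0_ci, half_life_days, elapsed_days):
    """Exposure time scales inversely with activity for the same film dose."""
    return t0_min * a0_ci / activity(a0_ci, half_life_days, elapsed_days)

# Assumed: a 20 Ci iridium-192 source (half-life 75 days) used 60 days later
print(f"{activity(20, 75, 60):.1f} Ci")                 # ~11.5 Ci remaining
print(f"{corrected_exposure(10, 20, 75, 60):.1f} min")  # a 10 min exposure becomes ~17.4 min
```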
Figure 31.30 Gamma-ray source container. Saddle is attached to the work piece. The operator slips the source container into the saddle and moves away. After a short delay the source moves into the exposed position. When the preset exposure time expires the source returns automatically into its safe shielding. Courtesy Pantatron Radiation Engineering Ltd.
31.5.2 X-rays

X-rays are produced when high-speed electrons are brought to rest by hitting a solid object. In radiography the X-rays are produced in an evacuated, heavy-walled glass vessel or "tube." The typical construction of a tube is shown in Figure 31.31. In operation, a dc voltage in the range 100 kV to 2 MV is applied between a heated, spirally wound filament (the cathode) and the positively charged, fairly massive copper anode. The anode has embedded in it a tungsten insert or target of effective area 2–4 mm², and it is onto this target that the electrons are focused. The anode and cathode are placed about 50 mm apart, and the tube current is kept low (5–10 mA) in order to prevent overheating of the anode. Typical voltages for a range of steel thicknesses are shown in Table 31.2. The high voltage needed to produce the X-rays is obtained by relatively simple circuitry consisting of suitable
combinations of transformers, rectifiers, and capacitors. Two of the more widely used circuits are the Villard, which produces a pulsating output, and the Greinacher, producing an almost constant potential output. These are shown in Figure 31.32 along with the form of the resulting voltage. A voltage stabilizer at the 240 V input stage is desirable. Exposure (the product of current and time) varies from specimen to specimen, but with a tube current of 5 mA exposure times in the range 2–30 min are typical.
31.5.3 Sensitivity and IQI

Performance can be optimized by the right choice of type of film and screens, voltage, exposure, and film focal (target) distance. The better the technique, the higher the sensitivity of the radiograph. Sensitivity is a measure of the smallness
Figure 31.31 Single-section, hot-cathode, high-vacuum, oil-cooled radiographic X-ray tube.
Table 31.2 Maximum thickness of steel which can be radiographed with different X-ray energies

X-ray energy (keV)   High-sensitivity technique   Low-sensitivity technique
                     thickness (mm)               thickness (mm)
100                  10                           25
150                  15                           50
200                  25                           75
250                  40                           90
400                  75                           110
1000                 125                          160
2000                 200                          250
5000                 300                          350
30000                300                          380
of flaw which may be revealed on a radiograph. Unfortunately, this important feature cannot be measured directly, and so the relative quality of the radiograph is assessed using a device called an image quality indicator (IQI). A number of different designs of IQI are in use, none of which is ideal. After extensive experimentation, the two types adopted by the British Standards Institution and accepted by the ISO (International Organization for Standardization) are the wire type and the step-hole type. In use, the IQI should, wherever possible, be placed in close contact with the source side of the specimen, and in such a position that it will appear near the edge of whichever size of film is being used.
Figure 31.32 Voltage-doubling circuits. (a) Villard; (b) Greinacher constant-potential.
31.5.3.1 Wire Type of IQI

This consists of a series of straight wires 30 mm long with diameters as shown in Table 31.3. Five models are available, the most popular containing seven consecutive wires. The wires are laid parallel and equidistant from each other and mounted between two thin, transparent plastic sheets of low absorption for X- or gamma rays. This means that although the wires are fixed in relation to each other, the IQI is flexible and thus useful for curved specimens. The wires should be of the same material as that being radiographed. IQIs for steel, copper, and aluminum are commercially available. For other materials it may not be possible to obtain an IQI of matching composition. In this case, a material of similar absorptive properties may be substituted. Also enclosed in the plastic sheet are letters and numbers to indicate the material type and the series of wires (Figure 31.33). For example, 4Fe10 indicates that the IQI is suitable for iron and steel and contains wire diameters from 0.063 to 0.25 mm. For weld radiography, the IQI is placed with the wires lying centrally on the weld and at right angles to the length of the weld.
The percentage sensitivity, S, of the radiograph is calculated from

S = (diameter of thinnest wire visible on the radiograph / specimen thickness (mm)) × 100
Thus the better the technique, the more wires will be imaged on the radiograph. It should be noted that the smaller the value of S, the better the quality of the radiograph.
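The calculation is trivial to automate; a minimal sketch (the wire reading and specimen thickness are assumed example values):

```python
def wire_iqi_sensitivity(thinnest_visible_wire_mm, specimen_thickness_mm):
    """S = (thinnest visible wire diameter / specimen thickness) * 100.
    The smaller S is, the better the radiograph."""
    return thinnest_visible_wire_mm / specimen_thickness_mm * 100.0

# Example: wire 4 (0.063 mm) of a 4Fe10 IQI just visible over 10 mm of steel
print(f"S = {wire_iqi_sensitivity(0.063, 10.0):.2f}%")  # 0.63%
```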
31.5.3.2 Step or Hole Type of IQI

This consists of a series of uniform-thickness metal plaques, each containing one or two drilled holes. The hole diameter is equal to the step thickness. The step thicknesses and hole diameters are shown in Table 31.4. For steps 1 to 8, two holes are drilled in each step, each hole being located 3 mm from the edge of the step. The remaining steps, 9 to 18, have only a single hole located in the center of the step. For convenience, the IQI may be machined as a single step wedge (Figure 31.34). For extra flexibility each plaque is mounted in line but separately between two thin plastic or rubber sheets. Three models are available, the step series in each being as shown in Table 31.5. In this type also, a series of letters and numbers identifies the material and thickness range for which the IQI is suitable. The percentage sensitivity, S, is calculated from

S = (diameter of smallest hole visible on the radiograph / specimen thickness (mm)) × 100
Table 31.3 Wire diameters for the wire-type IQI

Wire no.   Diameter (mm)   Wire no.   Diameter (mm)   Wire no.   Diameter (mm)
1          0.032           8          0.160           15         0.80
2          0.040           9          0.200           16         1.00
3          0.050           10         0.250           17         1.25
4          0.063           11         0.320           18         1.60
5          0.080           12         0.400           19         2.00
6          0.100           13         0.500           20         2.50
7          0.125           14         0.63            21         3.20

Table 31.4 Hole and step dimensions for IQI

Step no.   Diameter and step thickness (mm)   Step no.   Diameter and step thickness (mm)
1          0.125                              10         1.00
2          0.160                              11         1.25
3          0.200                              12         1.60
4          0.250                              13         2.00
5          0.320                              14         2.50
6          0.400                              15         3.20
7          0.500                              16         4.00
8          0.630                              17         5.00
9          0.800                              18         6.30
Figure 31.33 Wire-type IQI.
For the thinner steps with two holes it is important that both holes are visible before that diameter can be included in the calculation. In use, this type is placed close to, but not on, a weld.
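A sketch of the corresponding calculation, including the rule that steps 1 to 8 only count when both holes are visible (the radiograph reading used here is an assumed example):

```python
# Step thicknesses (mm) from Table 31.4; hole diameter equals step thickness.
STEP_MM = {1: 0.125, 2: 0.160, 3: 0.200, 4: 0.250, 5: 0.320, 6: 0.400,
           7: 0.500, 8: 0.630, 9: 0.800, 10: 1.00, 11: 1.25, 12: 1.60,
           13: 2.00, 14: 2.50, 15: 3.20, 16: 4.00, 17: 5.00, 18: 6.30}

def hole_iqi_sensitivity(holes_seen, specimen_thickness_mm):
    """holes_seen maps step number -> number of holes visible. Steps 1-8 carry
    two holes and only qualify when both are seen; the smallest qualifying
    diameter gives S = (diameter / specimen thickness) * 100."""
    qualifying = [STEP_MM[step] for step, n in holes_seen.items()
                  if n >= (2 if step <= 8 else 1)]
    return min(qualifying) / specimen_thickness_mm * 100.0

# Assumed reading on 16 mm of steel: both holes of steps 5 and 6 seen,
# only one hole of step 4, so step 4 does not qualify.
print(f"S = {hole_iqi_sensitivity({4: 1, 5: 2, 6: 2}, 16.0):.1f}%")  # 2.0%
```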
Figure 31.34 Step/hole type IQI.
Table 31.5 Steps and holes included in different models of IQI

Model   Step and hole sizes
A       1–6 inclusive
B       7–12 inclusive
C       13–18 inclusive
31.5.4 Xerography

Mainly because of the high price of silver, attempts have been made to replace photographic film as the recording medium. Fluoroscopy is one such method but is of limited application due to its lack of sensitivity and inability to cope with thick sections in dense materials. An alternative technique which has been developed, and which can produce results of comparable sensitivity to film radiography in the medium voltage range (100–250 kV), is xerography. In this process, the X-ray film is replaced by an aluminum plate coated on one side with a layer of selenium 30–100 μm thick. A uniform positive electric charge is induced in the selenium layer. Since selenium has a high resistivity, the charge may be retained for long periods provided the plate is not exposed to light, ionizing radiations, or unduly humid atmospheres. Exposure to X- or gamma rays causes a leakage of the charge from the selenium to the earthed aluminum backing plate, the leakage at any point being proportional to the radiation dose falling on it. The process of forming the image is shown in Figure 31.35. This latent image is developed by blowing a fine white powder over the exposed plate. The powder becomes charged (by friction) and the negatively charged particles are attracted and adhere to the positively charged selenium, the amount attracted being proportional to the amount of charge at each point on the "latent" selenium image. The image is now ready for viewing, which is best done in an illuminator using low-angle illumination; the image will withstand vibration, but it must not be touched. The process is capable of a high degree of image sharpness since the selenium is virtually free of graininess, and sharpness is not affected by the grain size of the powder.
Figure 31.35 Diagrammatic representation of the process of xerography.
An apparent drawback is the low inherent contrast of the image, but there is an outlining effect on image detail which improves the rendering of most types of weld and casting flaws, thus giving an apparently high contrast. The overall result is that sensitivity is similar to that obtained with film.
31.5.5 Fluoroscopic and Image-Intensification Methods

In fluoroscopy the set-up of source, specimen, and recording medium is similar to that for radiography. However, instead of film, a specially constructed transparent screen is used which fluoresces, i.e., emits light when X-rays fall on it. This enables a positive image to be obtained, since greater amounts of radiation, for example that passing through thinner parts of the specimen, will result in greater brightness. Fluoroscopy has the following advantages over radiography:

1. The need for expensive film is eliminated;
2. The fluorescent screen can be viewed while the specimen is moving, resulting in:
   a. Easier image interpretation;
   b. Faster throughput.

Unfortunately, the sensitivity possible with fluoroscopy is considerably inferior to that obtained with film radiography. It is difficult to obtain a sensitivity better than 5 percent, whereas for critical work a sensitivity of 2 percent or better is required. Therefore, although the method is widely used in the medical field, its main use in industry is for applications where resolution of fine detail is not required. There are three reasons for the lack of sensitivity:

1. Fluoroscopic images are usually very dim. The characteristics of the human eye are such that, even when fully dark adapted, it cannot perceive at low levels of brightness the small contrasts or fine detail which it can at higher levels.
2. In an attempt to increase image brightness, the fluorescent screens are usually constructed using a zinc sulphide–cadmium sulphide mixture which, although increasing brightness, gives a coarser-grained and hence a more blurred image.
3. The image produced on a fluoroscopic screen is much less contrasty than that on a radiograph.
31.5.5.1 Image-Intensification Systems
Figure 31.36 Diagram of 5-inch Philips image-intensifier tube.
In fluoroscopy the main problem of low screen brightness is due mainly to:

1. The low efficiency: only a fraction of the incident X-rays are converted into light;
2. The light which is produced at the screen is scattered in all directions, so that only a small proportion of the total produced is collected by the eye of the viewer.

In order to overcome these limitations, a number of image-intensification and image-enhancement systems have been developed. The electron-tube intensifier is the commonest type. Such instruments are commonly marketed by Philips and Westinghouse. In this system use is made of the phenomenon of photoelectricity, i.e., the property possessed by some materials of emitting electrons when irradiated by light. The layout of the Philips system is shown in Figure 31.36. It consists of a heavy-walled glass tube with an inner conducting layer over part of its surface which forms part of the electron-focusing system. At one end of the tube there is a two-component screen comprising a fluorescent screen and, in close contact with it, a photoelectric layer supported on a thin curved sheet of aluminum. At the other end of the tube is the viewing screen and an optical system. The instrument operates as follows. When X-rays fall on the primary fluorescent screen they are converted into light which, in turn, excites the photoelectric layer in contact with it and causes it to emit electrons: i.e., a light image is converted into an electron image. The electrons are accelerated across the tube by a dc potential of 20–30 kV and focused on the viewing screen. Focusing of the electron image occurs largely because of the spherical curvature of the photocathode. However, fine focusing is achieved by variation of a small positive voltage applied to the inner conducting layer of the glass envelope. As can be seen from Figure 31.36, the electron image reproduced on the viewing screen is much smaller and hence much brighter than that formed at the photocathode. Further increase in brightness is obtained because the energy imparted to the electrons by the accelerating electric field is given up on impact. Although brighter, the image formed on the final screen is small, and it is necessary to increase its size with an optical magnifier. In the Philips instrument an in-line system provides a linear magnification of nine for either monocular or binocular viewing.
Figure 31.37 Diagram of Westinghouse image-intensifier tube and optical system.
As can be seen from Figure 31.37, the Westinghouse instrument is somewhat similar to that of Philips, but there are two important differences: 1. Westinghouse uses a subsidiary electron lens followed by a final main electron lens, fine focusing being achieved by varying the potential of the weak lens. 2. Westinghouse uses a system of mirrors and lenses to prepare the image for final viewing. The advantages of this system are that viewing is done out of line of the main X-ray beam and the image can be viewed simultaneously by two observers.
31.6 Underwater non-destructive testing The exploration and recovery of gas and oil offshore, based on large fabricated structures, has created a demand for non-destructive testing capable of operation at and below the surface of the sea. Because of high capital costs, large operating costs, and public interest, the structures are expected to operate all year round and to be re-certified by the relevant authorities and insurance companies, typically on a five-year renewal basis.
The annual inspection program must be planned around limited opportunities and normally covers: 1. Areas of previous repair; 2. Areas of high stress; 3. Routine inspection on a five-year cycle. The accumulated inspection records are of great importance. The inspection is designed to include checks on: 1. Structural damage caused by operational incidents and by changes in seabed conditions; 2. Marine growth accumulation, both masking faults and adding extra mass to the structure; 3. Fatigue cracking caused by repeated cyclic loading from wind and wave action; 4. Corrosion originating from the action of salt water. The major area of concern is within the splash zone and just below, where incident light levels, oxygen levels, repeated wetting/drying, and temperature differentials give rise to the highest corrosion rates. The environment is hostile to both equipment and operators, in conditions where safety considerations make demands on inspection techniques requiring delicate handling and high interpretative skills.
31.6.1 Diver Operations and Communication Generally, experienced divers are trained as non-destructive testing operators rather than inspection personnel being converted into divers. The work is fatiguing, and inspection is complicated by poor communications between the diver and the supervising surface inspection engineer. These constraints lead to the use of equipment which is either robust and provides indications which are simple for first-line interpretation by the diver, or uses sophisticated data transmission to the surface for interpretation and relegates the diver's role to one of positioning the sensor-head. Equipment requiring a high degree of interaction between diver and surface demands extensive training for optimum performance. The communication to the diver is speech based. The ambient noise levels are high both at the surface, where generators, compressors, etc., are operating, and below it, where the diver's microphone interacts with the breathing equipment. The microphone picks up the voice within the helmet. Further clarity is lost by the effect of air pressure altering the characteristic resonance of the voice, effectively increasing the frequency band so that, even with frequency shifters, intelligible information is lost. Hence any communication with the diver is monosyllabic and repetitive. For initial surveys and in areas of high ambient danger the mounting of sensor arrangements on remote-controlled vehicles (RCVs) is on the increase. Additional requirements for robust techniques and equipment are imposed by 24-hour operation and continual
changes in shift personnel. Equipment tends to become common property, no longer operated by one individual who can undertake its maintenance and calibration. Surface preparation prior to examination is undertaken to remove debris, marine growth, corrosion products, and, where required, the existing surface protection. The preparation commonly extends 15 cm on either side of a weld and down to bare metal. This may be carried out by hand scraping, water jetting, pneumatic/hydraulic needle descaler guns, or hydraulic wire brushing. Individual company requirements vary in their estimation of the effects of surface-cleaning methods, which may peen over surface-breaking cracks.
31.6.2 Visual Examination The initial and most important examination is visual in that it provides an overall and general appreciation of the condition of the structure, the accumulation of marine growth and scouring. Whilst only the most obvious cracks will be detected, areas requiring supplementary non-destructive testing will be highlighted. The examination can be assisted by closed-circuit television (CCTV) and close-up photography. CCTV normally uses low-light level silicon diode cameras, capable of focusing from 4 inches to infinity. In order to assist visual examination, templates, mimics, and pit gauges are used to relay information on the size of defects to the surface. Where extensive damage has occurred, a template of the area may be constructed above water for evaluation by the inspection engineer who can then design a repair technique.
31.6.3 Photography Light is attenuated by scatter and selective absorption in seawater and the debris held in suspension so as to shift the color balance of white light towards green. Correction filters are not normally used as they increase the attenuation. Correct balance and illumination levels are achieved with flood- or flashlights. The photography is normally on 35 or 70 mm format using color film with stereoscopic recording where later analysis and templating will assist repair techniques. Camera types are normally waterproofed versions of land-based equipment. When specifically designed for underwater use, the infrequent loading of film and power packs is desirable with built-in flash and single-hand operation or remote firing from a submersible on preset or automatic control. CCTV provides immediate and recordable data to the surface. It is of comparatively poor resolution and, in black and white, lacks the extra picture contrast given by color still photography.
31.6.4 Magnetic Particle Inspection (MPI)
31.6.5 Ultrasonics
MPI is the most widely accepted technique for underwater non-destructive testing. As a robust system with wide operator latitude and immediate confirmation of a successful application by both diver and surface, the technique is well suited to cope with the hostile environment. Where large equipment cannot gain access, magnetization is by permanent magnets with a standard pull-off strength, although use will be limited away from flat material, and repeated application at 6-inch intervals is required. When working near to the air–sea boundary, the magnetization is derived from flexible coils driven from surface transformers and the leakage flux from cracks disclosed by fluorescent ink supplied from the surface. AC is used to facilitate surface crack detection. The flexible cables carrying the magnetization current are wrapped around a member or laid in a parallel conductor arrangement along the weld. At lower levels the primary energy is taken to a subsea transformer to minimize power losses. The transformer houses an ink reservoir which dilutes the concentrate 10:1 with seawater. Ink illumination is from hand-held ultraviolet lamps which also support the ink dispenser (Figure 31.38). At depth the low ambient light suits fluorescent inspection, whilst in shallow conditions inspection during the night is preferred. Photographic recording of indications is normal, along with written notes and any CCTV recording available.
Ultrasonic non-destructive testing is mainly concerned with simple thickness checking to detect erosion and corrosion. Purpose-built probes detect the back-wall ultrasonic echo and will be hand-held with rechargeable power supplies providing full portability. The units may be self-activating, switching on only when held on the test material, and are calibrated for steel although alternative calibration is possible. Ideally the unit therefore has no diver controls and can display a range of steel thickness from 1 to 300 mm on a digital readout. Where detailed examination of a weld is required, either a conventional surface A-scan set housed in a waterproof container with external extensions to the controls is used, or the diver operates the probes of a surface-located set under the direction of the surface non-destructive testing engineer. In either case, there is a facility for monitoring of the master set by a slave set. The diver, for instance, may be assisted by an audio feedback from the surface set which triggers on a threshold gate open to defect signals. The diver’s hand movements will be monitored by a helmet video to allow monitoring and adjustment by the surface operator.
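As a minimal illustration of the pulse-echo principle these diver-held gauges rely on, the wall thickness follows from the round-trip transit time of the back-wall echo. The Python sketch below assumes a typical longitudinal-wave velocity for mild steel; the numbers are illustrative, not data from any particular instrument.

V_STEEL = 5920.0  # typical longitudinal wave velocity in mild steel, m/s (assumed)

def thickness_mm(transit_time_us, velocity=V_STEEL):
    """Wall thickness in mm from the round-trip back-wall echo time (microseconds)."""
    t = transit_time_us * 1e-6           # microseconds -> seconds
    return 1000.0 * velocity * t / 2.0   # halved: the pulse travels there and back

# A 25 mm plate returns its back-wall echo after roughly 8.45 us:
print(round(thickness_mm(8.45), 1), "mm")   # -> 25.0 mm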
Figure 31.38 Magnetic particle inspection in deep water.
Figure 31.39 Measuring cathodic protection.
31.6.6 Corrosion Protection Because seawater behaves as an electrolytic fluid, corrosion of the steel structures may be inhibited by providing either sacrificial zinc anodes or impressing on the structure a constant electrical potential to reverse the electrolytic action. In order to check the operation of the impressed potential or the corrosion liability of the submerged structure, surface voltage is measured with reference to a silver/silver chloride cell (Figure 31.39). Hand-held potential readers containing a reference cell, contact tip, and digital-reading voltmeter along with an internal power supply allow a survey to be completed by a diver, which may be remotely monitored from the surface.
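As a hedged illustration of how such potential readings might be screened automatically: the protection thresholds below (about -0.80 V vs. Ag/AgCl for full protection, with readings more negative than about -1.10 V often treated as overprotection) are commonly quoted values assumed for this sketch, not figures given in this chapter.

def classify_cp(potential_v):
    """Classify a structure potential in volts vs. an Ag/AgCl reference cell."""
    if potential_v > -0.80:          # assumed protection criterion
        return "under-protected: corrosion possible"
    if potential_v < -1.10:          # assumed overprotection limit
        return "over-protected"
    return "protected"

for reading in (-0.65, -0.95, -1.20):
    print(reading, "V ->", classify_cp(reading))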
31.6.7 Other Non-Destructive Testing Techniques There are a variety of other non-destructive techniques which have not yet gained common acceptance within the oil industry but are subject to varying degrees of investigation and experimental use. Some of these are described below.
31.6.7.1 Eddy Current Eddy-current techniques are described in Section 31.3.4. The method can, with suitable head amplification, be used for a search-and-follow technique. Whilst it will not detect sub-surface cracks, the degree of surface cleaning both on the weld and to each side is not critical. Thus substantial savings in preparation and reprotection can be made.
31.6.7.2 AC Potential Difference (AC/PD) As mentioned in Section 31.3.3, changes in potential difference can be used to detect surface defects. This is particularly valuable under water. An alternating current will tend to travel just under the surface of a conductor because of the skin effect. The current flow between two contacts made on a steel specimen will approximately occupy a square with the contact points on one diagonal. In a uniform material there will be a steady ohmic voltage drop from contact to contact which will map out the current flow. In the presence of a surface crack orientated at right angles to the current flow there will be a step change in the potential which can be detected by two closely spaced voltage probes connected to a sensitive voltmeter (Figure 31.40). Crack penetration, regardless of attitude, will also influence the step voltage across the surface crack and allow depth estimation. The method relies upon the efficiency of the contact made by the current driver and the voltage probe tips, and limitations occur because of the voltage safety limits imposed on electrical sources capable of producing the constant current required. The voltages are limited to those below the optimum required to break down the surface barriers. The technique, however, is an obvious choice for first evaluating MPI indications in order to differentiate purely surface features (for example, grinding marks) from cracks.
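The depth estimation mentioned above can be sketched with a simple one-dimensional interpretation of the step voltage: the surface current crossing a crack of depth d travels down one face and up the other, lengthening its path by 2d between voltage probes spaced D apart, so the reading across the crack rises to Vc = Vu(D + 2d)/D. The probe spacing and voltages below are invented purely for illustration.

def acpd_depth_mm(v_uniform, v_crack, probe_spacing_mm):
    """Crack depth estimate from AC/PD readings (voltages in the same units)."""
    ratio = v_crack / v_uniform                     # step increase caused by the crack
    return probe_spacing_mm * (ratio - 1.0) / 2.0   # from Vc/Vu = (D + 2d)/D

# 10 mm probe spacing; 50 uV on sound metal, 90 uV with the probes spanning the crack:
print(acpd_depth_mm(50.0, 90.0, 10.0), "mm deep")   # -> 4.0 mm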
Figure 31.40 AC potential difference (AC/PD).
31.6.7.3 Bulk Ultrasonic Scanning The alternative to adapting surface non-destructive testing methods is to use the surrounding water as a couplant to transfer a large amount of ultrasonic energy into the structure and then monitor the returning energy by scanning either a single detector or the response from a tube from which acoustic energy can be used to construct a visual image (Figure 31.41). Such techniques are experimental, but initial results indicate that very rapid inspection rates can be obtained with the diver entirely relegated to positioning the sensor. Analysis of the returning information is made initially by microcomputers, which attempt to remove the background variation and highlight the signals which alter with sensor position or scan. The devices do not attempt to characterize defects in detail, and for this other techniques are required.
31.6.7.4 Acoustic Emission Those developments which remove the need for a diver at every inspection are attractive but, as yet, are not fully proven. Acoustic emission is detected using probes fixed to the structure, "listening" to the internal noise. As described in Section 31.4.8, the system relies upon the stress concentrations and fatigue failures to radiate an increasing amount of energy as failure approaches, and this increased emission is detected by the probes.
Figure 31.41 Bulk ultrasonic scanning.
31.7 Developments Many of the recent developments in non-destructive testing, and in ultrasonic testing in particular, have been in the use of computers to control inspections and analyze results. The most widespread use of computer technology is in digital flaw detectors. These instruments digitize the incoming data signal, allowing it to be stored on disc, recalled, and printed. Digital flaw detectors are also able to simplify the task of carrying out inspections by providing functions such as automatic calibration and DAC curve plotting. The use of B-, C-, and D-scans to produce clear images of defects is well established and commonly available at relatively low cost. Many advances in instrumentation are in the off-line processing of information, both before and after the actual inspection (Carter and Burch, 1986). The most common use is to enhance C-scans by color coding defect areas. Other data-processing techniques include SAFT (Synthetic Aperture Focusing Technique) (Doctor et al., 1986), TOFD (Time of Flight Diffraction) (Carter, 1984), expert systems (Moran and Bower, 1987), and neural networks (Windsor et al., 1993). One of the most interesting innovations in this field is the concept of an integrated NDT work-bench (McNab and Dunlop, 1993). These work-benches provide the hardware and software systems required to design, carry out, and analyze an ultrasonic inspection, all operated from a single computer. In theory, such a system would include a CAD package to provide the exact geometry of the part under
inspection, controllers to move a mechanical probe over the inspection area, digital signal processing software to enhance the incoming data, and an expert system to help assess the results. The use of mechanical devices to carry out inspections is becoming more common (Mudge, 1985), especially in the nuclear and pressure vessel industries. Due to the inflexibility of these mechanical systems, however, the majority of inspections are still performed by a manual operator. Designers of NDT equipment are now turning their attention to how computer technology can improve the reliability of manual inspection. For this type of equipment, ease of use is a prime consideration, since the operator must be able to concentrate on the inspection itself without the distraction of a complicated computer system. Work is under way at the University of Paisley to determine the optimum structure and design of system interface to produce the best aid to manual operators. The use of speech recognition as a form of remote control for operators is being examined with great interest.
31.8 Certification of personnel No overview of non-destructive testing would be complete without some reference to the various operator-certification schemes currently in use worldwide. The range of products, processes, and materials to which such methods have been applied has placed increasing demands on the skills and abilities of their practitioners. Quality assurance requires fitness for purpose not only of the products but also of the relevant personnel. Developments in various countries in operator certification for non-destructive testing are given by Drury (1979).
Appendix 31.1 Fundamental standards used in ultrasonic testing
British Standards
BS 2704 (78) Calibration Blocks
BS 3683 (Part 4) (77) Glossary of Terms
BS 3889 (Part 1A) (69) Ultrasonic Testing of Ferrous Pipes
BS 3923 Methods of Ultrasonic Examination of Welds
BS 3923 (Part 1) (78) Manual Examination of Fusion Welds in Ferritic Steel
BS 3923 (Part 2) (72) Automatic Examination of Fusion Welded Butt Joints in Ferritic Steels
BS 3923 (Part 3) (72) Manual Examination of Nozzle Welds
BS 4080 (66) Methods of Non-Destructive Testing of Steel Castings
BS 4124 (67) Non-Destructive Testing of Steel Forgings—Ultrasonic Flaw Detection
BS 4331 Methods for Assessing the Performance Characteristics of Ultrasonic Flaw Detection Equipment
BS 4331 (Part 1) (78) Overall Performance—Site Methods
BS 4331 (Part 2) (72) Electrical Performance
BS 4331 (Part 3) (74) Guidance on the In-service Monitoring of Probes (excluding Immersion Probes)
BS 5996 (81) Methods of Testing and Quality Grading of Ferritic Steel Plate by Ultrasonic Methods
M 36 (78) Ultrasonic Testing of Special Forgings by an Immersion Technique
M 42 (78) Non-Destructive Testing of Fusion and Resistance Welds in Thin Gauge Material
American Standards (ASTM)
E 114 (75) Testing by the Reflection Method using Pulsed Longitudinal Waves Induced by Direct Contact
E 127 (75) Aluminum Alloy Ultrasonic Standard Reference Blocks
E 164 (74) Ultrasonic Contact Inspection of Weldments
E 213 (68) Ultrasonic Inspection of Metal Pipe and Tubing for Longitudinal Discontinuities
E 214 (68) Immersed Ultrasonic Testing
E 273 (68) Ultrasonic Inspection of Longitudinal and Spiral Welds of Welded Pipes and Tubings
E 317 (79) Performance Characteristics of Pulse Echo Ultrasonic Testing Systems
E 376 (69) Seamless Austenitic Steel Pipe for High Temperature Central Station Service
E 388 (71) Ultrasonic Testing of Heavy Steel Forgings
E 418 (64) Ultrasonic Testing of Turbine and Generator Steel Rotor Forgings
E 435 (74) Ultrasonic Inspection of Steel Plates for Pressure Vessels
E 50 (64) Ultrasonic Examination of Large Forged Crank Shafts
E 531 (65) Ultrasonic Inspection of Turbine-Generator Steel Retaining Rings
E 557 (73) Ultrasonic Shear Wave Inspection of Steel Plates
E 578 (71) Longitudinal Wave Ultrasonic Testing of Plain and Clad Steel Plates for Special Applications
E 609 (78) Longitudinal Beam Ultrasonic Inspection of Carbon and Low Alloy Steel Castings
West German Standards (DIN)—Translations available
17175 Seamless Tubes of Heat Resistant Steels: Technical Conditions of Delivery
17245 Ferritic Steel Castings Creep Resistant at Elevated Temperatures: Technical Conditions of Delivery
54120 Non-Destructive Testing: Calibration Block 1 and its Uses for the Adjustment and Control of Ultrasonic Echo Equipment
54122 Non-Destructive Testing: Calibration Block 2 and its Uses for the Adjustment and Control of Ultrasonic Echo Equipment
References
BS 3683, Glossary of Terms used in Non-destructive Testing: Part 3, Radiological Flaw Detection (1964). BS 3971, Image Quality Indications (1966). BS 4094, Recommendation for Data on Shielding from Ionising Radiation (1966). Blitz, J., Ultrasonic Methods and Applications, Butterworths, London (1971). Carter, P., "Experience with the time-of-flight diffraction technique and accompanying portable and versatile ultrasonic digital recording system," Brit. J. NDT, 26, 354 (1984).
Carter, P. and S. F. Burch, "The potential of digital processing for ultrasonic inspection," NDT-86: Proceedings of the 21st Annual British Conference on Non-Destructive Testing, (eds.) J. M. Farley and P. D. Hanstead (1986). Doctor, S. R., T. E. Hall, and L. D. Reid, "SAFT—the evolution of a signal processing technology for ultrasonic testing," NDT International (June 1986). Drury, J., "Developments in various countries in operator certification for non-destructive testing," in Developments in Pressure Vessel Technology, Vol. 2, Applied Science, London (1979). Electricity Supply Industry, Standards published by Central Electricity Generating Board.
Ensminger, D., Ultrasonics, Marcel Dekker, New York (1973). Erf, R. K., Holographic Non-destructive Testing, Academic Press, London (1974). Farley, J. M., et al., “Developments in the ultrasonic instrumentation to improve the reliability of NDT of pressurised components,” Conf. on In-service Inspection, Institution of Mechanical Engineers (1982). Filipczynski, L., et al., Ultrasonic Methods of Testing Materials, Butterworths, London (1966). Greguss, P., Ultrasonic Imaging, Focal Press, London (1980). Halmshaw, R., Industrial Radiology: Theory and Practice, Applied Science, London (1982). HMSO, The Ionising Radiations (Sealed Sources) Regulations (1969). Institute of Welding, Handbook on Radiographic Apparatus and Techniques. Krautkramer, J. and H. Krautkramer, Ultrasonic Testing of Materials, 3rd ed., Springer-Verlag, New York (1983). Marsh, D. M., “Means of visualizing ultrasonics,” in Research Techniques in Non-destructive Testing, Vol. 2 (ed., R. S. Sharpe). Academic Press, London (1973). Martin, A. and S. Harbison, An Introduction to Radiation Protection, Chapman and Hall, London (1972). McGonnigle, W. J., Non-destructive Testing, Gordon and Breach, London (1961).
McNab, A. and I. Dunlop, "Advanced visualisation and interpretation techniques for the evaluation of ultrasonic data: the NDT workbench," British Journal of NDT (May 1993). Moran, A. J. and K. J. Bower, "Expert systems for NDT—hope or hype?," Non-Destructive Testing: Proceedings of the 4th European Conference 1987, Vol. 1, (eds.) J. M. Farley and R. W. Nichols (1987). Mudge, P. J., "Computer-controlled systems for ultrasonic testing," Research Techniques in Non-Destructive Testing, Vol. VII, R. S. Sharpe, ed. (1985). Sharpe, R. S. (ed.), Research Techniques in Non-destructive Testing, Vols. 1–5, Academic Press, London (1970–1982). Sharpe, R. S., Quality Technology Handbook, Butterworths, London (1984). Windsor, C. G., F. Anelme, L. Capineri, and J. P. Mason, "The classification of weld defects from ultrasonic images: a neural network approach," British Journal of NDT (January 1993).
Further Reading
Gardner, W. E. (ed.), Improving the Effectiveness and Reliability of Non-destructive Testing, Pergamon Press, Oxford (1992).
Chapter 32
Noise Measurement J. Kuehn
32.1 Sound and sound fields 32.1.1 The Nature of Sound If any elastic medium, whether it be gaseous, liquid, or solid, is disturbed, then this disturbance will travel away from the point of origin and be propagated through the medium. The way in which the disturbance is propagated and its speed will depend upon the nature and extent of the medium, its elasticity, and density. In the case of air, these disturbances are characterized by very small fluctuations in the density (and hence atmospheric pressure) of the air and by movements of the air molecules. Provided these fluctuations in pressure take place at a rate (frequency) between about 20 Hz and 16 kHz, then they can give rise to the sensation of audible sound in the human ear. The great sensitivity of the ear to pressure fluctuations is well illustrated by the fact that a peak-to-peak fluctuation of less than 1 part in 10⁹ of the atmospheric pressure will, at frequencies around 3 kHz, be heard as audible sound. At this frequency and pressure the oscillating molecular movement of the air is less than 10⁻⁷ of a millimeter. The magnitude of the sound pressure at any point in a sound field is usually expressed in terms of the rms (root-mean-square) value of the fluctuations in the atmospheric pressure. This value is given by
P_{\mathrm{rms}} = \sqrt{\frac{1}{T} \int_0^T P^2(t)\, dt} \qquad (32.1)
where P(t) is the instantaneous sound pressure at time t and T is a time period long compared with the periodic time of the lowest frequency present in the sound. The SI unit for sound pressure is the Newton per square metre (N/m²), which is now termed the Pascal; that is, 1 N/m² = 1 Pascal. Atmospheric pressure is, of course, normally expressed in bars (1 bar = 10⁵ Pascal). The great sensitivity of the human hearing mechanism, already mentioned, is such that at frequencies where
it is most sensitive it can detect sound pressures as small as 2 × 10⁻⁵ Pascals. It can also pick up sound pressures as high as 20 or even 100 Pascals. When dealing with such a wide dynamic range (a pressure range of 10 million to 1) it is inconvenient to express sound pressures in Pascals, and so a logarithmic scale is used. Such a scale is the decibel scale, defined as ten times the logarithm to the base 10 of the ratio of two powers. When applying this scale to the measurement of sound pressure it is assumed that sound power is related to the square of the sound pressure. Thus

\text{Sound pressure level} = 10 \log_{10}\!\left(\frac{W_1}{W_0}\right) = 10 \log_{10}\!\left(\frac{P^2}{P_0^2}\right) = 20 \log_{10}\!\left(\frac{P}{P_0}\right) \text{dB} \qquad (32.2)
where P is the sound pressure being measured and P0 is a "reference" sound pressure (standardized at 2 × 10⁻⁵ Pascals). It should be noted that the use of the expression "sound-pressure level" always denotes that the value is expressed in decibels. The reference pressure will, of course, be the 0 dB value on the decibel scale. The use of a reference pressure close to that representing the threshold of hearing at the frequencies where the ear is most sensitive means that most levels of interest will be positive. (A different reference pressure is used for underwater acoustics and for the 0 dB level on audiograms.) A good example of the use of the decibel scale of sound-pressure level and of the complicated response of the human ear to pure tones is given in Figure 32.1, showing equal loudness contours for pure tones presented under binaural, free-field listening conditions. The equal loudness level contours of Figure 32.1 (labeled in Phons) are assigned numerical values equal to
Figure 32.1 Normal equal loudness contours for pure tones. (Most of the figures in this chapter are reproduced by courtesy of Bruel & Kjaer (UK) Ltd.)
that of the sound-pressure level at 1 kHz through which they pass. This use of 1 kHz as a “reference” frequency is standardized in acoustics. It is the frequency from which the audible frequency range is “divided up” when choosing octave-band widths or one-third octave-band widths. Thus the octave band centered on 1 kHz and the third octave-band width centered on 1 kHz are included in the “standardized” frequency bands for acoustic measurements. In daily life, most sounds encountered (particularly those designated as “noise”) are not the result of simple sinusoidal fluctuations in atmospheric pressure but are associated with pressure waveforms which vary with time both in frequency and magnitude. In addition, the spatial variation of sound pressure (i.e., the sound field) associated with a sound source is often complicated by the presence of sound-reflecting obstacles or walls.
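Equation (32.2) is easily exercised numerically. The short Python sketch below uses the standardized reference pressure of 2 × 10⁻⁵ Pa; note that 1 Pa corresponds to about 94 dB, the level commonly provided by the acoustic calibrators described in Section 32.2.5.

import math

P0 = 2e-5  # standardized reference sound pressure, Pa

def spl_db(pressure_pa):
    """Sound-pressure level in dB re 20 uPa, equation (32.2)."""
    return 20.0 * math.log10(pressure_pa / P0)

def pressure_pa(level_db):
    """Inverse: rms sound pressure in Pa for a given level in dB."""
    return P0 * 10.0 ** (level_db / 20.0)

print(round(spl_db(2e-5), 1))   # 0.0 dB: the reference pressure itself
print(round(spl_db(1.0), 1))    # ~94.0 dB: the usual calibrator level
print(round(spl_db(20.0), 1))   # 120.0 dB: near the top of the range quoted above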
32.1.2 Quantities Characterizing a Sound Source or Sound Field The definition of noise as “unwanted sound” highlights the fact that the ultimate measure of a sound, as heard, involves physiological and often psychological measurements and assessments. Objective measurements can be used to evaluate the noise against predetermined and generally acceptable criteria. When planning the installation of machinery and equipment it is often necessary to be able to predict the magnitude and nature of the sound fields at a distance from the noise source, which means using noise data associated only with the noise source and not with the environment in which the measurements were made.
Although the sound-pressure level, at a point, is the most common measure of a sound field it is far from being a comprehensive measure. As a scalar quantity it provides no information on the direction of propagation of the sound and, except in the case of very simple sound fields (such as that from a physically small "point" source or a plane wave; see BS 4727), it is not directly related to the sound power being radiated by the source. The importance, in some cases, of deducing or measuring the sound power output of a source and its directivity has received much attention in recent years, and equipment is now available (as research equipment and also commercially available for routine measurements) for measuring particle velocity or the sound intensity (W/m²) at a point in a sound field. In addition to some measure of the overall sound-pressure level at a point it is also usually necessary to carry out some frequency analysis of the signal to find out how the sound pressure levels are distributed throughout the audible range of frequencies.
32.1.3 Velocity of Propagation of Sound Waves As mentioned earlier, sound-pressure fluctuations (small compared with the atmospheric pressure) are propagated through a gas at a speed which is dependent on the elasticity and density of the gas. An appropriate expression for the velocity of propagation is

c = \sqrt{\frac{\gamma P_0}{\rho_0}} \qquad (32.3)
where γ is the ratio of the specific heat of the gas at constant pressure to that at constant volume (1.402 for air), P0 is the gas pressure (N/m²), and ρ0 is the density of the gas (kg/m³). This expression leads to a value of 331.6 m/s for air at 0°C and a standard barometric pressure of 1.013 × 10⁵ N/m² (1.013 bar). The use of the general gas law also leads to an expression showing that the velocity is directly proportional to the square root of the absolute temperature in K:

c = c_0 \sqrt{\frac{t + 273}{273}} \ \text{m/s} \qquad (32.4)
where c0 is the velocity of sound in air at 0°C (331.6 m/s) and t is the temperature (°C). The speed of propagation of sound in a medium is an extremely important physical property of that medium, and its value figures prominently in many acoustic expressions. If sound propagates in one direction only then it is said to propagate as a "plane free progressive wave," and the ratio of the sound pressure to the particle velocity, at any point, is always given by ρ0c. Knowledge of the velocity of sound is important in assessing the effect of wind and of wind and temperature gradients upon the bending of sound waves. The velocity of sound also determines the wavelength (commonly abbreviated to λ) at any given frequency, and it is the size of a source in relation to the wavelength of the sound radiation that greatly influences the radiating characteristics of the source. It also determines the extent of the near-field in the vicinity of the source. The screening effect of walls, buildings, and obstacles is largely dependent on their size in relation to the wavelength of the sound. Thus a vibrating panel 0.3 m square or an open pipe of 0.3 m diameter would be a poor radiator of sound at 50 Hz (λ = c/f ≈ 6.6 m) but could be an excellent, efficient radiator of sound at 1 kHz (λ ≈ 0.33 m) and higher frequencies. The relation between frequency and wavelength is shown in Figure 32.2.
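Equations (32.3) and (32.4), together with λ = c/f, are straightforward to evaluate; the following sketch reproduces the wavelength figures quoted above for air at 0°C.

import math

def sound_speed(temp_c):
    """Velocity of sound in air, m/s, from equation (32.4)."""
    return 331.6 * math.sqrt((temp_c + 273.0) / 273.0)

def wavelength_m(freq_hz, temp_c=0.0):
    """Wavelength lambda = c / f in metres."""
    return sound_speed(temp_c) / freq_hz

print(round(sound_speed(0.0), 1))       # 331.6 m/s at 0 degC
print(round(sound_speed(20.0), 1))      # ~343.5 m/s at 20 degC
print(round(wavelength_m(50.0), 1))     # ~6.6 m at 50 Hz, as quoted above
print(round(wavelength_m(1000.0), 2))   # ~0.33 m at 1 kHz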
32.1.4 Selecting the Quantities of Interest It cannot be emphasized too strongly that before carrying out any noise measurements the ultimate objective of those measurements must be clearly established so that adequate, complete, and appropriate measurements are made.
Figure 32.2 Relation between wavelength and frequency.
The following examples indicate how the purpose of a noise measurement influences the acoustic quantities to be measured, the techniques to be employed, and the supporting environmental or operational measurements/observations which may be needed.
32.1.4.1 Measurements to Evaluate the Effect of the Noise on Human Beings The ultimate aim might be to assess the risk of permanent or temporary hearing loss as a result of exposure to the noise, or perhaps the ability of the noise to mask communication, or to assess the likely acceptability of the noise in a residential area. In these cases frequency analysis of the noise may be required, together with some measure of its fluctuation in level with time and a measure of the likely duration of the noise. Other factors of importance could be the level of other (background) noises at various times of the day and night, weather conditions, and the nature of the communication or other activity being undertaken. A knowledge of the criteria against which the acceptability or otherwise of the noise is to be judged will also usually indicate the appropriate bandwidths to be used in the noise measurements. In some cases, octave band or one-third octave band measurements could suffice whilst in other cases narrow-band analyses might be required with attendant longer sampling times.
32.1.4.2 Measurements for Engineering Design or Noise-Control Decisions Most plant installations and many individual items of machinery contain more than one source of noise, and when the ultimate objective is plant noise control, the acoustic measurements have to: 1. Establish the operational and machine installation conditions which give rise to unacceptably noisy operation; 2. Identify and quantify the noise emission from the various sources; 3. Establish an order of priority for noise control of the various sources; 4. Provide adequate acoustic data to assist the design and measure the performance of noise control work.
596
establish the plant operating and installation conditions and, where possible, assist in the design of the noise-control work that will follow. Such measurements could include measurements of temperatures, pressures, gas flows, and vibration levels. All too often the value of acoustic measurements is much reduced because of the absence of these supporting measurements.
32.1.4.3 Noise Labeling In some cases “noise labeling” of machines may be required by government regulations and by purchasers of production machinery, vehicles, plant used on construction sites, etc. Regulations, where they exist, normally specify the measurement procedure and there are U.S., British, or international standards which apply (see Appendix 32.1).
32.1.4.4 Measurements for Diagnostic Purposes Investigation of the often complex sources of noise of machines and of ways and means of reducing them may require very detailed frequency analysis of both the noise and vibrations and the advanced features offered by modern analyzing and computing instrumentation. In recent years, the use of acoustic measurements for “fingerprinting” machinery to detect, in the early stages, changes in the mechanical condition of machinery has been further developed in conjunction with measurements of vibration for the same purpose. These measurements are also based on detailed frequency analyses.
32.2 Instrumentation for the measurement of sound-pressure level The basic instrumentation chain consists of a microphone, signal-conditioning electronics, some form of filtering or weighting, and a quantity indicator, analog or digital. This is shown schematically in Figure 32.3.
32.2.1 Microphones The microphone is the most important item in the chain. It must be physically small so as not to disturb the sound field and hence the sound pressure which it is trying to measure, it must have a high acoustic impedance compared with that of the sound field at the position of measurement, it must be stable in its calibration, have a wide frequency response, and be capable of handling without distortion (amplitude or phase distortion) the very wide dynamic range of sound-pressure levels to which it will be subjected. When used in instruments for the measurement of low sound-pressure levels it must also have an inherent
low self-generated noise level. To permit the use of closed-cavity methods of calibration it should also have a well-defined diaphragm plane. All these requirements are best met by the use of condenser or electret microphones, and with such high electrical impedance devices it is essential that the input stage for the microphone—which might take the form of a 0 dB gain impedance-transforming unit—be attached directly to the microphone so that long extension cables may be used between the microphone and the rest of the measuring instrument.
32.2.1.1 Condenser Microphone The essential components of a condenser microphone are a thin metallic diaphragm mounted close to a rigid metal backplate from which it is electrically isolated. An exploded view of a typical instrument is given in Figure 32.4. A stabilized dc polarization voltage E0 (usually around 200 V) is applied between the diaphragm and the backplate, fed from a high-resistance source to give a time constant R(Ct + Cs) (see Figure 32.5) much longer than the period of the lowest frequency sound-pressure variation to be measured. If the sound pressure acting on the diaphragm produces a change in the value of Ct—due to movement of the diaphragm—of ΔC(t), then the output voltage V0 fed from the microphone to the preamplifier will be

V_0(t) = \frac{\Delta C(t)}{C_t + C_s}\, E_0
since Ct ≫ ΔC(t). It should be noted that the microphone sensitivity is proportional to the polarization voltage but inversely proportional to the total capacitance Ct + Cs. Moreover, if we wish the microphone to have a pressure sensitivity independent of frequency then the value of ΔC(t) (and hence the deflection of the diaphragm) for a given sound pressure must be independent of frequency; that is, the diaphragm must be "stiffness" controlled. This requires a natural frequency above that of the frequency of the sounds to be measured. Condenser microphones, which are precision instruments, can be manufactured having outstanding stability of calibration and sensitivities as high as 50 mV per Pascal. The selection of a condenser microphone for a specific application is determined by the frequency range to be covered, the dynamic range of sound-pressure levels, and the likely incidence of the sound. A condenser microphone is a pressure microphone, which means that the electrical output is directly proportional to the sound pressure acting on the diaphragm. At higher frequencies, where the dimensions of the diaphragm become a significant fraction of the wavelength, the presence of the diaphragm in the path of a sound wave creates
Appendix 32.1 Standards play a large part in noise instrumentation. Therefore we give a list of international and British documents that are relevant. BS 4727, part 3, Group 08 (1985), British Standards Glossary of Acoustics and Electroacoustics Terminology particular to Telecommunications and Electronics, is particularly helpful. This standard covers a very wide range of general terms, defines a wide variety of levels, and deals with transmission and propagation, oscillations, transducers, and apparatus, as well as psychological and architectural acoustics.
International and British standards
IEC 225 (BS 2475) Octave and third-octave filters intended for the analysis of sound and vibration
IEC 327 (BS 5677) Precision method of pressure calibration of one inch standard condenser microphones by the reciprocity method
IEC 402 (BS 5678) Simplified method for pressure calibration of one inch condenser microphones by the reciprocity method
IEC 486 (BS 5679) Precision method for free field calibration of one inch standard condenser microphones
IEC 537 (BS 5721) Frequency weighting for the measurement of aircraft noise
IEC 651 (BS 5969) Sound level meters
IEC 655 (BS 5941) Values for the difference between free field and pressure sensitivity levels for one inch standard condenser microphones
IEC 704 Test code for determination of airborne acoustical noise emitted by household and similar electrical appliances
IEC 804 (BS 6698) Integrating/averaging sound level meters
ISO 140 (BS 2750) Field and laboratory measurement of airborne and impact sound transmission in buildings
ISO 226 (BS 3383) Normal equal loudness contours for pure tones and normal threshold of hearing under free field listening conditions
ISO 266 (BS 3593) Preferred frequencies for acoustic measurements
ISO 362 (BS 3425) Method for the measurement of noise emitted by motor vehicles
ISO 717 (BS 5821) Rating of sound insulation in buildings and of building elements
ISO 532 (BS 4198) Method of calculating loudness
ISO R.1996 (BS 4142) Method of rating industrial noise affecting mixed residential and industrial areas
ISO 1999 (BS 5330) Acoustics—assessment of occupational noise exposure for hearing conservation purposes
ISO 2204 Acoustics—guide to the measurement of airborne noise and evaluation of its effects on man
ISO 3352 Acoustics—assessment of noise with respect to its effects on the intelligibility of speech
ISO 3740–3746 (BS 4196, Parts 1–6) Guide to the selection of methods of measuring noise emitted by machinery
ISO 3891 (BS 5727) Acoustics—procedure for describing aircraft noise heard on the ground
ISO 4871 Acoustics—noise labeling of machines
ISO 4872 Acoustics—measurement of airborne noise emitted by construction equipment intended for outdoor use—method for determining compliance with noise limits
ISO 5130 Acoustics—measurement of noise emitted by stationary vehicles—survey method
ISO 6393 Acoustics—measurement of airborne noise emitted by earth-moving machinery—method for determining compliance with limits for exterior noise—stationary test conditions
BS 5228 Noise control on construction and demolition sites
BS 6402 Specification of personal sound exposure meters
Figure 32.3 Block diagram of a noise-measuring system.
Figure 32.4 Exploded view of a condenser microphone.
Figure 32.5 Equivalent circuit of a condenser microphone and microphone preamplifier.
a high-impedance obstacle, from which the wave will be reflected. When this happens the presence of the microphone modifies the sound field, causing a higher pressure to be sensed by the diaphragm. A microphone with flat pressure response would therefore give an incorrect reading. For this reason, special condenser microphones known as free field microphones are made. They are for use in free field conditions, with perpendicular wave incidence on the diaphragm. Their pressure frequency response is so tailored as to give a
flat response to the sound waves which would exist if they were not affected by the presence of the microphone. Free field microphones are used with sound level meters. Effective pressure increases due to the presence of the microphone in the sound field are shown as free-field corrections in Figure 32.6 for the microphone whose outline is given. These corrections are added to the pressure response of the microphone. Figure 32.7 shows the directional response of a typical ½-inch condenser microphone. Microphones
Figure 32.6 Free-field corrections to microphone readings for a series of B and K half-inch microphones.
Figure 32.7 Directional characteristics of a half-inch microphone mounted on a sound-level meter.
with flat pressure responses are mainly intended for use in couplers and should be used with grazing incidence in free-field conditions. A complete condenser microphone consists of two parts, a microphone cartridge and a preamplifier, which is normally tubular in shape, with the same diameter as the cartridge. The preamplifier may be built into the body of a sound-level meter. Generally, a larger microphone diameter means higher sensitivity, but frequency-range coverage is inversely proportional to diaphragm dimensions. The most commonly used microphones are the standardized 1-inch and ½-inch types, but ¼-inch and even ⅛-inch models are commercially available. The small-diaphragm microphones are suitable for the measurement of very high sound-pressure levels.
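As a rough numerical sketch of the output relation V0(t) = ΔC(t)E0/(Ct + Cs) given earlier, the component values below are assumptions chosen only to indicate the scale of the signal, not data for any real cartridge; they also show why the stray capacitance Cs, and hence a preamplifier mounted directly behind the cartridge, matters.

E0 = 200.0     # polarization voltage, V (as quoted in the text)
CT = 50e-12    # cartridge capacitance, F (assumed, ~50 pF)
CS = 5e-12     # stray plus preamplifier input capacitance, F (assumed)

def mic_output_v(delta_c):
    """Output voltage for a cartridge capacitance change delta_c (farads)."""
    return E0 * delta_c / (CT + CS)

# A 0.01 pF capacitance swing gives a few tens of millivolts; any extra
# stray capacitance in CS directly reduces this sensitivity.
print(round(1000.0 * mic_output_v(1e-14), 1), "mV")   # ~36.4 mV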
32.2.1.2 Electret Microphone Although requirements for precision measurement are best met by condenser microphones, the need to have a source of dc voltage for the polarization has led to the search for microphones which have an inherent polarized element. Such an element is called an electret. In the last fifteen years electrets have been produced which have excellent longterm stability and have been used to produce prepolarized condenser microphones. Many of the design problems have been overcome by retaining the stiff diaphragm of the conventional condenser microphone and applying the electret material as a thin coating on the surface of the backplate. The long-term stability of the charge-carrying element (after artificially aging and stabilizing) is now better than 0.2 dB over a year. The principle of operation of the electret condenser microphone is, of course, precisely the same as that of the conventional condenser microphone. At present, its cost is slightly in excess of that of a conventional condenser microphone of similar performance but it has application where lower power consumption and simplified associated electronics are at a premium.
32.2.1.3 Microphones for Low-Frequency Work The performance of condenser microphones at low frequencies is dependent on static pressure equalization and on the ratio of input impedance of the microphone preamplifier to the capacitive impedance of the microphone cartridge. For most measurements, good response to about 10 Hz is required and is provided by most microphones. Extension of performance to 1 or 2 Hz is avoided in order to reduce the sensitivity of a microphone to air movement (wind) and pressure changes due to opening and closing of doors and windows. Microphones for low-frequency work exist, including special systems designed for sonic boom measurement, which are capable of excellent response down to a small fraction of 1 Hz.
32.2.2 Frequency Weighting Networks and Filters The perception of loudness of pure tones is known and shown in Figure 32.1. It can be seen that it is a function of both the frequency of the tone and of the sound pressure. Perception of loudness of complete sounds has attracted much research, and methods have been devised for the computation of loudness or loudness level of such sounds. Many years ago it was thought that an instrument with frequency-weighting characteristics corresponding to the sensitivity of the ear at low, medium, and high sound intensities would give a simple instrument capable of measuring loudness of a wide range of sounds, simple and complex. Weightings now known as A, B, and C were introduced, but it was found that, overall, the A weighting was most frequently judged to give reasonable results, and the measurement of A-weighted sound pressure became accepted and standardized throughout the world. Their response curves are shown in Figure 32.8. Environmental noise and noise at work are measured with A weighting, and many sound-level measuring instruments no longer provide weightings other than A. It cannot be claimed that A weighting is perfect, or that the result of A-weighted measurements is a good measure of loudness or nuisance value of all types of noise, but at least it provides a standardized, commonly accepted method. Where it is acknowledged that A weighting is a wrong or grossly inadequate description, and this is particularly likely when measuring low-frequency noise, sounds are analyzed by means of octave or third-octave filters and the results manipulated to give a desired value.
Figure 32.8 Frequency response curves of the A, B, C, and D weighting networks.
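The text treats the weighting curves graphically (Figure 32.8); the A-weighting curve also has a standard analytic form, which the following sketch evaluates. The pole frequencies used below are the standardized values; rounding is only for display.

import math

def a_weighting_db(f):
    """A-weighting in dB at frequency f in Hz (0 dB at 1 kHz by definition)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00   # +2.00 dB normalizes A(1 kHz) to 0

for f in (31.5, 125.0, 1000.0, 4000.0, 8000.0):
    print(f, "Hz:", round(a_weighting_db(f), 1), "dB")  # compare Figure 32.8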
Aircraft noise is a special case, where the noise is analyzed simultaneously in third-octave bands and individual band levels are sampled every 0.5 s, each one weighted according to a set of tables and summed, to give the answer in the form of a perceived noise level in decibels. An approximation to this figure may be obtained by the use of a simpler system, a sound-level meter with the standardized "D" weighting, also shown in Figure 32.8.
32.2.3 Sound-Level Meters These are the most widely used instruments for the measurement of sound and can be purchased in forms ranging from simple instruments fitted with one frequency weighting network only and a fixed microphone to versions capable of handling a whole range of microphone probes and sophisticated signal processing with digital storage of data. The instrument therefore deserves special mention. As with most other instruments, its cost and complexity are related to the degree of precision of which it is capable. IEC Publication 651 and BS 5969 classify sound-level meters according to their degree of precision. The various classes (or types) and the intended fields of application are given in Table 32.1. The performance specifications for all these types have the same "center value" requirements, and they differ only in the tolerances allowed. These tolerances take into account such aspects as (1) variation of microphone response with the angle of incidence of the sound, (2) departures from design objectives in the frequency-weighting networks (tolerances allowed) and (3) level linearity of the amplifiers. While it is not possible to state the precision of measurement for any type in any given application—since the overall accuracy depends on care with the measurement technique
Table 32.1 Classes of sound-level meter
Type | Intended field of application | Absolute accuracy (dB)
0 | Laboratory reference standards | ±0.4
1 | Laboratory use and field use where the acoustical environment can be closely specified or controlled | ±0.7
2 | General field work | ±1.0
3 | Field noise surveys | ±1.5
Absolute accuracy is quoted at the reference frequency, in the reference direction, at the reference sound-pressure level.
as well as the calibration of the measurement equipment—it is possible to state the instrument's absolute accuracy at the reference frequency of 1 kHz when sound is incident from the reference direction at the reference sound-pressure level (usually 94 dB), and these values are given in Table 32.1. Apart from the absolute accuracy at the reference frequency the important differences in performance are: 1. Frequency range over which reliable measurements may be taken—with Type 2 and 3 instruments having much wider tolerances at both the low and the high ends of the audio frequency range (see Figure 32.9). 2. Validity of rms value when dealing with high crest factor (high ratio of peak to rms) signals, i.e., signals of an impulsive nature such as the exhaust of motorcycles and diesel engines. 3. Validity of rms value when dealing with short-duration noises. 4. Linearity of readout over a wide dynamic range and when switching ranges. Prior to 1981, sound-level meters were classified either as "Precision grade" (see IEC Publication 179 and BS 4197) or "industrial grade" (see IEC 123 or BS 3489) with very wide tolerances. The current Type 1 sound-level meter is equivalent, broadly, to the old precision grade whilst Type 3 meters are the equivalent of the old industrial grade with minor modifications. A typical example of a simple sound-level meter which nevertheless conforms to Type 1 class of accuracy is shown in Figure 32.10. It has integrating facilities as discussed below. Figure 32.11 shows a more sophisticated instrument which includes filters but is still quite compact.
32.2.3.1 Integrating Sound-Level Meters Standard sound-level meters are designed to give the reading of sound level with fast or slow exponential time averaging. Neither is suitable for the determination of the “average” value if the character of the sound is other than steady or is composed of separate periods of steady character. A special category of “averaging” or “integrating” meters is standardized and available, which provides the “energy average,” measured over a period of interest. The value measured is known as Leq (the equivalent level), which is the level of steady noise which, if it persisted over the whole period of measurement, would give the same total energy as the fluctuating noise measured over the period. As in the case of sound-level meters, Types 0 and 1 instruments represent the highest capability of accurately measuring noises which vary in level over a very wide dynamic range, or which contain very short bursts or impulse, for example gunshots, hammer-blows, etc. Type 3 instruments are not suitable for the accurate measurement of such events but may be used in cases where level variation is not wide or sudden.
Figure 32.9 Tolerances in the response of sound-level meters at different frequencies.
Figure 32.10 Simple sound-level meter. Courtesy Lucas CEL Instruments Ltd.
Figure 32.11 More sophisticated sound-level meter.
The precise definition of the Leq value of a noise waveform over a measurement period T is given by

L_{eq} = 10 \log_{10}\!\left[\frac{1}{T} \int_0^T \frac{p^2(t)}{p_0^2}\, dt\right]

where Leq is the equivalent continuous sound level, T the measurement duration, p(t) the instantaneous value of the (usually A-weighted) sound pressure, and p0 the reference rms sound pressure of 20 μPa. A measure of the A-weighted Leq is of great significance when considering the possibility of long-term hearing loss arising from habitual exposure to noise (see Burns and Robinson, 1970).
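The definition above translates directly into a computation over sampled pressures, as sketched below. The sample values are invented; the point is that Leq is an energy average, so the louder portions dominate the result.

import math

P0 = 2e-5  # reference pressure, Pa

def leq_db(pressures_pa):
    """Equivalent continuous level of equally spaced rms pressure samples."""
    mean_square = sum(p * p for p in pressures_pa) / len(pressures_pa)
    return 10.0 * math.log10(mean_square / P0 ** 2)

# Half the period at 0.2 Pa (80 dB), half at 2 Pa (100 dB):
print(round(leq_db([0.2] * 50 + [2.0] * 50), 1))  # -> 97.0 dB, not the 90 dB midpoint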
Most integrating sound-level meters also provide a value known as sound energy level, or SEL, sometimes also described as single-event level. This represents the sound energy of a short-duration event, such as an aircraft flying overhead or an explosion, but expressed as if the event lasted only 1 s. Thus

\text{SEL} = 10 \log_{10} \int \frac{p^2(t)}{p_0^2}\, dt

Figure 32.12 Probability distribution plot for traffic noise on the Helsingør motorway (8.40–8.50; Leq = 77.1 dB(A)).
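For a steady event of duration T seconds, the SEL definition reduces to Leq + 10 log10(T), which the following minimal sketch applies to an assumed flyover.

import math

def sel_db(leq, duration_s):
    """Single-event level from an event's Leq and its duration in seconds."""
    return leq + 10.0 * math.log10(duration_s)

# A 20 s flyover averaging 85 dB(A), compressed into a notional 1 s:
print(round(sel_db(85.0, 20.0), 1))   # -> 98.0 dB(A)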
Many integrating sound-level meters allow the interruption of the integrating process or the summation of energy in separated periods.
32.2.3.2 Statistical Sound-Level Meters In the assessment of environmental noise, for example, that produced by city or motorway traffic, statistical information is often required. Special sound-level meters which analyze the sound levels are available. They provide probability and cumulative level distributions and give percentile levels, for example L10, the level exceeded for 10 percent of the analysis time as illustrated in Figures 32.12 and 32.13. Some of these instruments are equipped with built-in printers and can be programmed to print out a variety of values at preset time intervals.
Figure 32.13 Cumulative distribution plot for traffic noise in a busy city street.
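One simple rank-ordering convention for computing percentile levels such as L10 and L90 from regularly sampled levels is sketched below; the samples are invented, and real statistical meters may use class-interval counting instead.

def percentile_level(levels_db, n):
    """Ln: the level exceeded for n percent of the measurement time."""
    ordered = sorted(levels_db, reverse=True)               # loudest first
    index = max(0, int(round(n / 100.0 * len(ordered))) - 1)
    return ordered[index]

samples = [62, 65, 63, 71, 80, 78, 64, 66, 83, 61]          # invented dB(A) readings
print("L10 =", percentile_level(samples, 10))               # dominated by the peaks
print("L90 =", percentile_level(samples, 90))               # close to the background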
32.2.4 Noise-Exposure Meters/Noise-Dose Meters The measurement of noise exposure to which a worker is subjected may be carried out successfully by an integrating sound-level meter if the worker remains in one location throughout the working day. In situations where he or she moves into and out of noisy areas, or performs noisy operations and moves at the same time, a body-worn instrument is required. BS 6402 describes the requirements for such an instrument. It may be worn in a pocket but the microphone must be capable of being placed on the lapel or attached to a safety helmet. Figure 32.14 shows one in use with the microphone mounted on an earmuff and the rest of the equipment in a breast pocket. The instrument measures A-weighted exposure according to

E = \int_0^T p_A^2(t)\, dt

where E is the exposure (Pa²·h) and T is the time (h). Such an instrument gives the exposure in Pascal-squared hours, where 1 Pa²·h is the equivalent of 8 h exposure to a steady level of 84.9 dB. It may give the answers in terms of percentage of exposure as allowed by government regulations. For example, 100 percent may mean 90 dB for 8 h, which is the maximum allowed with unprotected ears by the current regulations.
Figure 32.14 Noise-exposure meter in use.
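The Pa²·h scale can be checked numerically: a steady level L held for a number of hours gives E = hours × (p0 · 10^(L/20))². The sketch below reproduces the 84.9 dB / 8 h equivalence quoted above, and evaluates the 90 dB / 8 h case used in the 100 percent dose example.

P0 = 2e-5  # reference pressure, Pa

def exposure_pa2h(level_db, hours):
    """A-weighted noise exposure in Pascal-squared hours for a steady level."""
    p = P0 * 10.0 ** (level_db / 20.0)    # rms pressure at this level
    return p * p * hours

print(round(exposure_pa2h(84.9, 8.0), 2))  # ~1.0 Pa^2.h, as stated above
print(round(exposure_pa2h(90.0, 8.0), 2))  # ~3.2 Pa^2.h: the 100 percent dose case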
The actual performance requirements are similar to those of Type 2 sound-level meters and integrating sound-level meters, as accuracy of measurement is much more dependent on the clothing worn by the worker, the helmet or hat, and the direction of microphone location in relation to noise source than on the precision of the measuring instrument.
32.2.5 Acoustic Calibrators It is desirable to check the calibration of equipment from time to time, and this can be done with acoustic calibrators, devices which provide a stable, accurately known sound
pressure to a microphone which is part of an instrument system. Standards covering the requirements for such calibrators are in existence, and they recommend operation at a level of 94 dB (1 Pa) or higher and a frequency of operation between 250 and 1000 Hz. The latter is often used, as it is a frequency at which the A-weighting network provides 0 dB insertion loss, so the calibration is valid for both A-weighted and unweighted measurement systems. One type of calibrator, known as a pistonphone, offers an extremely high order of stability, both short and long term, with known and trusted calibration capability to a tolerance within 0.1 dB. In pistonphones the level is normally 124 dB and the frequency 250 Hz. The reason for the relatively high sound-pressure levels offered by calibrators is the frequent need to calibrate in the field in the presence of high ambient noise levels. Most calibrators are manufactured for operation with one specific microphone or a range of microphones. Some calibrators are so designed that they can operate satisfactorily with microphones of specified (usually standardized) diameters but with different diaphragm constructions and protective grids (see Figures 32.15 and 32.16). In a pistonphone the sound pressure generated is a function of total cavity volume, which means dependence on the diaphragm construction. In order to obtain the accuracy of calibration offered by a pistonphone, the manufacturer's manual must be consulted regarding corrections required for specific types of microphones. The pressure generated by a pistonphone is also a function of static pressure in the
cavity, which is linked with the atmosphere via a capillary tube. For the highest accuracy, atmospheric pressure must be measured and corrections applied. Some calibrators offer more than one level and frequency. Calibrators should be subjected to periodic calibrations in approved laboratories. Calibration is discussed further in Section 32.6.
32.3 Frequency analyzers

Frequently the overall A-weighted level, whether instantaneous, time-weighted, or integrated over a period of time, or even statistical information on its variations, is an insufficient description of a sound. Information on the frequency content of sounds may be required. This may be in terms of content in standardized frequency bands of octave or third-octave bandwidth. There are many standards which call for presentation of noise data in these bands. Where information is required for diagnostic purposes, very detailed, high-resolution analysis is performed. Some analog instruments using tuned-filter techniques are still in use; the modern instrument performs its analyzing function by digital means.
32.3.1 Octave Band Analyzers

These instruments are normally precision sound-level meters which include sets of filters, or allow sets of filters to be connected, so that direct measurements in octave bands may be made. The filters are bandpass filters whose passband encompasses an octave, i.e., the upper band edge frequency equals twice the lower band edge frequency. The filters should have a smooth response in the passband, preferably offering 0 dB insertion loss. The center frequency fm is the geometric mean

\[ f_m = \sqrt{f_2 \cdot f_1} \]

where f2 is the upper band edge frequency and f1 the lower band edge frequency. Filters are known by their nominal center frequencies. Attenuation outside the passband should be very high. ISO 266 and BS 3593 list the "preferred" frequencies, while IEC 225 and BS 2475 give exact performance requirements for octave filters. A typical set of characteristics is given in Figure 32.17. If a filter set is used with a sound-level meter, then such a meter should possess a "linear" response function, so that the only filtering is performed by the filters. Answers obtained in such analyses are known as "octave band levels" in decibels, re 20 µPa.

Figure 32.15 Principle of operation of a portable sound-level calibrator.

Figure 32.16 Pistonphone. (a) Mounting microphones on a pistonphone; (b) cross-sectional view showing the principle of operation.

Figure 32.17 Frequency characteristics of octave filters in a typical analyzer.
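The band-edge relations just given are easy to tabulate. A small sketch added for illustration, using the base-2 definition (standards also define a base-10 variant):

```python
import math

def octave_band(fm):
    """Lower and upper band-edge frequencies of the octave band
    centered (geometrically) on fm: f1 = fm/sqrt(2), f2 = fm*sqrt(2),
    so that f2 = 2*f1 and fm = sqrt(f1*f2)."""
    f1 = fm / math.sqrt(2.0)
    f2 = fm * math.sqrt(2.0)
    return f1, f2

# Nominal octave-band centers spanning the audio range
for fm in [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]:
    f1, f2 = octave_band(fm)
    print(f"{fm:7.1f} Hz band: {f1:8.1f} - {f2:8.1f} Hz")
```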
From time to time, octave analysis is required for the purpose of assessing the importance of specific band levels in terms of their contribution to the overall A-weighted level. Octave band levels may then be corrected by introducing A-weighting attenuation at center frequencies. Some sound-level meters allow the use of filters with A weighting included, thus giving answers in A-weighted octave band levels. A simple portable instrument is likely to contain filters spanning the entire audio frequency range. The analyzer performs by switching filters into the system in turn and indicating results for each individual band. If noises tend to fluctuate, “slow” response of the sound-level meter is used, but this may still be too fast to obtain a meaningful average reading. Special recorders, known as level recorders, may be used with octave band filters/sound-level meters which allow a permanent record to be obtained in the field. Parallel operation of all the filters in a set is also possible with special instruments, where the answers may be recorded in graphical form or in digital form, and where the octave spectrum may be presented on a CRT.
32.3.2 Third-Octave Analyzers

Octave band analysis offers insufficient resolution for many purposes. The next step is third-octave analysis by means of filters meeting ISO 266, IEC 225, and BS 2475. These filters may again be used directly with sound-level meters. As there are three times as many filters covering the same total frequency span of interest, and the bandwidths are narrower, requiring longer averaging to obtain correct answers in fluctuating noise situations, the use of recording devices is almost mandatory. Typical characteristics are shown in Figure 32.18. Apart from the field instruments described above, high-quality filter sets with automatic switching and recording facilities exist, but the modern trend is toward parallel sets of filters used simultaneously, with graphical presentation of third-octave spectra on a CRT.
Figure 32.18 Frequency characteristics of third-octave filters in a typical analyzer.
Figure 32.20 Characteristics of a typical constant bandwidth filter.
Figure 32.19 Output options with a typical digital frequency analyzer.
Digital filter instruments are also available, in which the exact equivalent of conventional, analog filtering techniques is performed. Such instruments provide time coincident analysis, exponential averaging in each band or correct integration with time, presentation of spectra on a CRT and usually XY recorder output plus interfacing to digital computing devices (see Figure 32.19).
32.3.3 Narrow-Band Analyzers

Conventional analog instruments perform the analysis by a tunable filter, which may be of the constant bandwidth type (Figure 32.20) or the constant percentage type (Figure 32.21). Constant percentage analyzers may have a logarithmic frequency sweep capability, while constant bandwidth instruments would usually offer a linear scan. Some constant bandwidth instruments have a synchronized linkup with generators, so that very detailed analysis of harmonics may be performed. The tuning may be manual, mechanical and linked with a recording device, or electronic via a voltage ramp. In the last case computer control via digital-to-analog conversion may be achieved. With narrow-band filtering of random signals, very
Figure 32.21 Bandpass filter selectivity curves for four constant percentage bandwidths.
long averaging times may be required, making analysis very time consuming. Some narrow-band analyzers attempt to compute octave and third-octave bands from narrow-band analysis, but at
present such techniques are valid only for the analyses of stationary signals.
32.3.4 Fast Fourier Transform Analyzers

These are instruments in which input signals are digitized and fast Fourier transform (FFT) computations performed. Results are shown on a CRT, and the output is available in analog form for XY plotters or level recorders, or in digital form for a variety of digital computing and memory devices. Single-channel analyzers are used for signal analysis, and two-channel instruments for system analysis, where it is possible to directly analyze signals at two points in space and then to perform a variety of computations in the instruments. In these, "real-time" analysis means a form of analysis in which all the incoming signal is sampled, digitized, and processed. Some instruments perform real-time analysis over a limited frequency range (for example, up to 2 kHz). If analysis above these frequencies is performed, some data are lost between the data blocks analyzed. In such a case, stationary signal analyses may be performed but transients may be missed. Special transient analysis features are often incorporated (for example, for sonic booms) which work with an internal or external trigger and allow the capture of a data block and its analysis. For normal operation, linear integration or exponential averaging is provided.

The analysis results are given in 250–800 spectral lines, depending on type. In a 400-line analyzer, analysis would be performed in the range of, say, 0–20 Hz, 0–200 Hz, 0–2000 Hz, or 0–20 000 Hz. The spectral resolution is the upper frequency divided by the number of spectral lines, for example 2000/400, giving 5 Hz resolution. Some instruments provide a "zoom" mode, allowing the analyzer to show the results of analysis in part of the frequency range, focusing on a selected frequency in the range and providing much higher resolution (see Figure 32.22). Different forms of cursor are provided by different analyzers, which allow the movement of the cursor to a part of the display to obtain a digital display of level and frequency at that point.

The frequency range of analysis and the spectral resolution are governed by the internal sampling rate generator, and the sampling rate may also be governed by an external pulse source. Such a source may provide pulses proportional to the rotating speed of a machine, so that analysis of the fundamental and some harmonic frequencies may be observed, with the spectral pattern stationary in the frequency domain in spite of rotating speed changes during run-up tests. Most analyzers of the FFT type will have a digital interface, either parallel or serial, allowing direct use with desktop computers in common use. In two-channel analyzers, apart from the features mentioned above, further processing can provide such functions as transfer function (frequency response), correlation functions, coherence function, cepstrum, phase responses, probability distribution, and sound intensity.
Figure 32.22 Scale expansion in presentation of data on spectra.
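To make the resolution arithmetic concrete, a tiny sketch (added here for illustration, assuming a 400-line analyzer as in the text):

```python
def fft_resolution(f_max, lines):
    """Spectral resolution of a baseband FFT analysis:
    upper frequency divided by the number of spectral lines."""
    return f_max / lines

for f_max in [20, 200, 2000, 20000]:
    print(f"0-{f_max} Hz, 400 lines -> {fft_resolution(f_max, 400)} Hz/line")

# A 'zoom' analysis spreads the same 400 lines over a narrow span,
# e.g., 100 Hz centered on a frequency of interest: 0.25 Hz resolution
print(fft_resolution(100, 400))
```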
32.4 Recorders

32.4.1 Level Recorders

Level recorders are recording voltmeters which provide a record of a dc signal or the rms value of an ac signal on a linear or logarithmic scale. Some provide synchronization for octave and third-octave analyzers, so that filter switching is timed for frequency-calibrated recording paper, or synchronization with generators for frequency-response plotting. Such recorders are also useful for recording noise level against time.
32.4.2 XY Plotters

These are recorders in which the Y input plots a dc signal and the X input a dc signal proportional to frequency. They are frequently used with FFT and other analyzers for recording a memorized CRT display.
32.4.3 Digital Transient Recorders

Digital transient recorders are dedicated instruments specifically designed for the capture of transients such as gunshots, sonic booms, mains transients, etc. The incoming signals are
digitized at preset rates and results captured in a memory when commanded by an internal or external trigger system, which may allow pre- and post-trigger recording. The memorized data may be replayed in digital form or via a D/A converter, usually at a rate hundreds or thousands of times slower than the recording speed, so that analog recordings of fast transients may be made.
32.4.4 Tape Recorders

These are used mainly for the purpose of gathering data in the field. Instrumentation tape recorders tend to be much more expensive than domestic types, as there are stringent requirements for stability of characteristics. Direct tape recorders are suitable for recording in the AF range, but where infrasonic signals are of interest, FM tape recorders are used. The difference in range is shown in Figure 32.23, which also shows the effect of changing tape speed. A direct tape recorder is likely to have a better signal-to-noise ratio, but a flatter frequency response and phase response will be provided by the FM type. IRIG standards are commonly followed, allowing recording and replaying on different recorders. The important standardization parameters are head geometry and track configuration, tape speeds, center frequencies, and percentage of frequency modulation. Tape reels are used as well as tape cassettes, and the former offer better performance. Recent developments in digital tape recorders suitable for field-instrumentation use enhance performance significantly in terms of dynamic range.
32.5 Sound-intensity analyzers

Sound-intensity analysis may be performed by two-channel analyzers offering this option. Dedicated sound-intensity analyzers, based on octave or third-octave bands, are available, mainly for the purpose of sound-power measurement and for the investigation of sound-energy flow from sources. The sound-intensity vector is the net flow of sound energy per unit area at a given position, and is the product of sound pressure and particle velocity at the point. The intensity vector component in a given direction r is

\[ I_r = \overline{p \cdot u_r} \]

where the horizontal bar denotes the time average. To measure particle velocity, current techniques rely on the finite-difference approximation, integrating over time the difference in sound pressure at two points, A and B, separated by ΔR, giving

\[ u_r = -\frac{1}{\rho_0} \int \frac{p_B - p_A}{\Delta R}\, dt \]

where pA and pB are the pressures at A and B and ρ0 is the density of air. An outline of the system is given in Figure 32.24. This two-microphone technique requires the two transducers and associated signal paths to have very closely matched phase and amplitude characteristics; only a small fraction of a degree in phase difference may be tolerated.

Figure 32.23 Typical frequency response characteristics of AM and FM recording.
Figure 32.24 Sound-intensity analysis using two channels.
Low frequencies require relatively large microphone separation (say, 50 mm), while at high frequencies only a few millimeters must be used. Rapid developments are taking place in this area.
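The finite-difference relations above translate directly into a discrete-time estimate. A minimal sketch (an illustration of the two-microphone p-p approximation, not a metrology-grade implementation; the spacing dr, air density rho0, and test signal are example values, and a practical analyzer adds band filtering and phase-matching corrections):

```python
import numpy as np

def intensity_pp(pa, pb, fs, dr, rho0=1.21):
    """Sound-intensity estimate from two closely spaced pressure
    signals pa, pb (Pa) sampled at fs (Hz), separated by dr (m).
    p  : pressure at the midpoint (average of the two signals)
    ur : particle velocity, -1/rho0 times the running time-integral
         of the pressure gradient (pb - pa)/dr
    returns the time average of p * ur (W/m^2)."""
    p = 0.5 * (pa + pb)
    grad = (pb - pa) / dr
    ur = -np.cumsum(grad) / fs / rho0
    return np.mean(p * ur)

# Plane-wave check: pb is pa delayed by dr/c, so the estimate should
# approach prms^2 / (rho0 * c) ≈ 1.2e-3 W/m^2 for a 1 Pa peak tone
fs, f, c, dr = 48000, 250.0, 343.0, 0.05
t = np.arange(0, 1.0, 1.0 / fs)
pa = np.sin(2 * np.pi * f * t)
pb = np.sin(2 * np.pi * f * (t - dr / c))
print(intensity_pp(pa, pb, fs, dr))
```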
32.6 Calibration of measuring instruments

32.6.1 Formal Calibration

This is normally performed by approved laboratories, where the instrumentation used is periodically calibrated and traceable to primary standards; the personnel are trained in calibration techniques, temperature is controlled, and atmospheric pressure and humidity are accurately measured. Instruments used for official measurements, for example by consultants, environmental inspectors, factory inspectors, or test laboratories, should be calibrated and certified at perhaps one- or two-year intervals. There are two separate areas of calibration: the transducers, and the measuring, analyzing, and computing instruments. Calibration of transducers, i.e., microphones, requires full facilities for such work. Condenser microphones of standard dimensions can be pressure calibrated by the absolute method of reciprocity. Where accurate free-field corrections are known for a specific type of condenser microphone, these can be added to the pressure calibration. Free-field reciprocity calibration of condenser microphones (IEC 486 and BS 5679) is very difficult to perform and requires a first-class anechoic chamber. Microphones other than condenser (or nonstandard-size condenser) can be calibrated by comparison with absolutely calibrated standard condenser microphones. This again requires an anechoic chamber. Pressure calibration of condenser microphones may be performed by means of electrostatic actuators. Although this is not a standardized method, it offers very good results when compared with the reciprocity method, is much cheaper to perform, and gives excellent repeatability. It is not a suitable method for frequencies below about 50 Hz. Calibration methods for two-microphone intensity probes are at the stage of development and standardization. Sound-level meter calibration is standardized, with a recommendation that acoustic tests on the complete unit are performed. This is a valid requirement if the microphone is attached to the body of the sound-level meter. Where the microphone is normally used on an extension or with a long extension cable, it is possible to calibrate the microphone separately from the electronics. Filters and analyzers require a purely electronic calibration, which is often time consuming if not computerized. Microphone calibrators (see Section 32.2.5) are devices frequently calibrated officially. This work is normally done
with traceable calibrated microphones or by means of direct comparison with a traceable calibrator of the same kind. Sophisticated analyzers, especially those with digital interfaces, lend themselves to computer-controlled checks and calibrations of all the complicated functions. The cost of this work by manual methods would be prohibitive.
32.6.2 Field Calibration

This normally implies the use of a calibrated, traceable microphone calibrator (for example, a pistonphone) which will provide a stable, accurately known sound pressure, at a known frequency, to a microphone used in the field instrument set-up (for example, a sound-level meter). Although such "calibration" is only a calibration at one point in the frequency domain and at one level, it is an excellent check, required by most measurement standards. Its virtue is in showing departures from normal operation, signifying the need for maintenance and recalibration of instrumentation.
32.6.3 System Calibration

Complete instrument systems, for example a microphone with an analyzer, or a microphone with a sound-level meter and a tape recorder, may be calibrated by a good-quality field calibrator. Again, the calibration would be at one frequency and one level, but if all instrument results line up within a tight tolerance, a high degree of confidence in the entire system is achieved.
32.6.4 Field-System Calibration

Some condenser microphone preamplifiers are equipped with a facility for "insert voltage calibration." This method is used mainly with multimicrophone arrays, where the use of a calibrator would be difficult and time consuming to perform. Permanently installed outdoor microphones may offer a built-in electrostatic actuator which allows regular (for example, daily) microphone and system checks.
32.7 The measurement of sound-pressure level and sound level

The measurement of sound-pressure level (spl) implies measurement without any frequency weighting, whereas sound level is frequency-weighted spl. All measurement systems have limitations in the frequency domain, and, even though many instruments have facilities designated as "linear," these limitations must be borne in mind. Microphones have low-frequency limitations, as well as complex responses at
the high-frequency end, and the total response of the measurement system is the combined response of the transducer and the following electronics. When a broad-band signal is measured and spl quoted, this should always be accompanied by a statement defining the limits of the “linear” range, and the response outside it. This is especially true if high levels of infrasound and/or ultrasound are present. In the measurement of sound level, the low-frequency performance is governed by the shape of the weighting filter, but anomalies may be found at the high-frequency end. The tolerances of weighting networks are very wide in this region, the microphones may show a high degree of directionality (see Section 32.2.1.1), and there may also be deviations from linearity in the response of the microphone. Sound-pressure level or sound level is a value applicable to a point in space, the point at which the microphone diaphragm is located. It also applies to the level of sound pressure at the particular point in space, when the location of sound source or sources is fixed and so is the location of absorbing or reflective surfaces in the area as well as of people. In free space, without reflective surfaces, sound radiated from a source will flow through space, as shown in Figure 32.25. However, many practical sources of noise have complex frequency content and also complex radiation patterns, so that location of the microphone and distance
Figure 32.25 Flow of sound from a source.
Figure 32.26 Coaxial circular paths for microphone traverses.
from the source become important, as well as its position relative to the radiation pattern. In free space sound waves approximate to plane waves.
32.7.1 Time Averaging

The measurement of sound-pressure level or sound level (weighted sound-pressure level), unless otherwise specified, means the measurement of the rms value and the presentation of results in decibels relative to the standardized reference pressure of 20 µPa. The rms value is measured with exponential averaging, that is, with the temporal weightings or time constants standardized for sound-level meters. These are known as F and S (previously known as "Fast" and "Slow"). In modern instruments, the rms value is provided by wide-range log-mean-square detectors. The results are presented on an analog meter calibrated in decibels, by a digital display, or by a logarithmic dc output which may easily be recorded on simple recorders.

Some instruments provide the value known as the "instantaneous" or "peak" value. Special detectors are used for this purpose, capable of giving the correct value of signals lasting for microseconds. Peak readings are usually taken with "linear" frequency weighting. In addition to the above, "impulse"-measuring facilities are included in some sound-level measuring instruments. These are special detectors which use averaging circuits that respond very quickly to very short-duration transient sounds but give a very slow decay from the maximum value reached. At the time of writing, no British measurement standards or regulations require or allow the use of the "impulse" value other than for determining whether the sound is of an impulsive character or not. One specific requirement for a maximum "fast" level is in the vehicle drive-by test, according to BS 3425 and ISO 362. The "slow" response gives a slightly less fluctuating readout than the "fast" mode; its main use is in "smoothing" out the fluctuations. "Impulse" time weighting may be found in some sound-level meters, according to IEC 651 and BS 5969. This form of time weighting is mainly used in Germany, where it originated. The value of meters incorporating this feature is in their superior rms detection capability when used in the fast or slow mode.

Occasionally, there is a requirement for the measurement of maximum instantaneous pressure, unweighted. This can be performed with a sound-level meter with a linear response, or with a microphone and amplifier. The maximum instantaneous value, also known as the peak value, may be obtained by displaying the ac output on a storage oscilloscope or by using an instrument which faithfully responds to very short rise times and which has a "hold" facility for the peak value. These hold circuits have limitations, and
their specifications should be studied carefully. The peak value may be affected by the total frequency and phase response of the system.
32.7.2 Long Time Averaging

In the measurement of total sound energy at a point over a period of time (perhaps over a working day), as required by environmental regulations such as those relating to noise on construction and demolition sites or hearing-conservation regulations, special rules apply. The result required is in the form of Leq, or equivalent sound level, which represents the same sound energy as that of the noise in question, however much it fluctuates with time. The only satisfactory way of obtaining the correct value is to use an integrating/averaging sound-level meter to IEC 804 of the correct type. Noises with fast and wide-ranging fluctuations demand the use of Type 0 or Type 1 instruments. Where fluctuations are not so severe, Type 2 will be adequate. The value known as SEL (sound-exposure level), normally available in integrating sound-level meters, is sometimes used for the purpose of describing short-duration events, for example an aircraft flying overhead or an explosion. Here the total energy measured is presented as though it occurred within 1 s. At present, there are no standards for the specific use of SEL in the UK.
32.7.3 Statistical Distribution and Percentiles

Traffic noise and environmental noise may be presented in statistical terms. Requirements for the location of microphones in relation to the ground or facades of buildings are to be found in the relevant regulations. Percentile values are the required parameter. L10 means a level present or exceeded for 10 percent of the measurement time, and L50 the level exceeded for 50 percent of the time. The value is derived from a cumulative distribution plot of the noise, as discussed in Section 32.2.3.2.
32.7.4 Space Averaging

In the evaluation of sound insulation between rooms, a sound is generated and measured in one room and also measured in the room into which it travels. In order to obtain information on the quality of insulation relating to frequency, and to compile an overall figure in accordance with the relevant ISO and BS Standards, the measurements are carried out in octave and third-octave bands. The sound is normally generated as broadband (white) noise or bands of noise. As the required value must represent the transfer of acoustic energy from one room to another, and the distribution of sound in each room may
be complex, "space averaging" must be carried out in both rooms. Band-pressure levels (correctly time averaged) must be noted for each microphone location and the average value found. The space average level for each frequency band is found from

\[ \bar{L}_p = 10 \log \left[ \frac{1}{N} \sum_{i=1}^{N} 10^{0.1 L_{pi}} \right] \]
where Lpi is the sound-pressure level at the ith measurement point and N the total number of measurement points. If large differences are found over the microphone location points in each band, a larger number of measurement points should be used. If very small variations occur, it is quite legitimate to use an arithmetic average value. This kind of measurement may be carried out with one microphone located at specified points in turn, with levels noted and recorded, making sure that suitable time averaging is used at each point. It is possible to use an array of microphones with a scanning (multiplexing) arrangement, feeding the outputs into an averaging analyzer. The space average value may also be obtained by using one microphone on a rotating boom, feeding its output into an analyzer with an averaging facility. The averaging should cover a number of complete revolutions of the boom. An integrating/averaging sound-level meter may be used to obtain the space average value without the need for any calculations. This requires activating the averaging function for identical periods of time at each predetermined location and allowing the instrument to average the values as a total Leq. With the same time allocated to each measurement point, Leq = space average.
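In software the space-average formula is a one-liner. A small sketch (illustrative, not from the original):

```python
import numpy as np

def space_average(levels_db):
    """Space-average band level: the energy average of band-pressure
    levels measured at N microphone positions (equation above)."""
    levels = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (0.1 * levels)))

# With widely spread levels, the energy average sits well above the
# arithmetic mean (75.5 dB here), which is why it must be used
print(space_average([70.0, 73.0, 78.0, 81.0]))  # ≈ 77.4 dB
```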
32.7.5 Determination of Sound Power

As sound radiation from sources may have a very complex pattern, and sound in the vicinity of a source is also a function of the surroundings, it is not possible to describe or quantify the sound emission characteristics in a simple way by making one or two measurements some distance away, nor even by making a large number of such measurements. The only method is to measure the total sound power radiated by the source. This requires the measurement to be carried out in specific acoustic conditions. ISO 3740–3746 and BS 4196 specify the acoustic conditions suitable for such tests and assign the uncertainties which apply in the different conditions. These standards describe the measurement of pressure over a theoretical sphere or hemisphere, or a "box," which encloses the sound source, the computation of the space average, and finally the computation of the sound-power level in decibels relative to 1 pW, or the sound power in watts. The results are often given in octave or third-octave bands or are
Figure 32.27 Microphone positions on equal areas of the surface of a hemisphere.
Figure 32.28 Microphone positions on a parallelepiped and a conformal surface.
A-weighted. Figures 32.27 and 32.28 show the appropriate positions for a microphone. The great disadvantage of obtaining sound power from the measurement of sound pressure is that special, well-defined acoustic conditions are required, and the achievement of such conditions for low uncertainties in measurements is expensive and often impossible. Large, heavy machinery would require a very large free space or large anechoic or reverberation chambers and the ability to run the machinery in them, i.e., provision of fuel or another source of energy, disposal of exhausts, coupling to loads, connection to gearboxes, etc. With smaller sound sources it may be possible to perform the measurement by the substitution or juxtaposition method in situ. These methods are based on measuring the noise of a source in an enclosed space, at one point, noting the results, and then removing the noise source and substituting it with a calibrated sound source and comparing the sound-pressure readings. If the unknown sound source gives a reading of X dB at the point and the calibrated one gives X + N dB, then the sound-power level of the unknown source is N dB lower than that of the calibrated source. The calibrated source may have a variable but calibrated output. In situations where it is not possible to stop the unknown noise source, its output may be adjusted so that the two sources
together, working from the same or nearly the same location, give a reading 3 dB higher than the unknown source. When this happens, the power output of both sources is the same. The two methods described above are simple to use but must be considered as approximations only.
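The 3 dB criterion follows directly from decibel arithmetic (a worked step added here): two equal, incoherent sources produce twice the mean-square pressure of one, so

\[ 10\log_{10}\!\left(2 \times 10^{L/10}\right) = L + 10\log_{10} 2 \approx L + 3.01\ \mathrm{dB}. \]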
32.7.6 Measurement of Sound Power by Means of Sound Intensity

Accurate and reliable sound-power measurement by means of pressure has the great disadvantage of requiring either free-field conditions in an open space or anechoic chamber, or a well-diffused field in a reverberation chamber. Open spaces are difficult to find, as are rain-free and wind-free conditions. Anechoic chambers are expensive to acquire, and it is difficult to achieve good echo-free conditions at low frequencies; neither is it often convenient or practical to take machinery and install it in a chamber. Reverberation rooms are also expensive to build, and diffuse field conditions in them are not very easy to achieve. In all the above conditions machines require transporting, provision of power, gears, loads, etc., and this may mean that other sources of noise will interfere with the measurement.
The technique of measuring sound intensity, as discussed in Section 32.5, allows the measurement of sound power radiated by a source under conditions which would be considered adverse for pressure-measurement techniques. It allows the source to be used in almost any convenient location, for example, on the factory floor where it is built or assembled or normally operating, in large or small enclosures, near reflecting surfaces and, what is extremely important, in the presence of other noises which would preclude the pressure method. It allows the estimation of sound power output of one item in a chain of several. At present the standards relating to the use of this technique for sound-power measurement are being prepared but, due to its many advantages, the technique is finding wide acceptance. In addition, it has great value in giving the direction of energy flow, allowing accurate location of sources within a complex machine.
32.8 Effect of environmental conditions on measurements
32.8.1 Temperature

The first environmental condition most frequently considered is temperature. Careful study of instrument specifications will show the range of temperatures in which the equipment will work satisfactorily, and in some cases the effect (normally quite small) on the performance. Storage temperatures may also be given in the specifications. It is not always sufficient to look at the air temperature; metal instrument cases, calibrators, microphones, etc., may be heated to quite high temperatures by direct sunlight. Instruments stored at very low temperatures, for example those carried in the boot of a car in the middle of winter, when brought into warm and humid surroundings may become covered by condensation, and their performance may suffer. In addition, battery life is significantly shortened at low temperatures.
32.8.2 Humidity and Rain
Figure 32.29 Noise levels induced in a half-inch free-field microphone fitted with a nosecone. A: With standard protection grid, wind parallel to diaphragm; B: as A but with wind at right angle to diaphragm; C: as B with windscreen; D: as A with windscreen.
The range of relative humidity for which the instruments are designed is normally given in the specifications. Direct rain on instruments not specifically housed in showerproof (NEMA 4) cases must be avoided. Microphones require protective measures recommended by the manufacturers.
32.8.3 Wind

This is probably the worst enemy. Wind impinging on a microphone diaphragm will cause an output, mainly at the lower end of the spectrum (see Figure 32.29). Serious measurements in very windy conditions are virtually impossible, as are measurements of low-level noises even in slight wind conditions. In permanent noise-monitoring systems it is now common practice to include an anemometer so that high-wind conditions may be identified and measurements discarded or treated with suspicion. Windscreens on microphones offer some help but do not eliminate the problems. They also have the beneficial effect of protecting the diaphragm from dust and chemical pollution, perhaps even showers. All windscreens have some effect on the frequency response of microphones.

32.8.4 Other Noises

The measurement of one noise in the presence of another does cause problems if the noise to be measured is not at least 10 dB higher than the other noise or noises. Provided the two noise sources do not have identical or very similar spectra, and both are stable with time, corrections may be made as shown in Figure 32.30. Sound-intensity measuring techniques may be used successfully in cases where other noises are as high as the noise to be measured, or even higher.
Figure 32.30 Noise-level addition chart.
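The corrections embodied in a noise-level addition chart such as Figure 32.30 amount to adding or subtracting mean-square pressures. A brief sketch (an illustration, not the chart itself):

```python
import math

def db_sum(l1, l2):
    """Combined level of two incoherent noises (dB)."""
    return 10 * math.log10(10 ** (l1 / 10) + 10 ** (l2 / 10))

def db_correct(total, background):
    """Level of the noise of interest, correcting a total reading
    for a steady background at least a few dB below it."""
    return 10 * math.log10(10 ** (total / 10) - 10 ** (background / 10))

print(db_sum(80.0, 80.0))      # ≈ 83.0: two equal sources add 3 dB
print(db_correct(83.0, 80.0))  # ≈ 80.0: recover the source level
```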
References

Burns, W. and Robinson, D. W., Hearing and Noise in Industry, HMSO, London (1970).
Hassall, J. R. and Zaveri, K., Acoustic Noise Measurements, Bruel and Kjaer, Copenhagen (1979).
ISO Standards Handbook 4, Acoustics, Vibration and Shock (1985).
Randall, R. B., Frequency Analysis, Bruel and Kjaer, Copenhagen (1977).

Further Reading

Anderson, J. and Bratos-Anderson, M., Noise: Its Measurement, Analysis, Rating and Control, Ashgate Publishing (1993).
Harris, C. M., Handbook of Acoustical Measurements and Noise Control, McGraw-Hill, New York (1991).
Wilson, C. E., Noise Control: Measurement, Analysis and Control of Sound and Vibration, Krieger (1994).
Part V
Controllers, Actuators, and Final Control Elements
Chapter 33
Field Controllers, Hardware and Software W. Boyes
33.1 Introduction

In this section, we look at the hardware and software that take the measurements from the field sensors and the instructions from the control system and provide the instructions to the final control elements that form the end points of the control loops in a plant. In the past 30 years, there have been amazing changes in the design and construction of field controllers and the software that operates on them, while there have been many fewer changes in the design of the final control elements themselves.

It all started with Moore's law, of course. In the April 1965 issue of Electronics Magazine, Intel cofounder Gordon E. Moore described the doubling of electronic capabilities. "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year …" he wrote. "Certainly over the short term this rate can be expected to continue, if not to increase." Even though there were and are pundits who believe that Moore's law will finally be exceeded, the cost and power of electronic products continue to follow his law. Costs drop by half and power increases by a factor of two every two years. This has now been going on for over 43 years. Every time it looks like there will be a slowdown, new processes are developed to continue to make more and more powerful electronics less expensively.

So, what has this meant? Manufacturing, indeed all of society, has been radically changed by the applications of Moore's law. In 1965, manufacturing was done with paper routers and instructions. Machining was done by hand, according to drawings. Drawings themselves were done with pencil or India ink by draftspeople who did nothing else all day. Engineers used slide rules. Machines were controlled by electromechanical relays, mechanical timers, and human operators. If a new product was required, the production lines needed to be shut down, redesigned, rewired, and restarted, often at the cost of months of lost production.
The computers that put a man on the moon in 1969 had far less processing capability than the average inexpensive cell phone does in 2009.
33.2 Field controllers, hardware, and software

In 1968, working on parallel paths, Richard Morley, of Bedford Associates (later Modicon; see Figure 33.1), and Otto Struger, of Allen-Bradley Co., created the first programmable logic controllers (PLCs). These devices were developed
Figure 33.1 Richard Morley with the Modicon. Courtesy of Richard Morley.
to replace hardwired discrete relay logic control systems in discrete manufacturing scenarios such as automotive assembly lines. The first PLCs used dedicated microprocessors running proprietary real-time operating systems (RTOSs) and were programmed using a special programming language called "ladder logic," created by Morley and his associates. Ladder logic came about as a digital adaptation of the ladder diagrams electricians used to create relay logic prior to the advent of PLCs. The first-generation PLCs had 4 kilobytes of memory, maximum. They revolutionized industrial production, in both the discrete and the process fields.
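Ladder logic itself is graphical, but the Boolean evaluation behind a rung is easy to mimic. As a rough analogy added here (not Morley's notation), consider the classic motor start/stop seal-in rung, where the start contact is paralleled by the motor's own auxiliary contact and placed in series with a normally closed stop contact:

```python
def seal_in_rung(start, stop, motor):
    """One scan of a start/stop seal-in rung:
    |--[ START ]--+--[/STOP]--( MOTOR )--|
    |--[ MOTOR ]--+
    The output latches on after START pulses and drops when STOP opens."""
    return (start or motor) and not stop

motor = False
for start, stop in [(True, False), (False, False), (False, True), (False, False)]:
    motor = seal_in_rung(start, stop, motor)
    print(motor)  # True, True, False, False
```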
In 1976, Robert Metcalfe, of Xerox, and his assistant, David Boggs, published Ethernet: Distributed Packet-Switching for Local Computer Networks. Most computers and PLCs today use some form of Ethernet to move data. The patents, interestingly, were not on the software but on the chips to produce hubs, routers, and the like, which had become practical because of Moore's law.

In 1981, Moore's law permitted IBM to release its first Personal Computer, or PC. It ran with 16 kilobytes of RAM on an Intel 4.77 MHz 8088 chip. Each of the original PCs was more powerful than the triple modular redundant computers that still (in 2009) drive the space shuttle.

By 1983, Moore's law had progressed to the point where a joint venture of Yamatake and Honeywell produced the first "smart transmitter." This was a field device: a pressure transmitter with an onboard microprocessor, a computer inside a field instrument that could communicate digitally and be programmed like a computer. Other companies quickly followed suit.

In 1996, Fisher-Rosemount Inc., now Emerson Process Management, changed the definition of a distributed control system by combining a commercial off-the-shelf (COTS) PC made by Dell with a proprietary field controller and a suite of integrated proprietary software, running over standard Ethernet networks, and called it the DeltaV. This device was possible only because Moore's law had made the PC powerful enough to replace the "big iron" proprietary computers used in previous DCS designs, both from Fisher-Rosemount and other vendors.

In 2002, Craig Resnick, an analyst with ARC Advisory Group, coined the name programmable automation controller (PAC) for an embedded PC running either a version of Windows or a proprietary RTOS (see Figure 33.2).

In 1968, process field controllers were of the analog type: standalone single-loop controllers. In 2009, even standalone single-loop controllers are digital microprocessor-based special-purpose computers. They may even be PACs. The time since 1968 has seen the convergence of the discrete PLC controller and the process loop controller. Products like the ControlLogix platform from Rockwell Automation (the successor to Allen-Bradley Co.), the Simatic S7 platform from Siemens, the C200 platform from Honeywell, or any number of other PAC platforms now combine the discrete digital input/output features of the original PLCs and the advanced loop control functions of the analog single-loop controller.
Figure 33.2 A programmable automation controller. Courtesy of Advantech Inc.
Chapter 34
Advanced Control for the Plant Floor Dr. James R. Ford, P. E.
34.1 Introduction

Advanced process control (APC) is a fairly mature body of engineering technology. Its evolution closely mirrors that of the digital computer and its close cousin, the modern microprocessor. APC was born in the 1960s, evolved slowly and somewhat painfully through its adolescence in the 1970s, flourished in the 1980s (with the remarkable advances in computers and digital control systems, or DCSs), and reached maturity in the 1990s, when model predictive control (MPC) ascended to the throne of supremacy as the preferred approach for implementing APC solutions.

As Zak Friedman1 dared to point out in a recent article, the current decade has witnessed tremendous APC industry discontent, self-examination, and retrenchment. He lists several reasons for the malaise, among them: "cutting corners" on the implementation phase of the projects, poor inferred property models (these are explained later), tying APC projects to "optimization" projects, and too-cozy relationships between APC software vendors/implementers and their customers. In a more recent article2 in the same magazine, Friedman interviewed me because I offer a different explanation for the APC industry problems.

This chapter traces the history of the development of process control, advanced process control, and related applied engineering technologies and discusses the reasons that I think the industry has encountered difficulties. The chapter presents some recommendations to improve the likelihood of successful APC project implementation and makes some predictions about the future direction of the technology.
34.2 Early developments

The discovery of oil in Pennsylvania in 1859 was followed immediately by the development of processes for separating and recovering the main distillable products, primarily kerosene, heating oil, and lubricants. These processes were initially batch in nature. A pot of oil was heated to boiling, and the resulting vapor was condensed and recovered in smaller batches. The first batch in the process was extremely light (virgin naphtha), and the last batch was heavy (fuel oil or lubricating oil). Eventually, this process was transformed from batch to continuous, providing a means of continuously feeding fresh oil and recovering all distillate products simultaneously. The heart of this process was a unit operation referred to as countercurrent, multicomponent, two-phase fractionation.

Whereas the batch process was manual in nature and required very few adjustments (other than varying the heat applied to the pot), the continuous process required a means of making adjustments to several important variables, such as the feed rate, the feed temperature, the reflux rate, and so on, to maintain stable operation and to keep products within specifications. Manually operated valves were initially utilized to allow an operator to adjust the important independent variables. A relatively simple process could be operated in a fairly stable fashion with this early "process control" system. Over the next generation of process technology development, process control advanced from purely manual, open-loop control to automatic, closed-loop control. To truly understand this evolution, we should examine the reasons that this evolution was necessary and how those reasons impact the application of modern process control technology to the operation of process units today.
34.3 The need for process control
Why do we need process control at all? The single most important reason is to respond to process disturbances. If process disturbances did not occur, the manual valves mentioned here would suffice for satisfactory, stable operation of process plants. What, then, do we mean by a process disturbance?
1. “Has the APC Industry Completely Collapsed?,” Hydrocarbon Processing, January 2005, p. 15. 2. “Jim Ford’s Views on APC,” Hydrocarbon Processing, November 2006, p. 19. ©2010 Elsevier Inc. All rights reserved. doi: 10.1016/B978-0-7506-8308-1.00034-6
A process disturbance is a change in any variable that affects the flow of heat and/or material in the process. We can further categorize disturbances in two ways: by time horizon and by measurability. Some disturbances occur slowly over a period of weeks, months, or years. Examples of this type of disturbance are:

● Heat exchanger fouling. Slowly alters the rate of heat transfer from one fluid to another in the process.
● Catalyst deactivation. Slowly affects the rate, selectivity, and so on of the reactions occurring in the reactor.
Automatic process control was not developed to address long time-horizon disturbances. Manual adjustment for these types of disturbances would work almost as well. So, automatic process control is used to rectify disturbances that occur over a much shorter time period of seconds, minutes, or hours. Within this short time horizon, there are really two main types of disturbances: measured and unmeasured.
34.4 Unmeasured disturbances

Automatic process control was initially developed to respond to unmeasured disturbances. For example, consider the first automatic devices used to control the level of a liquid in a vessel. (See Figure 34.1.) The liquid level in the vessel is sensed by a float. The float is attached to a lever. A change in liquid level moves the float up or down, which mechanically or pneumatically moves the lever, which is connected to a valve. When the level goes up the valve opens, and vice versa. The control loop is responding to an unmeasured disturbance, namely, the flow rate of material into the vessel. The first automatic controllers were not very sophisticated. The float span, lever length, connection to the valve,
Figure 34.1 Float-actuated level control diagram.
and control valve opening had to be designed to handle the full range of operation. Otherwise, the vessel could overflow or drain out completely. This type of control had no specific target or “set point.” At constant inlet flow, the level in the vessel would reach whatever resting position resulted in the proper valve opening to make the outflow equal to the inflow. Level controllers were probably the first type of automatic process controller developed because of the mechanical simplicity of the entire loop. Later on, it became obvious that more sophisticated control valves were needed to further automate other types of loops. The pneumatically driven, linear-position control valve evolved over the early 20th century in all its various combinations of valve body and plug design to handle just about any type of fluid condition, pressure drop, or the like. The development of the automatic control valve ushered in the era of modern process control.
34.5 Automatic control valves

The first truly automatic control valves were developed to replace manual valves to control flow. This is the easiest type of variable to control, for two reasons. First, there is essentially no dead time and very little measurement lag between a change in valve opening and a change in the flow measurement. Second, a flow control loop is not typically subjected to a great deal of disturbance. The only significant disturbance is a change in upstream or downstream pressure, such as might occur in a fuel gas header supplying fuel gas through a flow controller for firing a heater or boiler. Other less significant disturbances include changes in the temperature and density of the flowing fluid.

The flow control loop has become the foundation of all automatic process control, for several reasons. Unlike pressure and temperature, which are intensive variables, flow is an extensive variable. Intensive variables are key control variables for stable operation of process plants because they relate directly to composition. Intensive variables are usually controlled by adjusting flows, the extensive variables. In this sense, intensive variables are higher in the control hierarchy. This explains why a simple cascade almost always involves an intensive variable as the master, or primary, in the cascade and flow as the slave, or secondary, in the cascade. When a pressure or temperature controller adjusts a control valve directly (rather than the flow in a cascade), the controller is actually adjusting the flow of material through the valve.

What this means, practically speaking, is that, unlike the intensive variables, a flow controller has no predetermined or "best" target for any given desired plant operation. The flow will be wherever it needs to be to maintain the higher-level intensive variable at its "best" value. This explains why "optimum" unit operation does not require accurate flow measurement. Even with significant error in flow measurement, the target for the measured flow will be adjusted
(in open or closed loop) to maintain the intensive variable at its desired target. This also explains why orifices are perfectly acceptable as flow controller measurement devices, even though they are known to be rather inaccurate. These comments apply to almost all flow controllers, even important ones like the main unit charge rate controller. The target for this control will be adjusted to achieve an overall production rate, to push a constraint, to control the inventory of feed (in a feed drum), and so on. What about additional feed flows, such as the flow of solvent in an absorption or extraction process? In this case, there is a more important, higher-level control variable, an intensive variable—namely, the ratio of the solvent to the unit charge rate. The flow of solvent will be adjusted to maintain a “best” solvent/feed ratio. Again, measurement accuracy is not critical; the ratio target will be adjusted to achieve the desired higher-level objective (absorption efficiency, etc.), regardless of how much measurement inaccuracy is present. Almost all basic control loops, either single-loop or simple cascades, are designed to react, on feedback, to unmeasured disturbances. Reacting to unmeasured disturbances is called servo, or feedback control. Feedback control is based on reacting to a change in the process variable (the PV) in relation to the loop target, or set point (the SP). The PV can change in relation to the SP for two reasons: either because a disturbance has resulted in an unexpected change in the PV or because the operator or a higher-level control has changed the SP. Let’s ignore SP changes for now.
34.6 Types of feedback control

So, feedback control was initially designed to react to unmeasured disturbances. The problem in designing these early feedback controllers was figuring out how much and how fast to adjust the valve when the PV changed. The first design was the float-type level controller described earlier. The control action is referred to as proportional because the valve opening is linearly proportional to the level. The higher the level, the more open the valve. This type of control may have been marginally acceptable for basic levels in vessels, but it was entirely deficient for other variables. The main problem is that proportional-only control cannot control to a specific target or set point. There will always be offset between the PV and the desired target.

To correct this deficiency, the type of control known as integral, or reset, was developed. This terminology is based on the fact that, mathematically, the control action to correct the offset between SP and PV is based on the calculus operation known as integration. (The control action is based on the area under the SP-PV offset curve, integrated over a period of time.) The addition of this type of control action represented a major improvement in feedback control. Almost all flow and pressure loops can be controlled very well with a combination of proportional and integral
control action. A less important type of control action was developed to handle situations in which the loop includes significant measurement lag, such as is often seen in temperature loops involving a thermocouple, inside a thermowell, which is stuck in the side of a vessel or pipe. Control engineers noted that these loops were particularly difficult to control, because the measurement lag introduced instability whenever the loops were tuned to minimize SP-PV error. For these situations, a control action was developed that reacts to a change in the "rate of change" of the PV. In other words, as the PV begins to change its "trajectory" with regard to the SP, the control action is "reversed," or "puts on the brakes," to head off the change that is coming, as indicated by the change in trajectory. The control action was based on comparing the rate of change of the PV over time, or the derivative of the PV. Hence the name of the control action: derivative. In practice, this type of control action is utilized very little in the tuning of basic control loops.

The other type of change that produces an offset between SP and PV is an SP change. For flow control loops, which are typically adjusted to maintain a higher-level intensive variable at its target, a quick response to the SP change is desirable; otherwise, additional response lag is introduced. A flow control loop can and should be tuned to react quickly and equally effectively to both PV disturbances and SP changes.

Unfortunately, for intensive variables, a different closed-loop response for PV changes due to disturbances vs. SP changes is called for. For temperature, and especially pressure, these loops are tuned as tightly as possible to react to disturbances. This is because intensive variables are directly related to composition, and good control of composition is essential for product quality and yield. However, if the operator makes an SP change, a much less "aggressive" control action is preferred. This is because the resulting composition change will induce disturbances in other parts of the process, and the goal is to propagate this disturbance as smoothly as possible so as to allow other loops to react without significant upset.

Modern DCSs can provide some help in this area. For example, the control loop can be configured to take proportional action on PV changes only, ignoring the effect of SP changes. Then, following an SP change, integral action will grind away on correcting the offset between SP and PV. However, these features do not fully correct the deficiency discussed earlier. This dilemma plagues control systems to this very day and is a major justification for implementation of advanced controls that directly address this and other shortcomings of basic process control.
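A discrete positional PID sketch can make the "proportional on PV" option concrete (an illustration added here, not any vendor's algorithm; gains, units, and the class interface are hypothetical):

```python
class PID:
    """Positional PID sketch. With p_on_pv=True, proportional and
    derivative action work on the measured PV only, so an SP change
    enters solely through the integral term (the gentler SP response
    described in the text); disturbances still get the full action."""

    def __init__(self, kp, ki, kd, dt, p_on_pv=True):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.p_on_pv = p_on_pv
        self.integral = 0.0
        self.prev_pv = None

    def update(self, sp, pv):
        error = sp - pv
        self.integral += self.ki * error * self.dt   # reset action
        if self.prev_pv is None:
            self.prev_pv = pv
        d_pv = (pv - self.prev_pv) / self.dt         # derivative on PV
        self.prev_pv = pv
        if self.p_on_pv:
            p_term = -self.kp * pv  # P on measurement: no kick on SP steps
        else:
            p_term = self.kp * error
        return p_term + self.integral - self.kd * d_pv
```

On an SP step with p_on_pv enabled, the proportional term does not jump; the integral term ramps the output toward the new target, which is exactly the smoother propagation of SP changes the text calls for.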
34.7 Measured disturbances
The other major type of disturbance is the measured disturbance. Common examples are changes in charge rate,
cooling water temperature, steam header pressure, fuel gas header pressure, heating medium temperature and ambient air temperature, where instruments are installed to measure those variables. The first 20 years of the development of APC technology focused primarily on using measured disturbance information for improving the quality of control. Why? Modern process units are complex and highly interactive. The basic control system, even a modern DCS, is incapable of maintaining fully stable operation when disturbances occur. APC was developed to mitigate the destabilizing effects of disturbances and thereby to reduce process instability. This is still the primary goal of APC. Any other claimed objective or direct benefit is secondary. Why is it so important to reduce process instability? Process instability leads to extremely conservative operation so as to avoid the costly penalties associated with instability, namely, production of off-spec product and violation of important constraints related to equipment life and human safety. Conservative operation means staying well away from constraint limits. Staying away from these limits leaves a lot of money on the table in terms of reduced yields, lower throughput, and greater energy consumption. APC reduces instability, allowing for operation much closer to constraints and thereby capturing the benefits that would otherwise be lost. As stated earlier, early APC development work focused on improving the control system’s response to measured disturbances. The main techniques were called feed-forward, compensating, and decoupling. In the example of a fired heater mentioned earlier, adjusting the fuel flow for changes in heater charge rate and inlet temperature is feed-forward. The objective is to head off upsets in the heater outlet temperature that are going to occur because of these feed changes. In similar fashion, the fuel flow can be “compensated” for changes in fuel gas header pressure, temperature, density, and heating value, if these measurements are available. Finally, if this is a dual-fuel heater (fuel gas and fuel oil), the fuel gas flow can be adjusted when the fuel oil flow changes so as to “decouple” the heater from the firing upset that would otherwise occur. This decoupling is often implemented as a heater fired duty controller. A second area of initial APC development effort focused on controlling process variables that are not directly measured by an instrument. An example is reactor conversion or severity. Hydrocracking severity is often measured by how much of the fresh feed is converted to heating oil and lighter products. If the appropriate product flow measurements are available, the conversion can be calculated and the reactor severity can then be adjusted to maintain a target conversion. Work in this area of APC led to the development of a related body of engineering technology referred to as inferred properties or soft sensors. A third area of APC development work focused on pushing constraints. After all, if the goal of APC is to reduce
process instability so as to operate closer to constraints, why not implement APCs that accomplish that goal? (Note that this type of APC strategy creates a measured process disturbance; we are going to move a major independent variable to push constraints, so there had better be APCs in place to handle those disturbances.) Especially when the goal was to increase the average unit charge rate by pushing known, measured constraints, huge benefits could often be claimed for these types of strategies. In practice, these types of strategies were difficult to implement and were not particularly successful. While all this development work was focused on reacting to changes in measured disturbances, the problems created by unmeasured disturbances continued to hamper stable unit operation (and still do today). Some early effort also focused on improving the only tool available at the time, the proportional-integral-derivative (PID) control algorithm, to react better to unmeasured disturbances. One of the main weaknesses of PID is its inability to maintain stable operation when there is significant dead time and/or lag between the valve movement and the effect on the control variable. For example, in a distillation column, the reflux flow is often adjusted to maintain a stable column tray temperature. The problem arises when the tray is well down the tower. When an unmeasured feed composition change occurs, upsetting the tray temperature, the controller responds by adjusting the reflux flow. But there may be dead time of several minutes before the change in reflux flow begins to change the tray temperature. In the meantime, the controller will have continued to take more and more integral action in an effort to return the PV to SP. These types of loops are difficult (or impossible) to tune. They are typically detuned (small gain and integral) but with a good bit of derivative action left in as a means of “putting on the brakes” when the PV starts returning toward SP. Some successes were noted with algorithms such as the Smith Predictor, which relies on a model to predict the response of the PV to changes in controller output. This algorithm attempts to control the predicted PV (the PV with both dead time and disturbances included) rather than the actual measured PV. Unfortunately, even the slightest model mismatch can cause the controller using the Smith Predictor to become unstable. We have been particularly successful in this area with development of our “smart” PID control algorithm. In its simplest form, it addresses the biggest weakness of PID, namely, the overshoot that occurs because the algorithm continues to take integral action to reduce the offset between SP and PV, even when the PV is returning to SP. Our algorithm turns the integral action on and off according to a proven decision process made at each controller execution. This algorithm excels in loops with significant dead time and lag. We use this algorithm on virtually all APC projects.
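The decision logic inside the "smart" PID described above is proprietary, but one plausible reconstruction of the general idea (suspend integral action while the PV is already heading back toward set point) looks like the sketch below; the rule and names are illustrative assumptions, not the vendor's algorithm:

```python
def smart_reset(e, prev_e, integral, ki, dt):
    """One execution of a conditional-integration rule: a plausible
    reconstruction of the behavior described in the text, NOT the
    vendor's proprietary logic. Integral action is taken only while
    the PV is holding or drifting away from set point; it is held
    while the PV is already returning, which is what produces the
    overshoot in plain PID on dead-time-dominant loops."""
    returning_to_sp = abs(e) < abs(prev_e)   # error already shrinking
    if not returning_to_sp:
        integral += ki * e * dt              # normal reset contribution
    return integral                          # held while PV is on its way back
```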
34.8 The need for models
By the mid-1980s, many consulting companies and in-house technical staffs were involved in the design and implementation of the types of APC strategies described in the last few paragraphs. A word that began to appear more and more in association with APC was model. For example, to implement the constraint-pushing APC strategies we’ve discussed, a “dynamic” model was needed to relate a change in the independent variable (the charge rate) to the effect on each of the dependent, or constraint, variables. With this model, the adjustments that were needed to keep the most constraining of the constraint variables close to their limits could be determined mathematically. Why was it necessary to resort to development of models? As mentioned earlier, many of the early constraint-pushing efforts were not particularly successful. Why not? It’s the same problem that plagues feedback control loops with significant dead time and lag. There are usually constraints that should be honored in a constraint-pushing strategy that may be far removed (in time) from where the constraint-pushing move is made. Traditional feedback techniques (PID controllers acting through a signal selector) do not work well for the constraints with long dead time. We addressed this issue by developing special versions of our smart PID algorithm to deal with the long dead times, and we were fairly successful in doing so.
34.9 The emergence of MPC
In his Ph.D. dissertation work, Dr. Charles Cutler developed a technique that incorporated normalized step-response models for the constraint (or control) variables, or CVs, as a function of the manipulated variables, or MVs. This allowed the control problem to be “linearized,” which then permitted the application of standard matrix algebra to estimate the MV moves to be made to keep the CVs within their limits. He called the matrix of model coefficients the dynamic matrix and developed the dynamic matrix control (DMC) technique. He also incorporated an objective function into the DMC algorithm, turning it into an “optimizer.” If the objective function is the sum of the variances between the predicted and desired values of the CVs, DMC becomes a minimum variance controller that minimizes the output error over the controller time horizon. Thus was ushered in the control technology known in general as multivariable, model-predictive control (MVC or MPC). Dr. Cutler’s work led to the formation of his company, DMC Corporation, which was eventually acquired by AspenTech. The current version of this control software is known as DMCPlus. There are many competing MPC products, including Honeywell RMPCT, Invensys Connoisseur, and others.
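A toy illustration of the dynamic-matrix idea for a single MV/CV pair follows; the step-response coefficients and horizons are made-up values, and a real DMC-type controller adds move suppression, constraints, and an economic objective on top of this least-squares core:

```python
import numpy as np

# Toy single-MV, single-CV illustration of the dynamic-matrix idea.
# s[i] = CV response i intervals after a unit MV step (made-up numbers;
# a real controller identifies these coefficients by plant testing).
s = np.array([0.0, 0.1, 0.3, 0.6, 0.8, 0.9, 1.0, 1.0])
P, M = len(s), 3            # prediction horizon P, control horizon M

# Dynamic matrix A: column j is the step response shifted j intervals,
# so predicted CV change = A @ du for a planned sequence of MV moves du.
A = np.zeros((P, M))
for j in range(M):
    A[j:, j] = s[:P - j]

e = np.full(P, 2.0)         # predicted CV error from target over the horizon
du, *_ = np.linalg.lstsq(A, e, rcond=None)   # minimum-variance move plan
print(du)   # only the first move is implemented; the plan is recomputed
            # (with a fresh predicted-vs-measured bias) at each execution
```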
MPC has become the preferred technology for solving not only multivariable control problems but just about any control problem more complicated than simple cascades and ratios. Note that this technology no longer relies on traditional servo control techniques, which were first designed to handle the effect of unmeasured disturbances and which have done a fairly good job for about 100 years. MPC assumes that our knowledge of the process is perfect and that all disturbances have been accounted for. There is no way for an MPC to handle unmeasured disturbances other than to readjust at each controller execution the bias between the predicted and measured value of each control variable. This can be likened to a form of integral-only control action. This partially explains MPC’s poor behavior when challenged by disturbances unaccounted for in the controller. DMCPlus uses linear, step-response models, but other MPC developers have incorporated other types of models. For example, Pavilion Technologies has developed a whole body of modeling and control software based on neural networks. Since these models are nonlinear, they allow the user to develop nonlinear models for processes that display this behavior. Polymer production processes (e.g., polypropylene) are highly nonlinear, and neural net-based controllers are said to perform well for control of these processes. GE (MVC) uses algebraic models and solves the control execution prediction problem with numerical techniques.
34.10 MPC vs. ARC
There are some similarities between the older APC techniques (feed-forward, etc.) and MPC, but there are also some important differences. Let’s call the older technique advanced regulatory control (ARC). To illustrate, let’s take a simplified control problem, such as a distillation column where we are controlling a tray temperature by adjusting the reflux flow, and we want feed-forward action for feed rate changes. The MPC will have two models: one for the response of the temperature to feed rate changes and one for the response of the temperature to reflux flow changes. For a feed rate change, the controller knows that the temperature is going to change over time, so it estimates a series of changes in reflux flow required to keep the temperature near its desired target. The action of the ARC is different. In this case, we want to feed-forward the feed rate change directly to the reflux flow. We do so by delaying and lagging the feed rate change (using a simple dead time and lag algorithm customized to adjust the reflux with the appropriate dynamics), then adjusting the reflux with the appropriate steady-state gain or sensitivity (e.g., three barrels of reflux per barrel of feed). The ultimate sensitivity of the change in reflux flow to a change in feed rate varies from day to day; hence, this type of feed-forward control is adaptive and, therefore, superior
to MPC (the MPC models are static). Note: Dr. Cutler recently formed a new corporation, and he is now offering adaptive DMC, which includes real-time adjustment of the response models. How does the MPC handle an unmeasured disturbance, such as a feed composition change? As mentioned earlier, it can do so only when it notices that the temperature is not where it’s supposed to be according to the model prediction from the series of recent moves of the feed rate and reflux. It resets the predicted vs. the actual bias and then calculates a reflux flow move that will get the temperature back where it’s supposed to be, a form of integral-only feedback control. On the other hand, the ARC acts in the traditional feedback (or servo) manner with either plain or “smart” PID action.
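A minimal sketch of the ARC-style feed-forward described above (dead time plus first-order lag plus steady-state gain) might look like the following; the gain and timing parameters are placeholders to be fitted per application:

```python
from collections import deque

class DelayLagFeedForward:
    """Dead time + first-order lag + steady-state gain, one sample per
    update() call. Parameter values are placeholders to be fitted."""

    def __init__(self, gain=3.0, dead_time_steps=10, lag_steps=20):
        self.gain = gain
        n = dead_time_steps + 1
        self.delay = deque([0.0] * n, maxlen=n)   # delay line
        self.alpha = 1.0 / max(lag_steps, 1)      # first-order filter constant
        self.lagged = 0.0

    def update(self, feed_rate_change):
        self.delay.append(feed_rate_change)
        delayed = self.delay[0]                   # input from dead_time_steps ago
        self.lagged += self.alpha * (delayed - self.lagged)   # first-order lag
        return self.gain * self.lagged            # e.g., bbl reflux per bbl feed
```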
34.11 Hierarchy
Prior to MPC, most successful APC engineers used a process engineering-based, hierarchical approach to developing APC solutions. The bottom of the control hierarchy, its foundation, is what we referred to earlier as “basic” process control, the single loops and simple cascades that appear on P&IDs and provide the operator with the first level of regulatory control. Simple processes that are not subject to significant disturbances can operate in a fairly stable fashion with basic process control alone. Unfortunately, most process units in refineries and chemical plants are very complex, highly interactive, and subject to frequent disturbances. The basic control system is incapable of maintaining fully stable operation when challenged by these disturbances—thus the emergence of APC to mitigate the destabilizing effects of disturbances. The hierarchical approach to APC design identifies the causes of the disturbances in each part of the process, then layers the solutions that deal with the disturbances on top of the basic control system, from the bottom up. Each layer adds complexity and its design depends on the disturbance(s) being dealt with. As an example of the first layer of the APC hierarchy, consider the classic problem of how to control the composition of distillation tower product streams, such as the overhead product stream. The hierarchical approach is based on identifying and dealing with the disturbances. The first type of disturbance that typically occurs is caused by changes in ambient conditions (air temperature, rainstorms, etc.), which lead to a change in the temperature of the condensed overhead vapor stream, namely the reflux (and overhead product). This will cause the condensation rate in the tower to change, leading to a disturbance that will upset column separation and product qualities. Hierarchical APC design deals with this disturbance by specifying, as the first level above basic control, internal reflux control (IRC). The IRC first calculates the net internal reflux with an equation that includes the heat of vaporization, heat
capacity and flow of the external reflux, the overhead vapor temperature, and the reflux temperature. The control next back-calculates the external reflux flow required to maintain constant IR and then adjusts the set point of the external reflux flow controller accordingly. This type of control provides a fast-responding, first-level improvement in stability by isolating the column from disturbances caused by changes in ambient conditions. There are multiple inputs to the control (the flow and temperatures which contribute to calculation of the internal reflux), but typically only one output—to the set point of the reflux flow controller. Moving up the hierarchy to the advanced supervisory control level, the overhead product composition can be further stabilized by controlling a key temperature in the upper part of the tower, since temperature (at constant pressure) is directly related to composition. And, since the tower pressure could vary (especially if another application is attempting to minimize pressure), the temperature that is being controlled should be corrected for pressure variations. This control adjusts the set point of the IRC to maintain a constant pressure-corrected temperature (PCT). Feed-forward action (for feed rate changes) and decoupling action (for changes in reboiler heat) can be added at this level. Further improvement in composition control can be achieved by developing a correlation for the product composition, using real-time process measurements (the PCT plus other variables such as reflux ratio, etc.), then using this correlation as the PV in an additional APC that adjusts the set point of the PCT. This type of correlation is known as an inferred property or soft sensor. Thus, the hierarchical approach results in a multiple-cascade design, an inferred property control adjusting a PCT, adjusting an IRC, adjusting the reflux flow. (A sketch of the internal reflux calculation follows the list below.) In general, APCs designed using hierarchical approaches consist of layers of increasingly complex strategies. Some of the important advantages of this approach are:
● Operators can understand the strategies; they appeal to human logic because they use a “systems” approach to problem solving, breaking a big problem down into smaller problems to be solved.
● The control structure is more suitable for solutions at a lower level in the control system; such solutions can often be implemented without the requirement for additional hardware and software.
● The controls “degrade gracefully”; when a problem prohibits a higher-level APC from being used, the lower-level controls can still be used and can capture much of the associated benefit.
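As promised above, here is a sketch of the internal reflux calculation. A commonly used form of the equation is IR = Lext × [1 + (cp/λ)(Tv − Tr)], where cp is the reflux heat capacity, λ the heat of vaporization, Tv the overhead vapor temperature, and Tr the external reflux temperature; the exact equation and the coefficient values below are assumptions that vary by service:

```python
def internal_reflux(l_ext, t_vapor, t_reflux, cp=0.65, latent=120.0):
    """Net internal reflux from the external reflux flow and the
    overhead vapor / reflux temperature difference; cp (heat capacity)
    and latent (heat of vaporization) are placeholder values."""
    return l_ext * (1.0 + (cp / latent) * (t_vapor - t_reflux))

def external_reflux_sp(ir_target, t_vapor, t_reflux, cp=0.65, latent=120.0):
    """Back-calculate the external reflux flow set point that holds the
    internal reflux constant as ambient conditions move the temperatures."""
    return ir_target / (1.0 + (cp / latent) * (t_vapor - t_reflux))
```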
How would we solve the preceding control problem using MPC? There are two approaches. The standard approach is to use the inferred property as a CV and the external reflux flow controller as the MV. In this case, then, how does the MPC deal with the other disturbances such as the reflux temperature, reboiler heat, and feed rate? These variables must
be included in the model matrix as additional independent variables. A step-response model must then be developed for the CV (the inferred property) as a function of each of the additional independent variables. This is the way most of the APC industry designs an MPC. The hierarchical approach would suggest something radically different. The CV is the same because the product composition is the variable that directly relates to profitability. However, in the hierarchical design, the MV is the PCT. The lower-level controls (the PCTC and IRC) are implemented at a lower level in the control hierarchy, typically in the DCS. Unfortunately, the industry-accepted approach to MPC design violates the principles of hierarchy. Rarely, if ever, are intermediate levels of APC employed below MPC. There is no hierarchy—just one huge, flat MPC controller on top of the basic controllers, moving all of them at the same time in seemingly magical fashion. Operators rarely understand them or what they are doing. And they do not degrade gracefully. Consequently, many fall into disuse.
34.12 Other problems with MPC
A nonhierarchical design is only one of the many problems with MPC. The limitations of MPC have been thoroughly exposed, though probably widely ignored.3 Here are a few other problems.
3. The emperor is surely wearing clothes! See the excellent article by Alan Hugo, “Limitations of Model-Predictive Controllers,” which appeared in the January 2000 issue of Hydrocarbon Processing, p. 83.
In many real control situations, even mild model mismatch introduces seriously inappropriate controller moves, leading to inherent instability as the controller tries at each execution to deal with the error between where it thinks it is going and where it really is. MPC is not well suited for processes with noncontinuous phases of operation, such as delayed cokers. The problem here is how to “model” the transitory behavior of key control variables during coke drum prewarming and switching. Unfortunately, every drum-switching operation is different. This means that both the “time to steady state” and ultimate CV gain are at best only approximations, leading to model mismatch during every single drum operation. No wonder they are so difficult to “tune” during these transitions. Correcting model mismatch requires retesting and remodeling, a form of very expensive maintenance. MPC controllers, particularly complex ones implemented on complex units with lots of interactions, require a lot of “babysitting”—constant attention from highly trained (and highly paid) control engineers. This is a luxury that few operating companies can afford. The licensors of MPC software will tell you that their algorithms “optimize” operation by operating at constraints
using the least costly combination of manipulated variable assets. That is certainly correct, mathematically; that is the way the LP or QP works. In practice, however, any actual “optimizing” is marginal at best. This is due to a couple of reasons. The first is the fact that, in most cases, one MV dominates the relationships of other MVs to a particular CV. For example, in a crude oil distillation tower, the sensitivity between the composition of a side-draw distillate product and its draw rate will be much larger than the sensitivity with any other MV. The controller will almost always move the distillate draw rate to control composition. Only if the draw rate becomes constrained will the controller adjust a pump-around flow or the draw rate of another distillate product to control composition, regardless of the relative “cost” of these MVs. The second reason is the fact that, in most cases, once the constraint “corners” of the CV/MV space are found, they tend not to change. The location of the corners is most often determined either by “discretionary” operator-entered limits (for example, the operator wants to limit the operating range of an MV) or by valve positions. In both situations, these are constraints that would have been pushed by any kind of APC, not just a model-predictive controller. So, the MPC has not “optimized” any more than would a simpler ARC or ASC. When analyzing the MPC controller models that result from plant testing, control engineers often encounter CV/MV relationships that appear inappropriate and that are usually dropped because the engineer does not want that particular MV moved to control that particular CV. Thus, the decoupling benefit that could have been achieved with simpler ASC is lost. If every single combination of variables and constraint limits has not been tested during controller commissioning (as is often the case), the controller behavior under these untested conditions is unknown and unpredictable. For even moderately sized controllers, the number of possible combinations becomes unreasonably large such that all combinations cannot be tested, even in simulation mode. Control engineers often drop models from the model matrix to ensure a reasonably stable solution (avoiding approach to a singular matrix in the LP solution). This is most common where controllers are implemented on fractionation columns with high-purity products and where the controller is expected to meet product purity specifications on both top and bottom products. The model matrix then reduces to one that is almost completely decoupled. In this case, single-loop controllers, rather than an MPC, would be a clearly superior solution. What about some of the other MPC selling points? One that is often mentioned is that, unlike traditional APC, MPC eliminates the need for custom programming. This is simply not true. Except for very simple MPC solutions, custom code is almost always required—for example, to calculate a variable used by the controller, to estimate a flow from a valve position, and so on.
The benefits of standardization are often touted. Using the same solution tool across the whole organization for all control problems will reduce training and maintenance costs. But this is like saying that I can use my new convertible to haul dirt, even though I know that using my old battered pickup would be much more appropriate. Such an approach ignores the nature of real control problems in refineries and chemical plants and relegates control engineers to pointers and clickers.
34.13 Where are we today?
Perhaps the previous discussion paints an overly pessimistic picture of MPC as it exists today. Certainly many companies, particularly large refining companies, have recognized the potential return of MPC, have invested heavily in MPC, and maintain large, highly trained staffs to ensure that the MPCs function properly and provide the performance that justified their investment. But, on the other hand, the managers of many other companies have been seduced by the popular myth that MPC is easy to implement and maintain—a misconception fostered at the highest management levels by those most likely to benefit from the proliferation of various MPC software packages. So, what can companies do today to improve the utilization and effectiveness of APC in general and MPC in particular? They can do several things, all focused primarily on the way we analyze operating problems and design APC solutions to solve the problems and thereby improve productivity and profitability. Here are some ideas. As mentioned earlier, the main goal of APC is to isolate operating units from process disturbances. What does this mean when we are designing an APC solution? First, identify the disturbance and determine its breadth of influence. Does the disturbance affect the whole unit? If so, how? For example, the most common unit-encompassing disturbance is a feed-rate change. But how is the disturbance propagated? In many process units, this disturbance propagates in one direction only (upstream to downstream), with no other complicating effects (such as those caused by recycle or interaction between upstream and downstream unit operations). In this case, a fairly straightforward APC solution involves relatively simple ARCs for inventory and load control—adjusting inter-unit flows with feed-forward for inventory control, and adjusting load-related variables such as distillation column reflux with ratio controls or with feed-forward action in the case of cascades (for example, controlling a PCT by adjusting an IRC). No MVC is required here to achieve significantly improved unit stability. If the effect of the disturbance can be isolated, design the APC for that isolated part of the unit and ignore potential second-order effects on other parts of the unit. A good example is the IR control discussed earlier. The tower can be easily isolated from the ambient conditions disturbance by implementing
internal reflux control. The internal reflux can then be utilized as an MV in a higher-level control strategy (for example, to control a pressure-compensated tray temperature as an indicator of product composition) or an inferred property. What about unmeasured disturbances, such as feed composition, where the control strategy must react on feedback alone? As mentioned earlier, we have had a great deal of success with “intelligent” feedback-control algorithms that are much more effective than simple PID. For example, our Smart PID algorithm includes special logic that first determines, at each control algorithm execution, whether or not integral action is currently advised based on the PV’s recent trajectory toward or away from set point. Second, it determines the magnitude of the integral move based on similar logic. This greatly improves the transient controller response, especially in loops with significant dead time and/or lag. Another successful approach is to use model-based, adaptive control algorithms, such as those incorporated in products like Brainwave. Here are some additional suggestions: Using a hierarchical approach suggests implementing the lower-level APC strategies as low in the control system as possible. Most modern DCSs will support a good bit of ARC and ASC at the DCS level. Some DCSs, such as Honeywell Experion (and earlier TDC systems), have dedicated, DCS-level devices designed specifically for implementation of robust APC applications. We are currently implementing very effective first-level APC in such diverse DCSs as Foxboro, Yokogawa, DeltaV, and Honeywell. When available, and assuming the analyses are reliable, use lab data as much as possible in the design of APCs. Develop inferred property correlations for control of key product properties and update the model correlations with the lab data. Track APC performance by calculating, historizing, and displaying a variable that indicates quality of control. We often use the standard deviation, in engineering units, of the error between SP and PV of the key APC (for example, an inferred property). Degradation in the long-term trend of this variable suggests that a careful process analysis is needed to identify the cause of the degraded performance and the corrective action needed to return the control to its original performance level.
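A minimal sketch of that performance-tracking suggestion, computing a rolling standard deviation of the SP-PV error in engineering units, follows; the window length is an assumption, and a plant historian would normally perform this calculation:

```python
import numpy as np

def control_error_std(sp, pv, window=480):
    """Rolling standard deviation, in engineering units, of the SP-PV
    error of a key APC. A sustained rise in the long-term trend of the
    returned series flags degraded control performance. The window
    length (in samples) is an illustrative assumption."""
    err = np.asarray(sp, dtype=float) - np.asarray(pv, dtype=float)
    n = len(err) - window + 1
    return np.array([err[i:i + window].std() for i in range(max(n, 0))])
```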
34.14 Recommendations for using MPC
If MPC is being considered for solution of a control problem, apply it intelligently and hierarchically. Intelligent application of MPC involves asking important questions, such as:
● Is the control problem to be solved truly multivariable? Can (and should) several MVs be adjusted to control one CV? If not, don’t use MPC.
● Are significant dead time and lag present such that the model-based, predictive power of MPC would provide superior control performance compared to simple feed-forward action?
● Look at the model matrix. Are there significant interactions between many dependent variables and many independent variables, or are there just isolated “islands” of interactions where each island represents a fairly simple control problem that could just as easily be solved with simpler technology?
● How big does the controller really have to be? Could the problem be effectively solved with a number of small controllers without sacrificing the benefits provided by a unit-wide controller?
Applying MPC hierarchically means doing the following:
● Handle isolated disturbance variables with lower-level ARC. For example, stabilize and control a fired heater outlet temperature with ARC, not MPC. The ARC can easily handle disturbance variables such as feed rate, inlet temperature, and fuel gas pressure.
● Use the lower-level ARCs, such as the fired heater outlet temperature just mentioned, as MVs in the MPC controller.
● Use intensive variables, when possible, as MVs in the controller. For example, use a distillate product yield, rather than flow rate, as the MV in a controller designed to control the quality of that product; this eliminates unit charge rate as a disturbance variable.
34.15 What’s in store for the next 40 years?
A cover story that appeared in Hydrocarbon Processing about 20 years ago included an interview with the top automation and APC guru of a major U.S. refining company. The main point proffered by the interviewee was that the relentless advances in automation technology, both hardware- and software-wise (including APC), would lead one day to the “virtual” control room in which operating processes would no longer require constant human monitoring—no process operators required. I’m not sure how this article was received elsewhere, but it got a lot of chuckles in our office. Process control, whether continuous or discrete, involves events that are related by cause and effect. For example, a change in temperature (the cause event) needs a change in a valve position (the effect event) to maintain control of the temperature. Automation elevates the relationships of the cause and effect events by eliminating the requirement for a human to connect the two. With the huge advances in automation technology accomplished in the period of the 1970s and 1980s, one
might imagine a situation in which incredibly sophisticated automation solutions would eventually eliminate the need for direct human involvement. That’s not what this author sees for the next 40 years. Why not? Three reasons. First, the more sophisticated automation solutions require process models. For example, MPC requires accurate MV/CV or DV/CV response models. Real-time rigorous optimization (RETRO) requires accurate steady-state process models. Despite the complexity of the underlying mathematics and the depth of process data analysis involved in development of these models, the fact is that these will always remain just “models.” When an optimizer runs an optimization solution for a process unit, guess what results? The optimizer spits out an optimal solution for the model, not the process unit! Anybody who believes that the optimizer is really optimizing the process unit is either quite naïve or has been seduced by the same type of hype that accompanied the advent of MPCs and now seems to pervade the world of RETRO. When well-designed and -engineered (and -monitored), RETRO can move operations in a more profitable direction and thereby improve profitability in a major way. But the point is that models will never be accurate enough to eliminate the need for human monitoring and intervention. The second important reason involves disturbances. Remember that there are two types of process disturbances, measured and unmeasured. Despite huge advances in process measurement technology (analyzers, etc.), we will never reach the point (at least in the next 40 years) when all disturbances can be measured. Therefore, there will always be target offset (deviation from set point) due to unmeasured disturbances and model mismatch. Furthermore, it is an absolute certainty that some unmeasured disturbances will be of such severity as to lead to process “upsets,” conditions that require human analysis and intervention to prevent shutdowns, unsafe operation, equipment damage, and so on. Operators may get bored in the future, but they’ll still be around for the next 40 years to handle these upset conditions. The third important reason is the “weakest link” argument. Control system hardware (primarily field elements) is notoriously prone to failure. No automation system can continue to provide full functionality, in real time, when hardware fails; human intervention will be required to maintain stable plant operation until the problem is remedied. Indeed, the virtual control room is still more than 40 years away. Process automation will continue to improve with advances in measurement technology, equipment reliability, and modeling sophistication, but process operators, and capable process and process-control engineers, will still maintain and oversee its application. APC will remain an “imperfect” technology that requires for its successful application experienced and quality process engineering analysis.
The most successful APC programs of the next 40 years will have the following characteristics:
● The APC solution will be designed to solve the operating problem being dealt with and will utilize the appropriate control technology; MPC will be employed only when required.
● The total APC solution will utilize the principle of “hierarchy”; lower-level, simpler solutions will solve lower-level problems, with ascending, higher-level, more complex solutions achieving higher-level control and optimization objectives.
● An intermediate level of model-based APC technology, somewhere between simple PID and MPC (Brainwave, for example), will receive greater utilization at a lower cost (in terms of both initial cost and maintenance cost) but will provide an effective means of dealing with measured and unmeasured disturbances.
● The whole subfield of APC known as transition management (e.g., handling crude switches) will become much more dominant and important as an adjunct to continuous process control.
● APC solutions will make greater use of inferred properties, and these will be based on the physics and chemistry of the process (not “artificial intelligence,” like neural nets).
● The role of operating company, in-house control engineers will evolve from a purely technical position to more of a program-management role. Operating companies will rely more on APC companies to do the “grunt work,” allowing the in-house engineers to manage the projects and to monitor the performance of the control system.
APC technology has endured and survived its growing pains; it has reached a relatively stable state of maturity, and it has shaken out its weaker components and proponents. Those of us who have survived this process are wiser, humbler, and more realistic about how to move forward. Our customers and employers will be best served by a pragmatic, process engineering-based approach to APC design that applies the specific control technology most appropriate to achieve the desired control and operating objectives at lowest cost and with the greatest probability of long-term durability and maintainability.
Chapter 35
Batch Process Control W. H. Boyes
35.1 Introduction
“Batch manufacturing has been around for a long time,” says Lynn Craig, one of the founders of WBF, formerly World Batch Forum, and one of the technical chairs of ISA88, the batch manufacturing standard. “Most societies since prehistoric times have found a way to make beer. That’s a batch process. It’s one of the oldest batch processes. Food processing and the making of poultices and simples (early pharmaceutical manufacturing) are so old that there is no record of who did it first.” “The process consisted of putting the right stuff in the pot and then keeping the pot just warm enough, without letting it get too warm. Without control, beer can happen, but don’t count on it. Ten years ago or a little more, batch was a process that, generally speaking, didn’t have a whole lot of instrumentation and control, and all of the procedures were done manually. It just didn’t fit into the high-tech world.”1 In many cases, the way food is manufactured and the way pharmaceuticals and nutraceuticals are made differs hardly at all from the way they were made 100 years ago. In fact, this is true of many processes in many different industries. What is different since the publication of the ISA S88 batch standard is that there’s now a language and a way of describing batch processes so that they can be repeated precisely and reliably, anywhere in the world. The S88 model describes any process (including continuous processes) in terms of levels or blocks (see Figure 35.1). This is similar to the ISO’s Computer Integrated Manufacturing (CIM) standard, which uses a set of building blocks that conceptually allow any process to be clearly elaborated. The highest three levels—enterprise, site, and area—aren’t, strictly speaking, controlled by the batch standard but are included to explain how the batch standard’s language can interface with the business systems of the area,
1. Conversation with the author at WBF North America Conference, 2006.
Figure 35.1 Comparing standard models. ISA’s S88.01 model (right) is similar to the ISO’s CIM model (left). CIM levels: Enterprise, Facility, Section, Cell, Station, Equipment; S88.01 levels: Enterprise, Site, Area, Process Cell, Unit, Equipment Module, Control Module.
plant site, and business enterprise as a whole. The enterprise is a common term for a complete business entity; a site is whatever the enterprise defines it to be, commonly a place where production occurs. The second edition of Tom Fisher’s seminal book on the batch standard, Batch Control Systems, which was completely rewritten by William Hawkins after Fisher’s death, states that, “Areas are really political subdivisions of a plant, whose borders are subject to the whims of management … and so it (the area) joins Site and Enterprise as parts of the physical model that are defined by management, and not by control engineers.” The next two levels, process cell and unit, are the building blocks of the manufacturing process. Process cell is used in a way intentionally similar to the widely used discrete manufacturing cell concept. One or more units are contained in a process cell. A unit is a collection or set of controlled equipment, such as a reactor vessel and the ancillary equipment necessary to operate it. Within the unit are the equipment module and the control module. The equipment module is the border around a minor group of equipment with a process function. An equipment
Figure 35.2 Getting physical: S88 physical model. The hierarchy runs Enterprise, Site, Area, Process Cell, Unit, Equipment Module, Control Module; each level may contain instances of the level below, and equipment and control modules may contain subsidiary equipment and control modules.
module may contain control module(s) and even subsidiary equipment modules. See Figure 35.2. The control module contains the equipment and systems that perform the actual control of the process.
Since the introduction of the S88 batch standard, every major supplier of automation equipment and software has introduced hardware and software designed to work in accordance with the standard. Many suppliers now build the software required to implement the S88 standard directly into their field controllers. Many of those suppliers are corporate members of WBF, formerly known as World Batch Forum. WBF was founded as a way to promulgate the use of the S88 standards worldwide. Currently it calls itself the Organization for Production Technology, since the S88 and S95 standards are in extremely common use in industry.
Further Reading
Hawkins, William M., and Thomas G. Fisher, Batch Control Systems, ISA Press, 2006.
Parshall, Jim, and L. B. Lamb, Applying S88: Batch Control from a User’s Perspective, ISA Press, 2000.
WBF Body of Knowledge, www.wbf.org.
Chapter 36
Applying Control Valves1 B. G. Liptak; edited by W. H. Boyes
1. This section originally appeared in different form in Control magazine and is reprinted by permission of Putman Media Inc.
36.1 Introduction
Control valves modulate the flows of process or heat-transfer fluids and stabilize variations in material and heat balance of industrial processes. They manipulate these flows by changing their openings and modifying the energy needed for the flow to pass through them. As a control valve closes, the pressure differential required to pass the same flow increases, and the flow is reduced. Figure 36.1 illustrates the pump curve of a constant-speed pump and the system curve of the process that the pump serves. The elevation (static) head of the process is constant, whereas the friction loss increases with flow. The pressure generated by the pump is the sum of the system curves (friction and static) and the pressure differential (ΔP) required by the control valve. As the valve throttles, the pump travels on its curve while delivering the required valve pressure drop. The pumping energy invested to overcome the valve differential is wasted energy and is the difference between the pressure required to “push” (transport) the fluid into the process (system curve, at bottom left in Figure 36.1) and the pump curve of the constant-speed pump. Pumps are selected to meet the maximum possible flow demand of the process, so they tend to be oversized during normal operation. Consequently, using control valves to manipulate the flow generated by constant-speed pumps wastes energy and increases plant-operation costs. Therefore, when designing a control system, a process-control engineer must first decide whether a control valve or a variable-speed pump should be used to throttle the flow. Variable-speed pumps reduce flow by reducing pump speed. So, instead of burning energy unnecessarily introduced by the pump head, that energy isn’t introduced in the first place. This lessens operating costs but increases capital investment, because variable-speed pumps usually cost more than control valves.
Figure 36.1 Pump curve vs. process. Courtesy of Putman Media Inc. The figure plots pump pressure and system pressure (pipe friction plus static head) against flow in GPM; the valve pressure drop ΔP is the gap between the pump curve and the system curve, ranging from ΔPmax at Fmin to ΔPmin at Fmax.
When several users are supplied by the same variable-speed pump, its speed can be automatically adjusted by a valve-position controller (VPC), which detects the opening of the most-open user valve (MOV). The MOV isn’t allowed to open beyond 80 to 90 percent because when the set point of the VPC is reached, this integral-only controller starts increasing the pump speed. This increases the available pressure drop for all the valves, which in turn reduces their openings.
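A minimal sketch of that valve-position controller logic follows (integral-only action on the most-open valve; the set point and gain values are illustrative assumptions):

```python
def vpc_update(speed_pct, valve_openings_pct, sp_pct=85.0, ki=0.02):
    """One execution of an integral-only valve-position controller:
    watch the most-open user valve (MOV) and trim pump speed so that
    valve settles near, but not beyond, roughly 80-90% open. Set point
    and gain are illustrative assumptions."""
    mov = max(valve_openings_pct)        # opening of the most-open valve
    speed_pct += ki * (mov - sp_pct)     # more speed if the MOV is too open;
                                         # less speed (saving energy) otherwise
    return min(max(speed_pct, 0.0), 100.0)
```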
36.2 Valve types and characteristics
If the cost/benefit analysis comparing constant- and variable-speed pumping systems favors using throttling valves, the next task is to select the right valve type for the application. Figure 36.2 shows that various valve designs have different pressure and temperature ratings, costs, capacities (Cd = Cv/d²), and so on. Once the valve type is selected, the next task is to select the valve characteristics and size the valve.
Figure 36.2 Picking the right control valve. Courtesy of Putman Media Inc. The chart rates sixteen control valve types (conventional and characterized ball; conventional and high-performance butterfly; digital; single-ported, double-ported, angle, and eccentric-disc globe; pinch; conventional and characterized plug; Saunders; V-insert and positioned-disc sliding gate; and dynamically balanced special designs) against features and applications: maximum ANSI class pressure rating, maximum capacity (Cd), inherent characteristics, corrosive service, cost relative to a single-port globe, cryogenic service, high pressure drop (over 200 psi), high temperature (over 500°F), ANSI leakage class, abrasive liquid service, cavitation resistance, dirty service, flashing applications, slurry and fibrous service, viscous service, and abrasive/erosive or dirty gas and vapor service. Abbreviations: A = available; C = all-ceramic design available; E = excellent; F = fair; G = good; H = high; L = low; M = medium; NA = not available; P = poor; S = special designs only; Y = yes.
These characteristics determine the relationship between valve stroke (control signal received) and the flow through the valve; size is determined by maximum flow required. After startup, if the control loop tends to oscillate at low flows but is sluggish at high flows, users should consider
switching the valve trim characteristics from linear to equal-percentage trim. Conversely, if oscillation is encountered at high and sluggishness at low flows, the equal-percentage trim should be replaced with a linear one. Changing the valve characteristics can also be done (sometimes more
easily) by characterizing the control signal leading to the actuator rather than by replacing the valve trim. Once the valve type is selected, the next task is to choose the valve characteristics and size the valve. Three of the most common valve characteristics are described in Figure 36.3, and the following tabulation lists the recommended selections of valve characteristics for some of the most common process applications. The characteristic curves were drawn assuming that the pressure drop through the valve remains constant while the valve throttles. The three valve characteristics differ from each other in their gain characteristics. The gain of a control valve is the ratio between the change (Δ%) in the control signal that the valve receives and the resulting change (Δ%) in the flow through the valve. Therefore, the valve gain (Gv) can be expressed as GPM/% stroke. The gain (Gv = GPM/%) of a linear valve is constant, the gain of an equal percentage (=%) valve is increasing at a constant slope, and the gain of a quick opening (QO) valve is dropping as the valve opens. The valve characteristics are called linear (straight line in Figure 36.3) if the gain is constant and a 1 percent change in the valve lift (control signal) results in the same amount (GPM) of change in the flow through the valve, no matter how open the valve is. This change is the slope of the straight line in Figure 36.3, and it can be expressed as a percentage of maximum flow per a 1 percent change in lift, or as a flow quantity of, say, 5 GPM per percent lift, no matter how open the valve is. If a 1 percent change in the valve stroke results in the same percentage change (not quantity, but percent of the flow that is occurring!), the valve characteristic is called equal percentage (=%). If the valve characteristic is =%,
the amount of change in flow is a small quantity when the valve is nearly closed, and it becomes larger and larger as the valve opens. As shown in Figure 36.3, in case of quick opening (QO) valves, the opposite is the case; at the beginning of the stroke, the valve gain is high (the flow increases at a fast slope) and towards full opening, the slope is small. The recommended choice of the valve characteristic is a function of the application. For common applications, the recommendations are tabulated at the bottom of Figure 36.2. It should be noted that Figure 36.3’s valve characteristics assume that the valve pressure drop is constant. Unfortunately, in most applications it isn’t constant but drops off as the load (flow) increases. This is why the valve characteristics recommended in Figure 36.2 are different if the ratio of maximum to minimum pressure differential across the valve is above or below 2:1. One approach to characterizing an analog control signal is to insert either a divider or a multiplier into the signal line. By adjusting the zero and span, a complete family of curves can be obtained. A divider is used to convert an air-to-open, equal-percentage valve into a linear one or an air-to-close linear valve into an equal-percentage one. A multiplier is used to convert an air-to-open linear valve into an equal-percentage one or an air-to-close equal-percentage valve into a linear one.
Figure 36.3 Valve characteristics. Courtesy of Putman Media Inc. Percent flow (Cv or Kv) versus percent lift or stroke, at constant valve pressure drop, for quick-opening, linear, and equal-percentage (and butterfly) trims.
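The three inherent characteristics can be written compactly in their common textbook forms; the sketch below (with an assumed 50:1 rangeability for the equal-percentage trim and a square-root approximation for quick-opening) returns the fractional flow at constant pressure drop:

```python
import math

def inherent_flow_fraction(lift, trim="equal_percentage", rangeability=50.0):
    """Inherent (constant pressure drop) characteristics in their common
    textbook forms; lift and the returned flow are 0..1 fractions. The
    square-root quick-opening form and 50:1 rangeability are assumptions."""
    if trim == "linear":
        return lift                            # constant gain
    if trim == "equal_percentage":             # =%: equal % flow change per % lift
        return rangeability ** (lift - 1.0)
    if trim == "quick_opening":                # high gain near the seat
        return math.sqrt(lift)
    raise ValueError("unknown trim: " + trim)
```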
36.3 Distortion of valve characteristics
Figure 36.4 shows the effect of the distortion coefficient (Dc, defined in Figure 36.4) on the characteristics of an =% valve. As the ratio of the minimum to maximum pressure
drop increases, the Dc coefficient drops and the =% characteristic of the valve shifts toward linear. Similarly, under these same conditions, the characteristic of a linear valve would shift toward quick opening (QO, not shown in the figure). In addition, as the Dc coefficient drops, the controllable minimum flow increases and therefore the “rangeability” of the valve also drops.
Figure 36.4 Distortion coefficient. Courtesy of Putman Media Inc. Installed flow-versus-lift curves for an equal-percentage valve at distortion coefficients Dc of 1, 0.5, 0.2, and 0.1; as Dc drops, the curve shifts away from the inherent =% shape toward linear.
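A common way to quantify this distortion, assuming Dc is the ratio of the valve pressure drop at maximum flow to its drop at (near-zero) minimum flow and that system losses rise with the square of flow, is the installed-characteristic relation sketched below; this is one standard formulation, offered for illustration rather than as the exact definition behind Figure 36.4:

```python
def installed_flow_fraction(f_inherent, dc):
    """Installed (in-service) flow fraction for a valve that shares
    pressure drop with the piping. f_inherent is the 0..1 inherent
    characteristic value; dc is the distortion coefficient. At dc = 1
    the inherent shape is preserved; as dc drops toward 0, an =% curve
    flattens toward linear and then quick-opening behavior."""
    return f_inherent / (dc + (1.0 - dc) * f_inherent ** 2) ** 0.5
```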
36.4 Rangeability
The conventional definition of rangeability is the ratio between the maximum and minimum “controllable” flows through the valve. Minimum controllable flow (Fmin) is not the leakage flow (which occurs when the valve is closed) but the minimum flow that is still controllable and can be changed up or down as the valve is throttled. Using this definition, manufacturers usually claim a 50:1 rangeability for equal-percentage valves, 33:1 for linear valves, and about 20:1 for quick-opening valves. These claims suggest that the flow through these valves can be controlled down to 2, 3, and 5 percent of maximum. However, these figures are often exaggerated. In addition, as shown in the figure, the minimum controllable flow (Fmin) rises as the distortion coefficient (Dc) drops. Therefore, at a Dc of 0.1, the 50:1 rangeability of an equal-percentage valve drops to about 10:1. Consequently, the rangeability should be defined as the flow range over which the actual installed valve gain stays within ±25 percent of the theoretical (inherent) valve gain (in the units of GPM per % stroke). To illustrate the importance of this limitation, Figure 36.3 shows that the actual gain of an equal-percentage valve starts to deviate from its theoretical gain by more than 25 percent when the flow reaches about 65 percent. Therefore, in determining the rangeability of such a valve, the maximum allowable flow should be 65 percent. Actually, if one uses this definition, the rangeability of an =% valve is seldom more than 10:1. In such cases, the rangeability of a linear valve can be greater than that of an =% valve. Also, the rangeability of some rotary valves can be higher because their clearance flow tends to be lower and their body losses near the wide-open position also tend to be lower than those of other valve designs. To stay within ±25 percent of the theoretical valve’s gain, the maximum flow should not exceed 60 percent of maximum in a linear valve or 70 percent in an =% valve. In terms of valve lift, these flow limits correspond to 85 percent of maximum lift for =% and 70 percent for linear valves.
36.5 Loop tuning
In the search for the most appropriate control valve, gain and stability are as crucial as any other selection characteristics. The gain of any device is its output divided by its input. The characteristic range and gain of control valves are interrelated. The gain of a linear valve is constant. This gain (Gv) is the maximum flow divided by the valve stroke in percentage (Fmax/100 percent). Most control loops are tuned for quarter-amplitude damping. This amount of damping (reduction in the amplitude of succeeding peaks of the oscillation of the controlled variable) is obtained by adjusting the controller gain (Gc = 100/%PB) until the total loop gain (the product of the gains of all the control loop components) reaches 0.5 (see Figure 36.5).
Figure 36.5 Well-tuned loops. Courtesy of Putman Media Inc. Block diagram of a loop with set point r, error e, controller output m, load u, controlled variable c, and feedback b, passing through controller gain Gc, valve gain Gv, process gain Gp, and sensor gain Gs; for stable control, Gc · Gv · Gp · Gs = 0.5.
The gains of a linear controller (Gc = plain proportional) and a linear transmitter (if it is a temperature transmitter, its gain is Gs = 100%/°F) are both constant. Therefore, if the process gain (Gp = °F/GPM) is also constant, a linear valve is needed to maintain the total loop gain at 0.5 (Gv = 0.5/GcGpGs = constant, meaning linear). If the transmitter is nonlinear, such as in the case of a d/p cell (sensor gain increases with flow), one can correct for that nonlinearity by using a nonlinear valve whose gain drops as flow increases (quick opening). In case of heat transfer over a fixed area, the efficiency of heat transfer (process gain Gp) drops as the amount of heat to be transferred rises. To compensate for this nonlinearity (drop in process gain Gp), the valve gain (Gv) must increase with load. Therefore, an equal-percentage valve should be selected for all heat-transfer temperature control applications. In case of flow control, one effective way of keeping the valve gain (Gv) perfectly constant is to replace the control valve with a linear cascade slave flow control loop. The limitation of this cascade configuration (in addition to its higher cost) is that if the controlled process is faster than the flow loop, cycling will occur. This is because the slave in any cascade system must be faster than its master. The only way
to overcome this cycling is to slow (detune) the master by lowering its gain (increasing its proportional band), which in turn degrades its control quality. Therefore, this approach should only be considered on slow or secondary temperature control loops.
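To make the 0.5 total-loop-gain bookkeeping concrete, here is a small worked example with illustrative numbers (not values from the text):

```python
# Quarter-amplitude tuning targets a total loop gain of 0.5:
#   Gc * Gv * Gp * Gs = 0.5
# With illustrative (made-up) numbers:
gc = 2.0      # controller gain (100/%PB, i.e., PB = 50%)
gp = 0.05     # process gain, degF per GPM
gs = 0.5      # transmitter gain, % per degF (200 degF span)
gv_required = 0.5 / (gc * gp * gs)    # valve gain, GPM per % stroke
print(gv_required)   # 10.0 -- and if Gp falls at high load (heat transfer),
                     # Gv must rise with load, pointing to =% trim
```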
36.6 Positioning positioners

A valve positioner is a high-gain (0.5 to 10 percent proportional band), sensitive, proportional-only valve-stroke position controller. Its set point is the control signal from the controller. The main purpose of having a positioner is to guarantee that the valve does in fact move to the position that corresponds to the value of the controller output. The addition of a positioner can correct for such maintenance-related effects as variations in packing friction due to dirt buildup, corrosion, or lack of lubrication; variations in the dynamic forces of the process; or nonlinearity in the valve actuator. In addition, the positioner can allow for split-ranging the controller signal between valves, or can increase the actuator speed or actuator thrust by increasing the pressure and/or volume of the actuator air signal. It can also modify the valve characteristics by the use of cams or function generators.

A positioner will improve performance on most slow loops, such as the control of analytical properties, temperature, liquid level, blending, and large-volume gas flow. A controlled process can be considered slow if its period of oscillation is three or more times the period of oscillation of the positioned valve. Positioners are also useful for overcoming the dead band of the valve, which can be caused by valve-stem friction. The result of this friction is that whenever the direction of the control signal is reversed, the stem remains in its last position until the dead band is exceeded. Positioners will eliminate this limit cycle by closing a loop around the valve actuator. Integrating processes, such as liquid level, volume (as in digital blending), weight (not weight-rate), and gas-pressure control loops, are prone to limit cycling and will usually benefit from the use of positioners.

In the case of fast loops (fast flow, liquid pressure, small-volume gas pressure), positioners are likely to degrade loop response and cause limit cycling, because the positioner (a cascade slave) is not faster than the speed at which its set point (the control signal) can change. A controlled process is considered fast if its period of oscillation is less than three times that of the positioned valve.

Split-ranging of control valves does not necessarily require the use of positioners, because one can also split-range the valves through the use of different spring ranges in the valve actuators. If the need is only to increase the speed or the thrust of the actuator, it is sufficient to install an air volume booster or a pressure amplifier relay instead of a positioner. If the goal is to modify the valve characteristics on fast processes, this should not be done by the use of positioners but instead by installing dividing or multiplying relays in the controller output.
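The slow/fast distinction above reduces to a simple ratio test. A minimal sketch (Python; the function name and the example periods are invented for illustration):

    def positioner_advisable(process_period_s, valve_period_s):
        # "slow" process: its period is at least three times that of the positioned valve
        return process_period_s >= 3.0 * valve_period_s

    print(positioner_advisable(60.0, 5.0))   # slow temperature loop -> True
    print(positioner_advisable(2.0, 1.5))    # fast flow loop -> False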
36.7 Smarter smart valves

Much improvement has occurred, and more is expected, in the design of intelligent and self-diagnosing positioners and control valves. The detection of, and correction for, wearing of the trim, hysteresis caused by packing friction, air leakage in the actuator, and changes in valve characteristics can all be automated. If the proper intelligence is provided, the valve can compare its own behavior with its past performance and, when the same conditions result in different valve openings, it can conclude, for example, that its packing is not properly lubricated or that the valve port is getting plugged. In such cases, the valve can automatically request and schedule its own maintenance.

A traditional valve positioner serves only the purpose of keeping the valve at the opening that corresponds to the control signal. Digital positioners can also collect and analyze valve-position data, valve operating characteristics, and performance trends, and can enable diagnostics of the entire valve assembly. The control signals into smart positioners can be analog (4–20 mA) or digital (via bus systems). The advantages of digital positioners relative to their analog counterparts include increased accuracy (0.1 to 1 percent, versus 0.3 to 2 percent for analog), improved stability (about 0.1 percent compared to 0.175 percent), and wider range (up to 50:1 compared to 10:1). Another important feature of digital positioners is their ability to alter the inherent characteristics of the valve.

Smart valves should also be able to measure their own inlet, outlet, and vena contracta pressures, flowing temperature, valve opening (stem position), and actuator air pressure. Valve performance monitoring includes the detection of "zero" position and span of travel, and of actuator air pressure versus stem travel, and the ability to compare these against their values when the valve was new. Major deviations from the "desired" characteristic can be an indication of the valve stuffing box being too tight, a corroded valve stem, or a damaged actuator spring. Additional features offered by smart valves include the monitoring of packing box or bellows leakage by "sniffing" (using miniaturized chemical detectors), and checking seat leakage by measuring the generated sound frequency or by comparing the controller output signal at "low flow" with the output when the valve was new.
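A minimal sketch of the signature-comparison idea described above (Python; the data, threshold, and structure are assumptions for illustration, not any vendor's diagnostic interface):

    def signature_deviation(baseline, current):
        # mean absolute deviation in actuator pressure (bar) over common test positions
        keys = sorted(set(baseline) & set(current))
        if not keys:
            return 0.0
        return sum(abs(current[k] - baseline[k]) for k in keys) / len(keys)

    baseline = {25: 1.2, 50: 1.6, 75: 2.0}   # % open -> bar, recorded when the valve was new
    current  = {25: 1.5, 50: 1.9, 75: 2.4}   # the same stroke test repeated in service
    if signature_deviation(baseline, current) > 0.2:        # assumed alarm threshold
        print("signature shift: check packing friction, stem condition, or actuator spring")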
36.8 Valves serve as flowmeters

A control valve can also be viewed as a variable-area flowmeter. Therefore, smart valves can measure their own flow
by solving their valve-sizing equation. For example, in turbulent liquid flow applications, where the valve capacity coefficient is

Cv = (q/Fp) √(Gf/Δp)

the valve data can be used to calculate the flow. This is done by inserting the known values of Cv, Gf, Δp, and the piping geometry coefficient (Fp) into the applicable equation for Cv. Naturally, in order for the smart valves of the future to be able to accurately measure their own flow, they must be provided with sufficient intelligence to identify the applicable sizing equation for the particular process. (See Section 6.15 in Volume 2 of the Instrument Engineers' Handbook for valve sizing equations.)
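Rearranged for flow, the equation above gives q = Cv Fp √(Δp/Gf). The short sketch below (Python; the numerical values are illustrative assumptions, with q in GPM and Δp in psi as in the text) shows the calculation a smart valve would perform:

    from math import sqrt

    def flow_gpm(cv, fp, dp_psi, gf):
        # q = Cv * Fp * sqrt(dp / Gf), turbulent liquid service
        return cv * fp * sqrt(dp_psi / gf)

    # assumed inputs: Cv inferred from stroke and characteristic, Fp from piping
    # geometry, dp measured across the valve, Gf the liquid specific gravity
    print("%.1f GPM" % flow_gpm(cv=45.0, fp=0.98, dp_psi=9.0, gf=0.9))   # about 139 GPM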
Further Reading
Liptak, Bela G., Instrument Engineers' Handbook, Vol. 2, Chilton's and ISA Press, 2002.
Emerson Process Management, Control Valve Handbook, 4th ed., 2006; downloadable from www.controlglobal.com/whitepapers/2006/056.html.
Baumann, Hans D., Control Valve Primer: A User's Guide, 4th ed., ISA Press, 2009.
Part VI
Automation and Control Systems
Chapter 37
Design and Construction of Instruments
C. I. Daykin and W. H. Boyes
37.1 Introduction

The purpose of this chapter is to give an insight into the types of components and construction used in commercial instrumentation. To the designer, instrument technology depends on the availability of components and processes appropriate to the task. Being aware of what is possible is an important function of the designer, especially with the rapidity with which techniques now change. New materials such as ceramics and polymers have become increasingly important, as have semicustom (ASICs) and large-scale integrated (LSI and VLSI) circuits. The need for low-cost automatic manufacture is having the greatest impact on design techniques, demanding fewer components and suitable geometries. Low-volume instruments and one-offs are now commonly constructed in software, using "virtual instrumentation" graphical user interfaces such as Labview and Labtech, or "industrial strength" HMI programs such as Wonderware or Intellution. The distinction between computer and instrument has become blurred, with many instruments offering a wide range of facilities and great flexibility. Smart sensors, which interface directly to a computer, and Fieldbus interconnectivity have shifted the emphasis to software and mechanical aspects. Historical practice, convention, and the emergence of standards also contribute significantly to the subject. Standards, especially, benefit the designer and the user, and have made the task of the author and the reader somewhat simpler. Commercial instruments exist because there is a market, and so details of their design and construction can only be understood in terms of a combination of commercial as well as technical reasons. A short section describes these tradeoffs as a backdrop to the more technical information.

37.2 Instrument design

37.2.1 The Designer's Viewpoint

Many of the design features found in instruments are not obviously of direct benefit to the user. These can best be understood by also considering the designer's viewpoint. The instrument designer's task is to find the best compromise between cost and benefit to the users, especially when competition is fierce. For a typical medium-volume instrument, the purchase cost (the selling price) is distributed roughly as follows:

Design cost 20%
Manufacturing cost 30%
Selling cost 20%
Other overheads 20%
Profit 10%
Total 100%

Operating/maintenance cost may amount to 10 percent per annum. Benefits to the user can come from many features, for example:
1. Accuracy
2. Speed
3. Multipurpose
4. Flexibility
5. Reliability
6. Integrity
7. Maintainability
8. Convenience

Fashion, as well as function, is very important, since a smart, pleasing, and professional appearance is often essential when selling instruments on a commercial basis. For a particular product the unit cost can be reduced with higher-volume production, and greater sales can be achieved with a lower selling price. The latter is called its "market
elasticity.” Since the manufacturer’s objective is to maximize return on investment, the combination of selling price and volume which yields the greatest profit is chosen.
37.2.2 Marketing

The designer is generally subordinate to marketing considerations, and consequently these play a major role in determining the design of an instrument and the method of its manufacture. A project will only go ahead if the anticipated return on investment is sufficiently high and commensurate with the perceived level of risk. It is interesting to note that design accounts for a significant proportion of the total costs and, by its nature, involves a high degree of risk. With the rapid developments in technology, product lifetimes are being reduced to less than three years, and market elasticity is difficult to judge. In some markets, especially in test and measurement systems, product lifetime has been reduced to one single production run. In this way design is becoming more of a process of evolution, continuously responding to changes in market conditions. Initially, therefore, the designer will tend to err on the cautious side and make cost reductions as volumes increase. The anticipated volume is a major consideration and calls for widely differing techniques in manufacture, depending on whether it is a low-, medium-, or high-volume product.
37.2.3 Special Instruments

Most instrumentation users configure systems from low-cost standard components with simple interfaces. Occasionally the need arises for a special component or system. It is preferable to modify a standard component wherever possible, since complete redesigns often take longer than anticipated, with an uncertain outcome. Special systems also have to be tested and understood in order to achieve the necessary level of confidence. Maintenance adds to the extra cost, with the need for documentation and test equipment making specials very expensive. This principle extends to software as well as hardware.
37.3 Elements of construction

37.3.1 Electronic Components and Printed Circuits

Electronic circuitry now forms the basis of most modern measuring instruments. A wide range of electronic components is now available, from the simple resistor to complete data-acquisition subsystems. Table 37.1 lists some of the more commonly used types. Computer-aided design makes it possible to design complete systems on silicon using standard cells or by providing
the interconnection pattern for a standard chip which has an array of basic components. This offers reductions in size and cost and improved design security.

The most common method of mounting and interconnecting electronic components was at one time the double-sided, through-hole plated fiberglass printed circuit board (PCB) (Figure 37.1(a)). Component leads or pins are pushed through holes and soldered to tinned copper pads (Figure 37.1(b)). This secures the component and provides the connections. The solder joint is thus the most commonly used component and probably the most troublesome. In the past eight years, new techniques for surface-mount assembly have made through-hole circuit boards obsolete for anything but the lowest-volume production.

Tinning the copper pads with solder stops corrosion and makes soldering easier. Fluxes reduce oxidation and surface tension, but a temperature-controlled soldering iron is indispensable. Large-volume soldering can be done by a wave-soldering machine, where the circuit board is passed over a standing wave of molten solder. Components have to be on one side only, although this is also usually the case with manual soldering.

The often complicated routing of connections between components is made easier by having two layers of printed "tracks," one on each surface, permitting cross-overs. Connections between the top and bottom conductor layers are provided by plated-through "via" holes. It is generally considered bad practice to use the holes in which components are mounted as via holes, because of the possibility of damaging the connection when components are replaced. The through-hole plating (see Figure 37.1(c)) provides extra support for small pads, reducing the risk of them peeling off during soldering. It does, however, make component removal more difficult, often requiring the destruction of the component so that its leads can be removed individually. The more expensive components are therefore generally provided with sockets, which also makes testing and servicing much simpler.

The PCB is completed by the addition of a solder mask and a printed (silk-screened) component identification layer. The solder mask prevents solder bridges between adjacent tracks and pads, especially when using an automatic soldering technique such as wave soldering. The component identification layer helps assembly, testing, and servicing.

For very simple low-density circuits a single-sided PCB is often used, since manufacturing cost is lower. The pads and tracks must be larger, however, since without through-hole plating they tend to lift off more easily when soldering. For very high-density circuits, especially digital, multilayer PCBs are used, as many as nine layers of printed circuits being laminated together with through-hole plating providing the interconnections.

Most electronic components have evolved to be suitable for this type of construction, along with machines for automatic handling and insertion. The humble resistor is an interesting example; this was originally designed for wiring
TABLE 37.1 Electronic components
(The Appearance column of the printed table, which showed typical component outlines and symbols, cannot be reproduced here.)

Component            Main types            Range                              Characteristics
Resistors            Metal oxide           10 Ω–100 MΩ                        1%, general purpose
                     Metal film            0.1 Ω–100 MΩ                       0.001%, low drift
                     Wire wound            0.01 Ω–1 MΩ                        0.01%, high power
Capacitors           Air dielectric        0.01 pF–100 pF                     0.01%, high stability
                     Ceramic               1 pF–10 nF                         5%, small size
                     Polymer               1 pF–10 nF                         1%, general purpose
                     Electrolytic          0.1 μF–1 F                         10%, small size
Inductors            Cylindrical core      0.1 μH–10 mH                       10%, general purpose
                     Pot core              1 μH–1 H                           0.1%, high stability
                     Toroidal core         100 μH–100 H                       20%, high values; 10⁻⁷ high-accuracy ratios
Transformers         Cylindrical core      RF, IF types
                     Pot core              0.1 μH–1 mH                        0.1%, mutual inductor
                     Toroidal core         0.1 H–10 H                         20%, high inductance
Diodes               PN junction           1 pA (off)–10³ A (on)              wide range; high power, etc.
Transistors          Bipolar               10⁻¹⁷ W (input)–10³ W (output)     high frequency, low noise
                     FETs                  10⁻²³ W (input)–10³ W (output)     as above
Integrated circuits  Analog                operational amplifiers (wide range); function blocks (multiply, divide); amplifiers (high frequency); switches (high accuracy); semi-custom
                     Digital               small-scale integration (SSI), logic elements; medium-scale integration (MSI), function blocks; large-scale integration (LSI), major functions; very-large-scale integration (VLSI), complete systems; semi-custom
                     Monolithic hybrid     A/D and D/A conversion; special functions; semi-custom
Others               Thyristors            1 A–10³ A                          high-power switch
                     Triacs                1 A–10³ A                          high-power switch
                     Opto-couplers         singles, duals, quads
                     Thermistors           + or − temperature coefficient
                     Relays                0.01 Ω (on)–10¹² Ω (off)
between posts or tag-strips in valve circuits. Currently, they are supplied on long ribbons, and machines or hand tools are used for bending and cropping the leads ready for insertion (see Figure 37.1(b)). Important principles relevant to the layout of circuits are also discussed in Chapter 35.

Figure 37.1 Printed electronic circuits. (a) Printed circuit board (PCB); (b) traditional axial component; (c) through-hole plating (close-up); (d) surface-mounted assemblies.

37.3.2 Surface-Mounted Assemblies

As noted earlier, surface mounting took over as the design of choice in the 1990s. Sometimes the traditional fiberglass rigid board itself has been replaced by a flexible sheet of plastic with the circuits printed on it. Semiconductors, chip resistors, and chip capacitors are available in very small outline packages and are easier to handle with automatic placement machines. Surface mounting eliminates the difficult problem of automatic insertion and, in most cases, the costly drilling process as well.

Slightly higher densities can be achieved by using a ceramic substrate instead of fiberglass (Figure 37.1(d)). Conductors of palladium silver, insulators, and resistive inks are silk-screened and baked onto the substrate to provide connections, cross-overs, and some of the components. These techniques have been developed from the older "chip and wire" hybrid thick-film integrated circuit technique, used mainly in high-density military applications. In both cases, reflow soldering techniques are used due to the small size. Here, the solder is applied as a paste and silk-screened onto the surface bonding pads. The component is then placed on its pads and the solder made to reflow by application of a short burst of heat which is not enough to damage the component. The heat can be applied by placing the substrate in a hot vapor which condenses at a precise temperature above the melting point of the solder. More simply, the substrate can be placed on a temperature-controlled hot plate or passed under a strip of hot air or radiant heat. The technique is therefore very cost effective in high volumes and, with the increasing sophistication of silicon circuits, results in "smart sensors" in which the circuitry may be printed onto any flat surface.

37.3.2.1 Circuit Board Replacement

When deciding servicing policy it should be realized that replacing a whole circuit board is often more cost effective than trying to trace a faulty component or connection. To this end, PCBs can be mounted for easy access and provided with a connector or connectors for rapid removal. The faulty circuit board can then be thrown away or returned to the supplier for repair.
37.3.3 Interconnections

There are many ways to provide the interconnection between circuit boards and the rest of the instrument, of which the most common are described below. Connectors are used to facilitate rapid making and breaking of these connections and to simplify assembly, test, and servicing. Conventional wiring looms are still used because of their flexibility and because they can be designed for complicated routing and branching requirements. Termination of the wires can be by soldering, crimping, or wire-wrap onto connector or circuit board pins. This, however, is a labor-intensive technique and is prone to wiring errors. Looms are given mechanical strength by lacing or sleeving wires as a tight bunch and anchoring them to the chassis with cable ties.

Ribbon cable and insulation displacement connectors are now replacing conventional looms in many applications. As many as sixty connections can be made with one simple loom, with very low labor costs. Wiring errors are eliminated, since the routing is fixed at the design stage (see Figure 37.2). Connectors are very useful for isolating or removing a subassembly conveniently. They are, however, somewhat expensive and a common source of unreliability. Another technique, which is used in demanding applications where space is at a premium, is the "flexy" circuit. Printed circuit boards are laminated with a thin, flexible sheet of Kapton which carries conductors. The connections
are permanent, but the whole assembly can be folded up to fit into a limited space.

It is inappropriate to list here the many types of connectors. The connector manufacturers issue catalogs full of different types, and these are readily available.

Figure 37.2 Ribbon cable interconnection. (a) Ribbon cable assembly; (b) ribbon cable cross-section; (c) insulation displacement terminator; (d) dual in-line header.
37.3.4 Materials

A considerable variety of materials is available to the instrument designer, and new ones are being developed with special or improved characteristics, including polymers and superstrong ceramics. These materials can be bought in various forms, including sheet, block, rod, and tube, and processed in a variety of ways.
37.3.4.1 Metals

Metals are usually used for strength and low cost as structural members. Aluminum (for low weight) and steel are the most common. Metals are also suitable for machining precise shapes to tight tolerances. Stainless steels are used to resist corrosion, and precious metals in thin layers help to maintain clean electrical contacts. Metals are good conductors and provide electrical screening as well as support. Mumetal and radiometal have high permeabilities and are used as very effective magnetic screens or in magnetic components. Some alloys, notably beryllium-copper, have very good spring characteristics, improved by annealing, and this is used to convert force into displacement in load cells and pressure transducers. Springs made of Nimonic keep their properties at high temperatures, which is important in some transducer applications. The precise thermal coefficients of expansion of metals make it possible to produce compensating designs, using different metals or alloys, and so maintain critical distances independent of temperature. Invar has the lowest coefficient of expansion, at less than 1 ppm per K over a useful range, but it is difficult to machine precisely. Metals can be processed to change their characteristics as well as their shape; some can be hardened after machining and ground or honed to a very accurate and smooth finish, as found in bearings. Metal components can be annealed, i.e., taken to a high temperature, in order to reduce internal stresses caused in the manufacture of the material and machining. Heat treatments can also improve stability, strength, spring quality, magnetic permeability, or hardness.
37.3.4.2 Ceramics

For very high temperatures, ceramics are used as electrical and heat insulators or conductors (e.g., silicon carbide). The latest ceramics (e.g., zirconia, sialon, silicon nitride, and silicon carbide) exhibit very high strength, hardness, and stability even at temperatures over 1,000°C. Processes for shaping them include slip casting, hot isostatic pressing (HIP), green machining, flame spraying, and grinding to finished size. Being hard, they are best ground with diamond or cubic boron nitride (CBN) tools. Alumina is widely used, despite being brittle, and many standard mechanical or electrical components are available. Glass-loaded machinable ceramics are very convenient, having very similar properties to alumina, but are restricted to lower temperatures (less than 500°C). Special components can be made to accurate final size with conventional machining and tungsten tools. Other compounds based on silicon include sapphires, quartz, glasses, artificial granite, and the pure crystalline or amorphous substance. These have well-behaved and known properties (e.g., thermal expansion coefficient, conductivity, and refractive index), which can be finely adjusted by adding different ingredients. The manufacture of electronic circuitry, with photolithography, chemical doping, and milling, represents the ultimate in materials technology. Many of these techniques are applicable to instrument manufacture, and the gap between sensor and circuitry is narrowing; for example, in chemfets, a reversible chemical reaction produces a chemical potential that is coupled to one or more field-effect transistors. These transistors give amplification and possibly conversion to digital form before transmission to an indicator instrument, with resulting higher integrity.
37.3.4.3 Plastics and Polymers

Low cost, light weight, and good insulating properties make plastics and polymers popular choices for standard mechanical components and enclosures. They can be molded into elaborate shapes and given a pleasing appearance at very low cost in high volumes. PVC, PTFE, polyethylene, polypropylene, polycarbonates, and nylon are widely used and available as a range of composites, strengthened with fibers or other ingredients to achieve the desired properties. More recently, carbon composites and Kevlar have exhibited very high strength-to-weight ratios, useful for structural members. Carbon fiber is also very stable, making it suitable for dimensional calibration standards. Kapton and polyimides are used at higher temperatures and radiation levels. A biodegradable plastic, poly-3-hydroxybutyrate (PHB), is also available whose operating life can be controlled. Manufactured by cloned bacteria, this material represents one of many new materials emerging from advances in biotechnology. More exotic materials are used for special applications; a few examples are:
1. Mumetal: very high magnetic permeability.
2. PVDF (polyvinylidene fluoride): piezoelectric effect.
3. Samarium/cobalt: very high magnetic remanence (fixed magnet).
4. Sapphire: very high thermal conductivity.
5. Ferrites: very stable magnetic permeability, wide range available.
37.3.4.4 Epoxy Resins

Two-part epoxy resins can be used as adhesives, as potting material, and as paint. Parameters such as viscosity, setting time, set hardness, and color can be controlled. Most have good insulating properties, although conducting epoxies exist, and all are mechanically strong, some up to 300°C. The resin features in the important structural material epoxy-bonded fiberglass. Delicate assemblies can be ruggedized or passivated by a prophylactic layer of resin, which also improves design security. Epoxy resin can be applied to a component and machined to size when cured, allowing construction of an insulating joint with precisely controlled dimensions. Generally speaking, the thinner the glue layer, the stronger and more stable the joint.
37.3.4.5 Paints and Finishes

The appearance of an instrument is enhanced by the judicious use of texture and color in combination with its controls and displays. A wide range of British Standard coordinated colors is available, allowing consistent results (BS 5252 and 4800). In the United States, the Pantone color chart is usually used, and colors are generally matched to a PMS (Pantone Matching System) color. For example, PMS
720 is a commonly used front panel color, in a royal blue shade. Anodized or brushed aluminum panels have been popular for many years, although the trend is now back toward painted or plastic panels with more exotic colors. Nearly all materials, including plastic, can be spray-painted by using suitable preparation and curing. Matte, gloss, and a variety of textures are available. Despite its age, silk-screen printing is used widely for lettering, diagrams, and logos, especially on front panels. Photosensitive plastic films, in one or a mixture of colors, are used for stick-on labels or as complete front panels with an LED display filter. The latter are often used in conjunction with laminated pressure pad-switches to provide a rugged, easy-to-clean, splash-proof control panel.
37.3.5 Mechanical Manufacturing Processes

Materials can be processed in many ways to produce the required component. The methods chosen depend on the type of material, the volume required, and the type of shape and dimensional accuracy.
37.3.5.1 Bending and Punching

Low-cost sheet metal or plastic can be bent or pressed into the required shape and holes punched with standard or special tools (Figure 37.3). Simple bending machines and a fly press cover most requirements, although hard tooling is more cost effective in large volumes. Most plastics are thermosoftening and require heating, but metals are normally worked cold. Dimensional accuracy is typically not better than 0.5 mm.
37.3.5.2 Drilling and Milling

Most materials can be machined, although glass (including fiberglass), ceramics, and some metals require specially hardened tools. The hand or pillar drill is the simplest tool, and high accuracy can be achieved by using a jig to hold the workpiece and guide the rotating bit. A milling machine is more complex; here the workpiece can be moved precisely relative to the rotating tool. Drills, reamers, cutters, and slotting saws are used to create complex and accurate shapes. Tolerances of 1 μm can be achieved.
37.3.5.3 Turning

Rotating the workpiece against a tool is a method for turning long bars of material into large numbers of components at low unit cost. High accuracies can be achieved for internal and external diameters and length, typically 1 μm, making cylindrical components a popular choice in all branches of engineering.
A fully automatic machining center, with tool changer and component handler, can produce vast numbers of precise components of many different types under computer control.

37.3.5.4 Grinding and Honing

With grinding, a hard stone is used to remove a small but controlled amount of material. When grinding, both tool and component are moved to produce accurate geometries, including relative concentricity and straightness (e.g., parallelism), but with a poor surface finish. Precise flats, cylinders, cones, and spherics are possible. The material must be fairly hard to get the best results and is usually metal or ceramic. Honing requires a finer stone and produces a much better surface finish and potentially very high accuracy (0.1 μm). Relative accuracy (e.g., concentricity between outside and inside diameters) is not controllable, and so honing is usually preceded by grinding or precise turning.

37.3.5.5 Lapping

A fine sludge of abrasive is rubbed onto the workpiece surface to achieve ultra-high accuracy, better than 10 nm if the metal is very hard. In principle, any shape can be lapped, but optical surfaces such as flats and spherics are most common, since these can be checked by sensitive optical methods.

37.3.5.6 Chemical and Electrochemical Milling

Metal can be removed or deposited by chemical and electrochemical reactions. Surfaces can be selectively treated through masks. Complex shapes can be etched from sheet material of most kinds of metal using photolithographic techniques. Figure 37.3(b) shows an example where accuracies of 0.1 mm are achieved. Gold, tin, copper, and chromium can be deposited for printed circuit board manufacture or servicing of bearing components. Chemical etching of mechanical structures into silicon in combination with electronic circuitry is a process currently under development.

Figure 37.3 Sheet metal. (a) Bent and drilled or punched; (b) chemical milling.

37.3.5.7 Extruding

In extruding, the material, in a plastic state and usually at a high temperature, is pushed through an orifice with the desired shape. Complex cross-sections can be achieved, and a wide range of standard items are available, cut to length. Extruded components are used for structural members, heat sinks, and enclosures (Figure 37.4). Initial tooling is, however, expensive for non-standard sections.

Figure 37.4 Extrusion. (a) Structural member; (b) heat sink.

37.3.5.8 Casting and Molding

Casting, like molding, makes the component from the liquid or plastic phase but results in the destruction of the mold. It usually refers to components made of metals such as aluminum alloys and sand casts made from a pattern. Very elaborate forms can be made, but further machining is required for accuracies better than 0.5 mm. Examples are shown in Figure 37.5. Plastics are molded in a variety of ways, and the mold can be used many times. Vacuum forming and injection molding are used to achieve very low unit cost, but tooling costs are high. Rotational low-pressure molding (rotomolding) is often used for low-volume enclosure runs.

Figure 37.5 Examples of casting and molding. (a) Molded enclosure; (b) cast fixing.
37.3.5.9 Adhesives

Adhesive technology is advancing at a considerable rate, finding increasing use in instrument construction. Thin layers of adhesive can be stable and strong and can provide electrical conduction or insulation. Almost any material can be glued, although high-temperature curing is still required for some applications. Metal components can be recovered by disintegration of the adhesive at high temperatures. Two-part adhesives are usually best for increased shelf life. Jigs can be used for high dimensional accuracies, and automatic dispensing for high-volume, low-cost assembly.
37.3.6 Functional Components

A wide range of devices is available, including bearings, couplings, gears, and springs. Figure 37.6 shows the main types of components used and their principal characteristics.
37.3.6.1 Bearings

Bearings are used when a controlled movement, either linear or rotary, is required. The simplest bearing consists of rubbing surfaces: prismatic for linear, cylindrical for rotation, and spherical for universal movement. Soft materials such as copper, bronze, and PTFE are used for reduced friction, and high precision can be achieved. Liquid or solid lubricants are sometimes used, including thin deposits of PTFE,
graphite, and organic and mineral oils. Friction can be further reduced by introducing a gap and rolling elements between the surfaces. The hardened steel balls or cylinders are held in cages or a recirculating mechanism. Roller bearings can be precise, low friction, relatively immune to contamination, and capable of taking large loads. The most precise bearing is the air bearing, in which a thin cushion of pressurized air is maintained between the bearing surfaces, considerably reducing the friction and giving a position governed by the average surface geometry. Accuracies of 0.01 μm are possible, but a source of clean, dry, pressurized air is required. Magnetic bearings maintain an air gap and have low friction but cannot tolerate side loads. Along with bearings, seals have evolved to exclude contamination. For limited movement, elastic balloons of rubber or metal provide complete and possibly hermetic sealing. Seals made of low-friction polymer composites exclude larger particles, and magnetic liquid lubricant can be trapped between magnets, providing an excellent low-friction seal for unlimited linear or rotary movement.

Figure 37.6 Mechanical components.
37.3.6.2 Couplings

It is occasionally necessary to couple the movement of two bearings, which creates problems of clashing. This can be overcome by using a flexible coupling which is stiff in the required direction and compliant to misalignment of the bearings. Couplings commonly used include:
1. Spring wire or filaments.
2. Bellows.
3. Double hinges.
Each type is suitable for different combinations of side load, misalignment, and torsional stiffness.
37.3.6.3 Springs

Springs are used to produce a controlled amount of force (e.g., for preloaded bearings, force/pressure transducers, or fixings). They can take the form of a diaphragm, helix, crinkled washer, or shaped flat-sheet leaf spring. A thin circular disc with chemically milled Archimedes spiral slots is an increasingly used example of the latter. A pair of these can produce an accurate linear motion with good sideways stiffness and controllable spring constant.
37.4 Construction of electronic instruments

Electronic instruments can be categorized by the way they are intended to be used physically, resulting in certain types of construction:
1. Site mounting.
2. Panel mounting.
3. Bench mounting.
4. Rack mounting.
5. Portable instruments.
37.4.1 Site Mounting

The overriding consideration here is usually to get close to the physical process which is being measured or controlled. This usually results in the need to tolerate harsh environmental conditions such as extreme temperature, physical shock, muck, and water. Signal conditioners and data-acquisition subsystems, which are connected to transducers and actuators, produce signals suitable for transmission over long distances, possibly to a central instrumentation and control system some miles away. Whole computerized systems are also available with ruggedized construction for use in less hostile environments. The internal construction is usually very simple, since there are few, if any, controls or displays. Figure 37.7 shows an interesting example which tackles the common problem of wire terminations. The molded plastic enclosure is sealed
at the front with a rubber "O" ring and is designed to pass the IP 65 "hosepipe" test (see BS 5490 or the NEMA 4 and 4X standards). The main electronic circuit is on one printed circuit board mounted on pillars, connected to one of a variety of optional interface cards. The unit is easily bolted to a wall or the side of a machine.

Figure 37.7 Instrument for site mounting. Courtesy of Solartron Instruments.
37.4.2 Panel Mounting

A convenient way for an instrument engineer to construct a system is to mount the various instruments which require control or readout on a panel, with the wiring and other system components protected inside a cabinet. Instruments designed for this purpose generally fit into one of a number of DIN standard cut-outs (see DIN 43 700). Figure 37.8 is an example illustrating the following features:
1. The enclosure is an extruded aluminum tube.
2. Internal construction is based around five printed circuit boards, onto which the electronic displays are soldered. The PCBs plug together and can be replaced easily for servicing.
3. All user connections are at the rear, for permanent or semi-permanent installation.
Figure 37.8 Panel-mounting instrument. Courtesy of Systemteknik AB.

37.4.3 Bench-mounting Instruments

Instruments which require an external power source but a degree of portability are usually for benchtop operation. Size is important, since bench space is always in short supply. Instruments in this category often have a wide range of controls and a display requiring careful attention to ergonomics. Figure 37.9(a) shows a typical instrument, where the following points are worth noting:
1. The user inputs are at the front for easy access.
2. There is a large clear display for comfortable viewing.
3. The carrying handle doubles up as a tilt bar.
4. It has modular internal construction with connectors for quick servicing.
The general assembly drawing for this instrument is included as Figure 37.9(b), to show how the parts fit together.

Figure 37.9 (a) Bench-mounting instrument. Courtesy of Automatic Systems Laboratories Ltd. (b) General assembly drawing of the instrument in (a); the callouts identify the 7-segment gas discharge display, extruded case, switched-mode power supply, analog/digital converter, rechargeable nickel-cadmium batteries, and scale switch.
37.4.4 Rack-mounting Instruments

Most large electronic instrumentation systems are constructed in 19-inch-wide metal cabinets of variable height (in units of 1.75 inches, known as 1U). These can be for bench mounting, free standing, or wall mounting. Large instruments are normally designed for bench operation or rack mounting, for which optional brackets are supplied. Smaller modules plug into subracks which can then be bolted into a 19-inch cabinet. Figure 37.10 shows some of the elements of a modular instrumentation system, with the following points:
1. The modules are standard Eurocard sizes and widths (DIN 41914 or IEC 297).
2. The connectors are to DIN standard (DIN 41612).
3. The subrack uses standard mechanical components and can form part of a much larger instrumentation system.
The degree of modularity and standardization enables the user to mix a wide range of instruments and components from a large number of different suppliers worldwide.

Figure 37.10 Rack-based modular instruments. Courtesy of Schroff (U.K.) Ltd. and Automatic Systems Laboratories Ltd.

37.4.5 Portable Instruments

Truly portable instruments are now common, due to the reduction in size and power consumption of electronic circuitry. Figure 37.11 shows good examples which incorporate the following features:
1. Lightweight, low-cost molded plastic case.
2. Low-power CMOS circuitry and liquid crystal display (LCD).
3. Battery power source gives long operating life.
Size reduction is mainly from circuit integration onto silicon and the use of small-outline components.

Figure 37.11 Portable instruments. Courtesy of Solomat SA.
37.4.6 Encapsulation

For particularly severe conditions, especially with regard to vibration, groups of electronic components are sometimes encapsulated (familiarly referred to as "potting"). This involves casting them in a suitable material, commonly epoxy resin. This holds the components very securely in position, and they are also protected from the atmosphere to which the instrument is exposed. To give further protection against stress (for instance, from differential thermal expansion), a complex procedure is occasionally used, with compliant silicone rubber introduced as well as the harder resin. Some epoxies are strong up to 300°C. At higher temperatures (450°C) they are destroyed, allowing encapsulated components to be recovered if they are themselves heat resistant. Normally an encapsulated group would be thrown away if any fault developed inside it.
37.5 Mechanical instruments

Mechanical instruments are mainly used to interface between the physical world and electronic instrumentation. Examples are:
1. Displacement transducers (linear and rotary).
2. Force transducers (load cells).
3. Accelerometers.
Such transducers often have to endure a wide temperature range, shock, and vibration, requiring careful selection of materials and construction. Many matters contribute to good mechanical design and construction, some of which are brought out in the devices described in other chapters of this book. We add to that here by showing details of one or two instruments where particular principles of design can be seen. Before that, however, we give a more general outline of kinematic design, a way of proceeding that can be of great value for designing instruments.
37.5.1 Kinematic Design

A particular approach sometimes used for high-precision mechanical items is called kinematic design. When the relative location of two bodies must be constrained, so that there is either no movement or a closely controlled movement between them, it represents a way of overcoming the uncertainties that arise from the impossibility of achieving geometrical perfection in construction. A simple illustration is two flat surfaces in contact. If they can be regarded as ideal geometrical planes, then the relative movement of the two bodies is accurately defined. However, it is expensive to approach geometrical perfection, and the imperfections of the surfaces mean that the relative position of the two parts will depend upon contact between high spots, and will vary slightly if the points of application of the forces holding them together are varied. The points of contact can be reduced, for instance, to four with a conventional chair, but it is notorious that a four-legged chair can rock unless the bottoms of the legs match the floor perfectly. Kinematic design calls for a three-legged chair, to avoid the redundancy of having its position decided by four points of contact.

More generally, a rigid solid body has six degrees of freedom which can be used to fix its position in space. These are often thought of as three Cartesian coordinates to give the position of one point of the body; when that has been settled, rotation about three mutually perpendicular axes describes the body's attitude in space. The essence of kinematic design is that each degree of freedom should be constrained in an identifiable, localized way. Consider again the three-legged stool on a flat surface. The Z-coordinate of the tip of each leg has been constrained, as has rotation about two axes in the flat surface. There is still freedom of X- and Y-coordinates and for rotation about an axis perpendicular to the surface: 3 degrees of freedom removed by the three constraints between the leg-tips and the surface.

A classical way of introducing six constraints, and so locating one body relative to another, is to have three V-grooves in one body and three hemispheres attached to the other body, as shown in Figure 37.12. When the hemispheres enter the grooves (which should be deep enough for contact to be made with their sides and not their edges), each has two constraints from touching two sides, making a total of six. If one degree of freedom, say linear displacement, is required, five spheres can be used in a precise groove as in Figure 37.13. Each corresponds to a restricted movement.

Figure 37.12 Kinematic design: three-legged laboratory stands, to illustrate that six contacts fully constrain a body. (Kelvin clamp, as also used for theodolite mounts.)

Figure 37.13 Kinematic design: five constraints allow linear movement.

For the required mating, it is important that contact should approximate to point contact and that the construction materials should be hard enough to allow very little deformation perpendicular to the surface under the loads normally encountered. The sphere-on-plane configuration described is one possible arrangement; crossed cylinders are similar in their behavior and may be easier to construct.

Elastic hinges may be thought of as an extension of kinematic design. A conventional type of door hinge is expensive to produce if friction and play are to be greatly reduced, particularly for small devices. An alternative approach may be adopted when small, repeatable rotations must be catered for. Under this approach, some part is markedly weakened, as in Figure 37.14, so that the bending caused by a turning moment is concentrated in a small region. There is elastic resistance to deformation but very little friction and high repeatability.

Figure 37.14 Principle of elastic hinge.

The advantages of kinematic design may be listed as:
1. Commonly, only simple machining operations are needed at critical points.
2. Wide tolerances on these operations should not affect repeatability, though they may downgrade absolute performance.
3. Only small forces are needed. Often gravity is sufficient, or a light spring if the direction relative to the vertical may change from time to time.
4. Analysis and prediction of behavior is simplified.
The main disadvantage arises if large forces have to be accommodated. Kinematically designed constraints normally work with small forces holding parts together, and if these forces are overcome (even momentarily, under the inertia forces of vibration) there can be serious malfunction.
Indeed, the lack of symmetry in behavior under kinematic design can prove a more general disadvantage (for instance, when considering the effects of wear). Of course, the small additional complexity often means that it is not worth changing to kinematic design. Sometimes a compromise approach is adopted, such as localizing the points of contact between large structures without making them literal spheres on planes. In any case, when considering movements and locations in instruments it is helpful to bear the ideas of kinematic design in mind as a background to analysis.
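The constraint counting that underlies these arguments reduces to trivial bookkeeping. The sketch below (Python, purely illustrative) restates the stool and Kelvin-clamp examples in that form:

    def remaining_dof(point_contacts):
        # a rigid body has six degrees of freedom; each point contact removes one
        return 6 - point_contacts

    print(remaining_dof(3))  # three-legged stool on a plane -> 3 (slide X, slide Y, rotate)
    print(remaining_dof(6))  # Kelvin clamp: three hemispheres in three V-grooves -> 0
    print(remaining_dof(5))  # five constraints -> 1 (the single permitted linear movement)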
37.5.2 Proximity Transducer

This is a simple device which is used to detect the presence of an earthed surface which affects the capacitance between the two electrodes E1 and E2 in Figure 37.15. In a special application it is required to operate at a temperature cycling between 200°C and 400°C in a corrosive atmosphere and to survive shocks of 1,000 g. Design points to note are:
1. The device is machined out of the solid to avoid a weak weld at position A.
2. The temperature cycling causes thermal stresses which are taken up by the spring washer B (special Nimonic spring material for high temperatures).
3. The ceramic insulator blocks are under compression for increased strength.

Figure 37.15 Rugged proximity transducer.
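As an order-of-magnitude aside (this is not the actual electrode geometry of Figure 37.15), the sensitivity of capacitive proximity sensing can be illustrated with the parallel-plate relation C = ε0·A/d, which shows how strongly capacitance varies with gap:

    EPS0 = 8.854e-12          # permittivity of free space, F/m
    A = 1e-4                  # assumed electrode area, m^2 (1 cm^2)

    for d_mm in (5.0, 2.0, 1.0, 0.5):
        c = EPS0 * A / (d_mm * 1e-3)          # C = eps0 * A / d
        print("gap %.1f mm -> %.2f pF" % (d_mm, c * 1e12))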
37.5.3 Load Cell

As discussed in Chapter 7, a load cell converts force into movement against the reaction of a spring. The movement is then measured by a displacement transducer and converted into electrical form.
The load cell in Figure 37.16 consists of four stiff members and four flexures, machined out of a solid block of highquality spring material in the shape of a parallelogram. The members M1, M2 and M3, M4 remain parallel as the force F bends the flexures at the thin sections (called hinges).
Any torque, caused by the load being offset from the vertical, will result in a small twisting of the load cell, but this is kept within the required limit by arranging the rotational stiffness to be much greater than the vertical stiffness. This is determined by the width. The trapezoidal construction is far better in this respect than a normal cantilever, which would result in a nonlinear response.
Figure 37.16 Load cell spring mechanism.

37.5.4 Combined Actuator Transducer
Figure 37.17 illustrates a more complex example, requiring a number of processing techniques to fabricate the complete item. The combined actuator transducer (CAT) is a low-volume product with applications in automatic optical instruments for mirror positioning. The major bought-in components are a torque motor and a miniature pre-amplifier produced by specialist suppliers. The motor incorporates the most advanced rare-earth magnetic materials for compactness and stability, and the pre-amplifier uses small outline electronic components, surface mounted on a copper/fiberglass printed circuit board. The assembled CAT is shown in Figure 37.17(a). It consists of three sections: the motor in its housing, a capacitive
angular transducer in its housing, and a rear-mounted plate (Figure 37.17(b)) for pre-amplifier fixing and cable clamping. The motor produces a torque which rotates the shaft in the bearings. Position information is provided by the transducer, which produces an output via the pre-amplifier. The associated electronic servocontrol unit provides the power output stage, position feedback, and loop-stabilization components to control the shaft angle to an arc second. This accuracy is attainable by the use of precise ball bearings which maintain radial and axial movement to within 10 μm.

Figure 37.17 Combined actuator/transducer. (a) Whole assembly; (b) fixing plate; (c) transducer rotor; (d) spring contact.

The shaft, motor casing, and transducer components are manufactured by precise turning of a nonmagnetic stainless steel bar and finished by fine bead blasting. The motor and transducer electrodes (not shown) are glued in place with a thin layer of epoxy resin and, in the latter case, finished by turning to final size. The two parts of the transducer stator are jigged concentric and held together by three screws in threaded holes A, leaving a precisely determined gap in which the transducer rotor (Figure 37.17(c)) rotates. The transducer rotor is also turned, but the screens are precision ground to size, as this determines the transducer sensitivity and range. The screens are earthed, via the shaft and a hardened gold rotating point contact held against vibration by a spring (Figure 37.17(d)). The spring is chemically milled from thin beryllium copper sheet.

The shaft with motor and transducer rotor glued in place, the motor stator and casing, the transducer stator, and the bearings are then assembled and held together with three screws at B. The fixing plate, assembled separately with cable, clamp, and pre-amplifier, is added, mounted on standard standoffs, the wires made off, and then finally the cover put on.

In addition to the processes mentioned, the manufacture of this unit requires drilling, reaming, bending, screen printing, soldering, heat treatment, and anodizing. Materials include copper, PTFE, stainless steel, samarium cobalt,
epoxy fiberglass, gold, and aluminum, and machining tolerances are typically 25 μm for turning, 3 μm for grinding, and 0.1 mm for bending. The only external feature is the clamping ring at the shaft end for axial fixing (standard servo type, size 20). This is provided because radial force could distort the thin wall section and cause transducer errors.
References
Birbeck, G., "Mechanical Design," in A Guide to Instrument Design, SIMA and BSIRA, Taylor and Francis, London (1963).
Clayton, G. B., Operational Amplifiers, Butterworths, London (1979).
Furse, J. E., "Kinematic design of fine mechanisms in instruments," in Instrument Science and Technology, Volume 2, ed. E. B. Jones, Adam Hilger, Bristol, U.K. (1983).
Horowitz, P., and Hill, W., The Art of Electronics, Cambridge University Press, Cambridge (1989).
Kibble, B. P., and Rayner, G. H., Co-Axial AC Bridges, Adam Hilger, Bristol, U.K. (1984).
Morrell, R., Handbook of Properties of Technical and Engineering Ceramics, Part 1: An Introduction for the Engineer and Designer, HMSO, London (1985).
Oberg, E., and Jones, F. D., Machinery's Handbook, The Machinery Publishing Company (1979).
Shields, J., Adhesives Handbook, Butterworths, London (revised 3rd ed., 1985).
Smith, S. T., and Chetwynd, D. G., Foundations of Ultraprecision Mechanism Design, Gordon & Breach, London (1992).
The standards referred to in the text are:
BS 5252 (1976) and BS 4800 (1981): Framework for color coordination for building purposes.
BS 5490 (1977 and 1985): Environmental protection provided by enclosures.
DIN 43 700 (1982): Cutout dimensions for panel-mounting instruments.
DIN 41612: Standards for Eurocard connectors.
DIN 41914 and IEC 297: Standards for Eurocards.
Chapter 38
Instrument Installation and Commissioning
A. Danielsson
38.1 Introduction
Plant safety and continuous effective plant operability are totally dependent upon correct installation and commissioning of the instrumentation systems. Process plants are increasingly becoming dependent upon automatic control systems, owing to the advanced control functions and monitoring facilities that can be provided in order to improve plant efficiency, product throughput, and product quality. The instrumentation on a process plant represents a significant capital investment, and the importance of careful handling on site and the exactitude of the installation cannot be overstressed. Correct installation is also important in order to ensure long-term reliability and to obtain the best results from instruments which are capable of higher-order accuracies due to advances in technology. Quality control of the completed work is also an important function. Important principles relevant to installing instrumentation are also discussed in Chapter 35.

38.2 General requirements

Installation should be carried out using the best engineering practices by skilled personnel who are fully acquainted with the safety requirements and regulations governing a plant site. Prior to commencement of the work for a specific project, installation design details should be made available which define the scope of work and the extent of material supply and which give detailed installation information related to location, fixing, piping, and wiring. Such design details should have already taken account of established installation recommendations and measuring technology requirements. The details contained in this chapter are intended to give general installation guidelines.

38.3 Storage and protection

When instruments are received on a job site it is of the utmost importance that they are unpacked with care, examined for superficial damage, and then placed in a secure store which should be free from dust and suitably heated. In order to minimize handling, large items of equipment, such as control panels, should be programmed to go directly into their intended location, but temporary anti-condensation heaters should be installed if the intended air-conditioning systems have not been commissioned. Throughout construction, instruments and equipment installed in the field should be fitted with suitable coverings to protect them from mechanical abuse such as paint spraying, etc. Preferably, after an installation has been fabricated, the instrument should be removed from the site and returned to the store for safe keeping until ready for precalibration and final loop checking. Again, when instruments are removed, care should be taken to seal the ends of piping, etc., to prevent ingress of foreign matter.
38.4 Mounting and accessibility
When instruments are mounted in their intended location, either on pipe stands, brackets, or directly connected to vessels, etc., they should be vertically plumbed and firmly secured. Instrument mountings should be vibration free and should be located so that they do not obstruct access ways which may be required for maintenance to other items of equipment. They should also be clear of obvious hazards such as hot surfaces or drainage points from process equipment. Locations should also be selected to ensure that the instruments are accessible for observation and maintenance.
Where instruments are mounted at higher elevations, it must be ensured that they are accessible either by permanent or temporary means. Instruments should be located as close as possible to their process tapping points in order to minimize the length of impulse lines, but consideration should be given to the possibility of expansion of piping or vessels which could take place under operating conditions and which could result in damage if not properly catered for. All brackets and supports should be adequately protected against corrosion by priming and painting. When installing final control elements such as control valves, again, the requirement for maintenance access must be considered, and clearance should be allowed above and below the valve to facilitate servicing of the valve actuator and the valve internals.
38.5 Piping systems
All instrument piping or tubing runs should be routed to meet the following requirements:
1. They should be kept as short as possible.
2. They should not cause any obstruction that would prohibit personnel or traffic access.
3. They should not interfere with the accessibility for maintenance of other items of equipment.
4. They should avoid hot environments or potential fire-risk areas.
5. They should be located with sufficient clearance to permit lagging which may be required on adjacent pipework.
6. The number of joints should be kept to a minimum consistent with good practice.
7. All piping and tubing should be adequately supported along its entire length from supports attached to firm steelwork or structures (not handrails).
(Note: Tubing can be regarded as thin-walled seamless pipe that cannot be threaded and which is joined by compression fittings, as opposed to piping, which can be threaded or welded.)
38.5.1 Air Supplies
Air supplies to instruments should be clean, dry, and oil free. Air is normally distributed around a plant from a high-pressure header (e.g., 6–7 bar g), ideally forming a ring main. This header, usually of galvanized steel, should be sized to cope with the maximum demand of the instrument air users being serviced, and an allowance should be made for possible future expansion or modifications to its duty. Branch headers should be provided to supply individual instruments or groups of instruments. Again, adequate spare tappings should be allowed to cater for future expansion.
Branch headers should be self-draining and have adequate drainage/blow-off facilities. On small headers this may be achieved by the instrument air filter/regulators. Each instrument air user should have an individual filter regulator. Piping and fittings installed after filter regulators should be non-ferrous.
38.5.2 Pneumatic Signals
Pneumatic transmission signals are normally in the range of 0.2–1.0 bar (3–15 psig), and for these signals copper tubing is most commonly used, preferably with a PVC outer sheath. Other materials are sometimes used, depending on environmental considerations (e.g., alloy tubing or stainless steel). Although expensive, stainless steel tubing is the most durable and will withstand the most arduous service conditions. Plastic tubing should preferably only be used within control panels. There are several problems to be considered when using plastic tubes on a plant site, as they are very vulnerable to damage unless adequately protected, they generally cannot be installed at subzero temperatures, and they can be considerably weakened by exposure to hot surfaces. Also, it should be remembered that they can be totally lost in the event of a fire. Pneumatic tubing should be run on a cable tray or similar supporting steelwork for its entire length and securely clipped at regular intervals. Where a number of pneumatic signals are to be routed to a remote control room they should be marshaled in a remote junction box and the signals conveyed to the control room via multitube bundles. Such junction boxes should be carefully positioned in the plant in order to minimize the lengths of the individually run tubes. (See Figure 38.1 for typical termination of pneumatic multitubes.)
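As a quick illustration of the standard live-zero signal range quoted above, the short Python sketch below (an illustrative aid, not part of the original text) converts a percentage of measurement range into the equivalent pneumatic signal:

    # Convert percent of measurement range to a live-zero pneumatic signal.
    def pneumatic_signal(percent):
        """Return (bar, psig) for a 0.2-1.0 bar (3-15 psig) transmission signal."""
        bar = 0.2 + 0.8 * percent / 100.0    # 0.2 bar at 0%, 1.0 bar at 100%
        psig = 3.0 + 12.0 * percent / 100.0  # 3 psig at 0%, 15 psig at 100%
        return bar, psig

    for pct in (0, 25, 50, 75, 100):
        bar, psig = pneumatic_signal(pct)
        print(f"{pct:3d}%  {bar:4.2f} bar  {psig:5.2f} psig")

The live zero (0.2 bar rather than 0 bar) allows a dead transmitter or a failed air supply to be distinguished from a genuine zero reading.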
38.5.3 Impulse Lines
These are the lines containing process fluid which run between the instrument impulse connection and the process tapping point, and are usually made up from piping and pipe fittings or tubing and compression fittings. Piping materials must be compatible with the process fluid. Generally, tubing is easier to install and is capable of handling most service conditions provided that the correct fittings are used for terminating the tubing. Such fittings must be compatible with the tubing being run (i.e., of the same material). Impulse lines should be designed to be as short as possible, and should be installed so that they are self-draining for liquids and self-venting for vapors or gases. If necessary, vent plugs or valves should be located at high points in liquid-filled lines and, similarly, drain plugs or valves should be fitted at low points in gas or vapor-filled lines. In
any case, it should be ensured that there are provisions for isolation and depressurizing of instruments for maintenance purposes. Furthermore, filling plugs should be provided where lines are to be liquid sealed for chemical protection and, on services which are prone to plugging, rodding-out connections should be provided close to the tapping points.

Figure 38.1 Typical field termination of pneumatic multitubes.
38.6 Cabling
38.6.1 General Requirements
Instrument cabling is generally run in multicore cables from the control room to the plant area (either below or above ground) and then from field junction boxes in single pairs to the field measurement or actuating devices. For distributed microprocessor systems the interconnection between the field and the control room is usually via duplicate data highways from remotely located multiplexers or process interface units. Such duplicate highways would take totally independent routes from each other for plant security reasons. Junction boxes must meet the hazardous area requirements applicable to their intended location and should be carefully positioned in order to minimize the lengths of
individually run cables, always bearing in mind the potential hazards that could be created by fire. Cable routes should be selected to meet the following requirements:
1. They should be kept as short as possible.
2. They should not cause any obstruction that would prohibit personnel or traffic access.
3. They should not interfere with the accessibility for maintenance of other items of equipment.
4. They should avoid hot environments or potential fire-risk areas.
5. They should avoid areas where spillage is liable to occur or where escaping vapors or gases could present a hazard.
Cables should be supported for their whole run length by a cable tray or similar supporting steelwork. Cable trays should preferably be installed with their breadth in a vertical plane. The layout of cable trays on a plant should be carefully selected so that the minimum number of instruments in the immediate vicinity would be affected in the case of a local fire. Cable joints should be avoided other than in approved junction boxes or termination points. Cables entering junction boxes from below ground should be specially protected by fire-resistant ducting or something similar.
38.6.2 Cable Types
There are three types of signal cabling generally under consideration:
1. Instrument power supplies (above 50 V).
2. High-level signals (between 6 and 50 V). This includes digital signals, alarm signals, and high-level analog signals (e.g., 4–20 mA dc).
3. Low-level signals (below 5 V). This generally covers thermocouple compensating leads and resistance element leads.
Signal wiring should be made up in twisted pairs. Solid conductors are preferable so that there is no degradation of signal due to broken strands that may occur in stranded conductors. Where stranded conductors are used, crimped connectors should be fitted. Cable screens should be provided for instrument signals, particularly low-level analog signals, unless the electronic system being used is deemed to have sufficient built-in “noise” rejection. Further mechanical protection should be provided in the form of single-wire armor and PVC outer sheath, especially if the cables are installed in exposed areas, e.g., on open cable trays. Cables routed below ground in sand-filled trenches should also have an overall lead sheath if the area is prone to hydrocarbon or chemical spillage.

38.6.3 Cable Segregation
Only signals of the same type should be contained within any one multicore cable. In addition, conductors forming part of intrinsically safe circuits should be contained in a multicore reserved solely for such circuits. When installing cables above or below ground they should be separated into groups according to the signal level and segregated with positive spacing between the cables. As a general rule, low-level signal cables should be installed furthest from instrument power supply cables, with the high-level signal cables in between. Long parallel runs of dissimilar signals should be avoided as far as possible, as this is the situation where interference is most likely to occur. Cables used for high-integrity systems such as emergency shutdown systems or data highways should take totally independent routes or should be positively segregated from other cables. Instrument cables should be run well clear of electrical power cables and should also, as far as possible, avoid noise-generating equipment such as motors. Cable crossings should always be made at right angles. When cables are run in trenches, the routing of such trenches should be clearly marked with concrete cable markers on both sides of the trench, and the cables should be protected by earthenware or concrete covers.

38.7 Grounding
38.7.1 General Requirements
Special attention must be paid to instrument grounding, particularly where field instruments are connected to a computer or microprocessor type control system. Where cable screens are used, ground continuity of screens must be maintained throughout the installation, with grounding at one point only, i.e., in the control room. At the field end the cable screen should be cut back and taped so that it is independent from the ground. Intrinsically safe systems should be grounded through their own ground bar in the control room. Static grounding of instrument cases, panel frames, etc., should be connected to the electrical common plant ground. (See Figure 38.2 for a typical grounding system.) Instrument grounds should be wired to a common bus bar within the control center, and this should be connected to a remote ground electrode via an independent cable (preferably duplicated for security and test purposes). The resistance to ground, measured in the control room, should usually not exceed 1 Ω unless otherwise specified by a system manufacturer or by a certifying authority.

Figure 38.2 A typical control center grounding system.

38.8 Testing and pre-commissioning
38.8.1 General
Before starting up a new installation the completed instrument installation must be fully tested to ensure that the equipment is in full working order. This testing normally falls into three phases: pre-installation testing; piping and cable testing; and loop testing or pre-commissioning.
38.8.2 Pre-installation Testing
This is the testing of each instrument for correct calibration and operation prior to its being installed in the field. Such testing is normally carried out in a workshop which is fully equipped for the purpose and should contain a means of generating the measured variable signals and also a method of accurately measuring the instrument input and output (where applicable). Test instruments should have a standard of accuracy better than the manufacturer’s stated accuracy for the instruments being tested and should be regularly certified. Instruments are normally calibration checked at five points (i.e., 0, 25, 50, 75, and 100 percent) for both rising and falling signals, ensuring that the readings are within the manufacturer’s stated tolerance. After testing, instruments should be drained of any testing fluids that may have been used and, if necessary, blown through with dry air. Electronic instruments should be energized for a 24-hour warm-up period prior to the calibration
test being made. Control valves should be tested in situ after the pipework fabrication has been finished and flushing operations completed. Control valves should be checked for correct stroking at 0, 50, and 100 percent open, and at the same time the valves should be checked for correct closure action.
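The five-point check described above can be tabulated mechanically. The following Python sketch is an illustrative aid (not from the original text); the readings and the 0.25 percent-of-span tolerance are assumed values for demonstration:

    # Five-point calibration check (0, 25, 50, 75, 100 percent) for a
    # linear 4-20 mA transmitter, applied to rising and falling readings.
    POINTS = (0, 25, 50, 75, 100)  # percent of range

    def expected_ma(percent):
        return 4.0 + 16.0 * percent / 100.0

    def check(readings_ma, tol_percent_of_span=0.25):
        """readings_ma: measured output at each point; tolerance as % of span."""
        tol_ma = 16.0 * tol_percent_of_span / 100.0
        for pct, measured in zip(POINTS, readings_ma):
            err = measured - expected_ma(pct)
            status = "PASS" if abs(err) <= tol_ma else "FAIL"
            print(f"{pct:3d}%  expected {expected_ma(pct):6.3f} mA  "
                  f"measured {measured:6.3f} mA  error {err:+.3f} mA  {status}")

    # Example: rising-signal readings from a test (hypothetical values).
    check([4.010, 7.995, 12.020, 16.050, 19.980])

The same routine would be run again with the falling-signal readings to expose hysteresis.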
38.8.3 Piping and Cable Testing
This is an essential operation prior to loop testing.
38.8.3.1 Pneumatic Lines
All air lines should be blown through with clean, dry air prior to final connection to instruments, and they should also be pressure tested for a timed interval to ensure that they are leak free. This should be in the form of a continuity test from the field end to its destination (e.g., the control room).
38.8.3.2 Process Piping
Impulse lines should also be flushed through and hydrostatically tested prior to connection of the instruments. All isolation valves or manifold valves should be checked for tight shutoff. On completion of hydrostatic tests, all piping should be drained and thoroughly dried out prior to reconnecting to any instruments.
38.8.3.3 Instrument Cables
All instrument cables should be checked for continuity and insulation resistance before connection to any instrument
or apparatus. The resistance should be checked core to core and core to ground. Cable screens must also be checked for continuity and insulation. Cable tests should comply with the requirements of Part 6 of the IEE Regulations for Electrical Installations (latest edition), or the rules and regulations with which the installation has to comply. Where cables are installed below ground, testing should be carried out before the trenches are backfilled. Coaxial cables should be tested using sine-wave reflective testing techniques. As a prerequisite to cable testing it should be ensured that all cables and cable ends are properly identified.
38.8.4 Loop Testing
The purpose of loop testing is to ensure that all instrumentation components in a loop are in full operational order when interconnected and are in a state ready for plant commissioning. Prior to loop testing, inspection of the whole installation, including piping, wiring, mounting, etc., should be carried out to ensure that the installation is complete and that the work has been carried out in a professional manner. The control room panels or display stations must also be in a fully functional state. Loop testing is generally a two-person operation, one in the field and one in the control room, who should be equipped with some form of communication, e.g., field telephones or radio transceivers. Simulation signals should be injected at the field end equivalent to 0, 50, and 100 percent of the instrument range, and the loop function should be checked
for correct operation in both rising and falling modes. All results should be properly documented on calibration or loop check sheets. All ancillary components in the loop should be checked at the same time. Alarm and shutdown systems must also be systematically tested, and all systems should be checked for “fail-safe” operation, including the checking of “burn-out” features on thermocouple installations. At the loop-checking stage all ancillary work should be completed, such as setting zeros, filling liquid seals, and fitting of accessories such as charts, ink, fuses, etc.
38.9 Plant commissioning
Commissioning is the bringing “on-stream” of a process plant and the tuning of all instruments and controls to suit the process operational requirements. A plant or section thereof is considered to be ready for commissioning when all instrument installations are mechanically complete and all testing, including loop testing, has been effected. Before commissioning can be attempted it should be ensured that all air supplies are available and that all power supplies are fully functional, including any emergency standby supplies. It should also be ensured that all ancillary devices are operational, such as protective heating systems, air conditioning, etc. All control valve lubricators (when fitted) should be charged with the correct lubricant.
Commissioning is usually achieved by first commissioning the measuring system with any controller mode overridden. When a satisfactory measured variable is obtained, the responsiveness of a control system can be checked by varying the control valve position using the “manual” control function. Once the system is seen to respond correctly and the required process variable reading is obtained, it is then possible to switch to “auto” in order to bring the controller function into action. The controller responses should then be adjusted to obtain optimum settings to suit the automatic operation of plant. Alarm and shutdown systems should also be systematically brought into operation, but it is necessary to obtain the strict agreement of the plant operation supervisor before any overriding of trip systems is attempted or shutdown features are operated. Finally, all instrumentation and control systems would need to be demonstrated to work satisfactorily before formal acceptance by the plant owner.
References
BS 6739: British Standard Code of Practice for Instrumentation in Process Control Systems: Installation Design and Practice (1986).
Regulations for Electrical Installations, 15th ed., Institution of Electrical Engineers (1981).
The reader is also referred to the National Electrical Code of the United States (current edition) and relevant ANSI, IEC, and ISA standards.
Chapter 39
Sampling J. G. Giles
39.1 Introduction
39.1.1 Importance of Sampling
Any form of analysis instrument can only be as effective as its sampling system. Analysis instruments are out of commission more frequently due to trouble in the sampling system than to any other cause. Therefore time and care expended in designing and installing an efficient sampling system are well repaid in the saving of servicing time and the dependability of instrument readings. The object of a sampling system is to obtain a truly representative sample of the solid, liquid, or gas which is to be analyzed, at an adequate and steady rate, and transport it without change to the analysis instrument, and all precautions necessary should be taken to ensure that this happens. Before the sample enters the instrument it may be necessary to process it to the required physical and chemical state, i.e., correct temperature, pressure, flow, purity, etc., without removing essential components. It is also essential to dispose of the sample and any reagent after analysis without introducing a toxic or explosive hazard. For this reason, the sample, after analysis, is continuously returned to the process at a suitable point, or a sample-recovery and disposal system is provided.
39.1.2 Representative Sample
It is essential that the sample taken should represent the mean composition of the process material. The methods used to overcome the problem of uneven sampling depend on the phase of the process sample, which may be in solid, liquid, gas, or mixed-phase form.
39.1.2.1 Solids
When the process sample is solid in sheet form it is necessary to scan the whole sheet for a reliable measurement of
the state of the sheet (e.g., thickness, density, or moisture content). A measurement at one point is insufficient to give a representative value of the parameter being measured. If the solid is in the form of granules or powder of uniform size, a sample collected across a belt or chute and thoroughly mixed will give a reasonably representative sample. If measurement of density or moisture content of the solid can be made while it is in a vertical chute under a constant head, packing density problems may be avoided. In some industries where the solids are transported as slurries, it is possible to carry out the analysis directly on the slurry if a method is available to compensate for the carrier fluid and the velocities are high enough to ensure turbulent flow at the measurement point. Variable-size solids are much more difficult to sample, and specialist work on the subject should be consulted.
39.1.2.2 Liquids
When sampling liquid it is essential to ensure that either the liquid is turbulent in the process line or that there are at least 200 pipe diameters between the point of adding constituents and the sampling point. If neither is possible, a motorized or static mixer should be inserted into the process upstream of the sample point.
39.1.2.3 Gases
Gas samples must be thoroughly mixed and, as gas process lines are usually turbulent, the problem of finding a satisfactory sample point is reduced. The main exception is in large ducts such as furnace or boiler flues, where stratification can occur and the composition of the gas may vary from one point to another. In these cases special methods of sampling may be necessary, such as multiple probes or long probes with multiple inlets in order to obtain a representative sample.
39.1.2.4 Mixed-Phase Sampling
Mixed phases such as liquid/gas mixtures or liquid/solids (i.e., slurries) are best avoided for any analytical method that involves taking a sample from the process. It is always preferable to use an in-line analysis method where this is possible.

39.1.3 Parts of Analysis Equipment
The analysis equipment consists of five main parts:
1. Sample probe.
2. Sample-transport system.
3. Sample-conditioning equipment.
4. The analysis instrument.
5. Sample disposal.

39.1.3.1 Sample Probe
This is the sampling tube that is used to withdraw the sample from the process.

39.1.3.2 Sample-Transport System
This is the tube or pipe that transports the sample from the sample point to the sample-conditioning system.

39.1.3.3 Sample-Conditioning System
This system ensures that the analyzer receives the sample at the correct pressure and in the correct state to suit the analyzer. This may require pressure increase (i.e., pumps) or reduction, filtration, cooling, drying, and other equipment to protect the analyzer from process upsets. Additionally, safety equipment and facilities for the introduction of calibration samples into the analyzer may also be necessary.

39.1.3.4 The Analysis Instrument
This is the process analyzer complete with the services such as power, air, steam, drain vents, carrier gases, and signal conditioning that are required to make the instrument operational. (Analysis techniques are described in Part 2 of this book.)

39.1.3.5 Sample Disposal
The sample flowing from the analyzer and sample-conditioning system must be disposed of safely. In many cases it is possible to vent gases to atmosphere or allow liquids to drain, but there are times when this is not satisfactory. Flammable or toxic gases must be vented in such a way that a hazard is not created. Liquids such as hydrocarbons can be collected in a suitable tank and pumped back into the process, whereas hazardous aqueous liquids may have to be treated before being allowed to flow into the drainage system.

39.1.4 Time Lags
In any measuring instrument, particularly one which may be used with a controller, it is desirable that the time interval between occurrence of a change in the process fluid and its detection at the instrument should be as short as possible consistent with reliable measurement. In order to keep this time interval to a minimum, the following points should be kept in mind.

39.1.4.1 Sample-Transport Line Length
The distance between the sampling point and the analyzer should be kept to the minimum. Where long sample-transport lines are unavoidable a “fast loop” may be used. The fast loop transports the sample at a flow rate higher than that required by the analyzer, and the excess sample is either returned to the process, vented to atmosphere, or allowed to flow to drain. The analyzer is supplied with the required amount of sample from the fast loop through a short length of tubing.
39.1.4.2 Sampling Components
Pipe, valves, filter, and all sample-conditioning components should have the smallest volume consistent with a permissible pressure drop.
39.1.4.3 Pressure Reduction
Gaseous samples should be filtered, and flow in the sample line kept at the lowest possible pressure, as the mass of gas in the system depends on the pressure of the gas as well as the volume in the system. When sampling high-pressure gases the pressure-reducing valve must be situated at the sample point. This is necessary because, for a fixed mass flow rate of gas, the response time will increase in proportion to the absolute pressure of the gas in the sample line (i.e., gas at 10 bar A will have a time lag five times that of gas at 2 bar A). This problem becomes more acute when the sample is a liquid that has to be vaporized for analysis (e.g., liquid butane or propane). The ratio of volume of gas to volume of liquid can be in the region of 250:1, as is the case for propane. It is therefore essential to vaporize the liquid at the sample point and then treat it as a gas sample from then on.
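The pressure argument above is easy to verify: for a fixed mass flow rate the mass of gas stored in the line, and hence the transport lag, scales with absolute pressure. A minimal Python sketch (illustrative, not from the original text):

    # For a fixed mass flow rate, the transport lag of a gas sample line
    # is proportional to the absolute pressure of the gas in the line.
    def relative_lag(p_bar_abs, p_ref_bar_abs):
        return p_bar_abs / p_ref_bar_abs

    print(relative_lag(10.0, 2.0))  # 5.0: gas at 10 bar A lags 5x gas at 2 bar A

    # Vaporized liquids expand greatly: at roughly 250:1 for propane,
    # 1 ml of liquid becomes about 250 ml of vapor, which is why the
    # liquid should be vaporized at the sample point.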
39.1.4.4 Typical Equations
1. Time lag:
   t = L/S
   where t = time lag (s), L = line length (m), S = velocity (m/s).
2. General gas law for ideal gases:
   pv/T = (8314 × W)/(10^5 × M)
   where p = pressure (bar), v = volume (l), T = abs. temperature (K), W = mass (g), M = molecular weight.
3. Line volume:
   VI = πd²/4
   where d = internal diameter of tube (mm), VI = internal volume (ml/m).
4. Time lag from line volume and flow:
   t = (6 × L × VI)/(100 × F)
   where t = time lag (s), L = line length (m), VI = internal volume of line (ml/m), F = sample flow rate (l/min).
(For an example of a fast-loop calculation see Section 39.3.2.2 and Table 39.1.)

Table 39.1 Fast-loop calculation for gas oil sample

                   Sample line                  Return line
Customer           Demo.                        Demo.
Date               Jan. 1986                    Jan. 1986
Order No.          ABC 123                      ABC 123
Tag number         Gas oil (sample line)        Gas oil (return line)
Density            825.00 kg/m³                 825.00 kg/m³
Viscosity          3.00 centipoise              3.00 centipoise
Response time      39.92 s                      71.31 s
Flow rate          16.60 l/min (1 m³/h)         16.60 l/min (1 m³/h)
Length             73.00 m                      73.00 m
Diameter (ID)      13.88 mm (1/2 in nominal     18.55 mm (3/4 in nominal
                   bore, extra strong)          bore, extra strong)
Velocity           1.83 m/s                     1.02 m/s
Re                 6979                         5222
Flow               Turbulent                    Turbulent
Friction factor    0.038                        0.040
Delta P            278.54 kPa (2.7854 bar)      67.85 kPa (0.6785 bar)

39.1.4.5 Useful Data
Internal volume per meter (VI) of typical sample lines:
1/8 in OD × 0.035 in wall = 1.5 ml/m
1/4 in OD × 0.035 in wall = 16.4 ml/m
3/8 in OD × 0.035 in wall = 47.2 ml/m
1/2 in OD × 0.065 in wall = 69.4 ml/m
1/2 in nominal bore steel pipe (extra strong) (13.88 mm ID) = 149.6 ml/m
3 mm OD × 1 mm wall = 0.8 ml/m
6 mm OD × 1 mm wall = 12.6 ml/m
8 mm OD × 1 mm wall = 28.3 ml/m
10 mm OD × 1 mm wall = 50.3 ml/m
12 mm OD × 1.5 mm wall = 63.6 ml/m
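Equations 1 to 4, together with the tube volumes above, are sufficient to reproduce the sample-line column of Table 39.1. The Python sketch below is offered as an illustrative aid (not part of the original text):

    import math

    # Fast-loop check using the equations above (gas oil sample line,
    # as in Table 39.1).
    rho = 825.0    # density, kg/m^3
    mu = 3.00e-3   # viscosity, Pa.s (3.00 centipoise)
    L = 73.0       # line length, m
    d_mm = 13.88   # internal diameter, mm (1/2 in nominal bore, extra strong)
    F = 16.60      # flow rate, l/min (1 m^3/h)

    VI = math.pi * d_mm**2 / 4.0    # internal volume, ml/m (equation 3)
    t = 6.0 * L * VI / (100.0 * F)  # time lag, s (equation 4)

    d = d_mm / 1000.0                # diameter, m
    area = math.pi * d**2 / 4.0      # bore area, m^2
    v = (F / 1000.0 / 60.0) / area   # velocity, m/s
    Re = rho * v * d / mu            # Reynolds number (>4000: turbulent)

    print(f"VI = {VI:.1f} ml/m, velocity = {v:.2f} m/s")
    print(f"Re = {Re:.0f}, response time = {t:.2f} s")
    # Prints Re ~ 6979 and a response time of ~39.9 s, matching Table 39.1.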
39.1.5 Construction Materials
Stainless steel (Type 316 or 304) has become one of the most popular materials for the construction of sample systems due to its high resistance to corrosion, low surface adsorption (especially moisture), wide operating temperature range, high-pressure capability, and the fact that it is easily obtainable. Care must be taken when there are materials in the sample which cause corrosion, such as chlorides and sulfides, in which case it is necessary to change to more expensive materials such as Monel. When atmospheric sampling is carried out for trace constituents, Teflon tubing is frequently used, as the surface
adsorption of the compounds is less than that of stainless steel, but it is necessary to check that the compound to be measured does not diffuse through the wall of the tubing. For water analysis (e.g., pH and conductivity) it is possible to use plastic (such as PVC or ABS) components, although materials such as Kunifer 10 (copper 90 percent, nickel 10 percent) are increasing in popularity when chlorides (e.g., salt water) are present, as they are totally immune to chloride corrosion.
39.2 Sample system components
39.2.1 Probes
The most important function of a probe is to obtain the sample from the most representative point (or points) in the process line.

39.2.1.1 Sample Probe
A typical probe of this type for sampling all types of liquid and gases at low pressure is shown in Figure 39.1. It can be seen that the probe, which is made of 21 mm OD (11.7 mm ID) stainless steel pipe, extends to the center of the line being sampled. However, if the latter is more than 500 mm OD, the probe intrusion is kept at 250 mm to avoid vibration in use.

Figure 39.1 Sample probe. Courtesy of Ludlam Sysco.

39.2.1.2 Small-Volume Sample Probe
This probe is used for sampling liquids that must be vaporized or for high-pressure gases (Figure 39.2). Typically, a 6 mm OD × 2 mm ID tube is inserted through the center of a probe of the type described in Section 39.2.1.1. The probe may be withdrawn through the valve for cleaning.

Figure 39.2 Small-volume sample probe. Courtesy of Ludlam Sysco.

39.2.1.3 Furnace Gas Probes
Low-Temperature Probe Figure 39.3 shows a gas-sampling probe with a ceramic outside filter for use in temperatures up to 400°C.

Figure 39.3 Gas-sampling probe. Courtesy of ABB Hartmann and Braun. 1. Gas intake; 2. ceramic intake filter; 3. bushing tube with flange; 4. case with outlet filter; 5. internal screwthread; 6. gas outlet.

Figure 39.4 Water-wash probe. Courtesy of ABB Hartmann and Braun. 1. Water intake; 2. water filter; 3. gas intake; 4. gas-water outlet; 5. connecting hose; 6. gas-water intake; 7. gas outlet; 8. water outlet; 9. water separator.

Water-Wash Probe This probe system is used for sampling furnace gases with high dust content at high temperatures (up to 1600°C) (Figure 39.4). The wet gas-sampling probe is water cooled and self-priming. The water/gas mixture passes from the probe down to a water trap, where the gas and water are separated. The gas leaves the trap at a pressure of approximately 40 mbar, and precautions should be taken
to avoid condensation in the analyzer, either by ensuring that the analyzer is at a higher temperature than the water trap or by passing the sample gas through a sample cooler at 5°C to reduce the humidity. Note that this probe is not suitable for the measurement of water-soluble gases such as CO2, SO2, or H2S.

Steam Ejector The steam ejector illustrated in Figure 39.5 can be used for sample temperatures up to 180°C and, because the condensed steam dilutes the condensate present in the flue gas, the risk of corrosion of the sample lines when the steam/gas sample cools to the dew point is greatly reduced. Dry steam is supplied to the probe and then ejected through a jet situated in the mouth of a Venturi. The flow of steam causes sample gas to be drawn into the probe. The steam and gas pass out of the probe and down the sample line to the analyzer system at a positive pressure. The flow of steam through the sample line prevents the build-up of any corrosive condensate.
39.2.2 Filters
39.2.2.1 “Y” Strainers
“Y” strainers are available in stainless steel, carbon steel, and bronze. They are ideal for preliminary filtering of samples before pumps or at sample points to prevent line scale from entering sample lines. Filtration sizes are available from 75 to 400 μm (200 to 40 mesh). The main application for this type of filter is for liquids and steam.

39.2.2.2 In-Line Filters
This design of filter is normally used in a fast-loop configuration and is self-cleaning (Figure 39.6). Filtration is through a stainless steel or a ceramic element. Solid particles tend to be carried straight on in the sample stream so that maintenance time is very low. Filtration sizes are available from 150 μm (100 mesh) down to 5 μm. It is suitable for use with liquids or gases.

Figure 39.5 Steam ejection probe. Courtesy of Servomex.
Figure 39.6 In-line filter. Courtesy of Microfiltrex.
Figure 39.7 Filter with disposable element. Courtesy of Balston.

39.2.2.3 Filters with Disposable Glass Microfiber Element
These filters are available in a wide variety of sizes and porosities (Figure 39.7). Bodies are available in stainless
steel, aluminum, or plastic. Elements are made of glass microfiber and are bonded with either resin or fluorocarbon. The fluorocarbon-bonded filter is particularly useful for low-level moisture applications because of the low adsorption/desorption characteristic. The smallest filter in this range has an internal volume of only 19 ml and is therefore suitable when a fast response time is required.
39.2.2.4 Miniature In-Line Filters
These are used for filtration of gases prior to pressure reduction and are frequently fitted as the last component in the sample system to protect the analyzer (Figure 39.8).

Figure 39.8 Miniature in-line filter. Courtesy of Nupro.
39.2.2.5 Manual Self-Cleaning Filter
This type of filter works on the principle of edge filtration using discs usually made of stainless steel and fitted with a manual cleaning system (Figure 39.9). The filter is cleaned by rotating a handle which removes any deposits from the filter element while the sample is flowing. The main uses are for filtration of liquids where filter cleaning must be carried out regularly without the system being shut down; it is especially suitable for waxy material which can rapidly clog up normal filter media.

Figure 39.9 Manual self-cleaning filter. Courtesy of AMF CUNO.

39.2.3 Coalescers
Coalescers are a special type of filter for separating water from oil or oil from water (Figure 39.10). The incoming sample flows from the center of a specially treated filter element through to the outside. In so doing, the diffused water is slowed down and coalesced, thus forming droplets which, when they reach the outer surface, drain downwards as the water is denser than the hydrocarbon. A bypass stream is taken from the bottom of the coalescer to remove the water. The dry hydrocarbon stream is taken from the top of the coalescer.

Figure 39.10 Coalescer. Courtesy of Fluid Data.

39.2.4 Coolers
39.2.4.1 Air Coolers
These are usually used to bring the sample gas temperature close to ambient before feeding into the analyzer.

39.2.4.2 Water-Jacketed Coolers
These are used to cool liquid and gas samples and are available in a wide range of sizes (Figure 39.11).

Figure 39.11 Water-jacketed cooler. Courtesy of George E. Lowe. Dimensions are shown in mm.
39.2.4.3 Refrigerated Coolers
These are used to reduce the temperature of a gas to a fixed temperature (e.g., +5°C) in order to condense the water out of a sample prior to passing the gas into the analyzer. Two types are available: one with an electrically driven compressor type refrigerator and another using a Peltier cooling element. The compressor type has a large cooling capacity, whereas the Peltier type, being solid state, needs less maintenance.
39.2.5 Pumps, Gas
Whenever gaseous samples have to be taken from sample points which are below the pressure required by the analyzer, a sample pump of some type is required. The pumps that are available can broadly be divided into two groups:
1. The eductor or aspirator type.
2. The mechanical type.
39.2.5.1 Eductor or Aspirator Type
All these types of pump operate on the principle of using the velocity of one fluid which may be liquid or gas to induce the flow in the sample gas. The pump may be fitted before or after the analyzer, depending on the application. A typical application for a water-operated aspirator (similar to a laboratory vacuum pump) is for taking a sample of flue gas for oxygen measurement. In this case the suction port of the aspirator is connected directly to the probe via
a sample line, and the water/gas mixture from the outlet feeds into a separator arranged to supply the sample gas to the analyzer at a positive pressure of about 300 mm water gauge. In cases where water will affect the analysis it is sometimes possible to place the eductor or aspirator after the analyzer and draw the sample through the system. In these cases the eductor may be supplied with steam, air, or water to provide the propulsive power.
39.2.5.2 Mechanical Gas Pumps
There are two main types of mechanical gas pump available:
1. Rotary pumps.
2. Reciprocating piston or diaphragm pumps.
Rotary Pumps Rotary pumps can be divided into two categories, the rotary piston and the rotating fan types, but the latter is very rarely used as a sampling pump. The rotary piston pump is manufactured in two configurations. The Roots type has two pistons of equal size which rotate inside a housing, with the synchronizing carried out by external gears. The rotary vane type is similar to those used extensively as vacuum pumps. The Roots type is ideal where very large flow rates are required and, because there is a clearance between the pistons and the housing, it is possible to operate them on very dirty gases. The main disadvantage of the rotary vane type is that, because there is contact between the vanes and the housing, lubrication is usually required, and this may interfere with the analysis.
Reciprocating Piston and Diaphragm Pump Of these two types the diaphragm pump has become the most popular. The main reason for this is the improvement in the types of material available for the diaphragms and the fact that there are no piston seals to leak. The pumps are available in a wide variety of sizes, from the miniature units for portable personnel protection analyzers to large heavy-duty industrial types. A typical diaphragm pump (Figure 39.12) for boosting the pressure of the gas into the analyzer could have an all-stainless-steel head with a Terylene-reinforced Viton diaphragm and Viton valves. This gives the pump a very long service life on critical hydrocarbon applications. Many variations are possible; for example, a Teflon-coated diaphragm can be fitted where Viton may not be compatible with the sample, and heaters may be fitted to the head to keep the sample above the dew point. The piston pump is still used in certain cases where high accuracy is required in the flow rate (for example, gas blending) to produce specific gas mixtures. In these cases the pumps are usually operated immersed in oil so that the piston is well lubricated, and there is no chance of gas leaks to and from the atmosphere.
39.2.6 Pumps, Liquid
There are two situations where pumps are required in liquid sample systems:
1. Where the pressure at the analyzer is too low because either the process line pressure is too low, or the pressure drop in the sample line is too high, or a combination of both.
2. When the process sample has to be returned to the same process line after analysis. The two most common types of pumps used for sample transfer are: 1. Centrifugal (including turbine pump) 2. Positive displacement (e.g., gear, peristaltic, etc.).
39.2.6.1 Centrifugal Pumps
The centrifugal and turbine pumps are mainly used when high flow rates of low-viscosity liquids are required. The turbine pumps are similar to centrifugal pumps but have a special impeller device which produces a considerably higher pressure than the same size centrifugal. In order to produce high pressures using a centrifugal pump there is a type available which has a gearbox to increase the rotor speed to above 20,000 rev/min.
39.2.6.2 Positive-Displacement Pumps
Positive-displacement pumps have the main characteristic of being constant flow devices. Some of these are specifically designed for metering purposes where an accurate flow rate must be maintained (e.g., process viscometers). They can take various forms:
1. Gear pump.
2. Rotary vane pump.
3. Peristaltic pump.
Gear Pumps Gear pumps are used mainly on high-viscosity products where the sample has some lubricating properties.
Figure 39.12 Diaphragm pump. Courtesy of Charles Austen Pumps.
They can generate high pressures for low flow rates and are used extensively for hydrocarbon samples ranging from diesel oil to the heaviest fuel oils. Rotary Vane Pumps These pumps are of two types, one having rigid vanes (usually metal) and the other fitted with a rotor made of an elastomer such as nitrile (Buna-n) rubber or Viton. The metal vane pumps have characteristics similar to the gear pumps described above, but can be supplied with a method of varying the flow rate externally while the pump is operating. Pumps manufactured with the flexible vanes (Figure 39.13) are particularly suitable for pumping aqueous solutions and are available in a wide range of sizes but are only capable of producing differential pressures of up to 2 bar. Peristaltic Pumps Peristaltic pumps are used when either accurate metering is required or it is important that no
contamination should enter the sample. As can be seen from Figure 39.14, the only material in contact with the sample is the special plastic tubing, which may be replaced very easily during routine servicing.
39.2.7 Flow Measurement and Indication
Flow measurement on analyzer systems falls into three main categories:
1. Measuring the flow precisely where the accuracy of the analyzer depends on it.
2. Measuring the flow where it is necessary to know the flow rate but it is not critical (e.g., fast loop flow).
3. Checking that there is flow present but measurement is not required (e.g., cooling water for heat exchangers).
It is important to decide which category the flowmeter falls into when writing the specification, as the prices vary over a wide range, depending on the precision required. The types of flowmeter available will be mentioned but not the construction or method of operation, as this is covered in Chapter 1.
39.2.7.1 Variable-Orifice Meters
The variable-orifice meter is extensively used in analyzer systems because of its simplicity, and there are two main types.
Glass Tube This type is the most common, as the position of the float is read directly on the scale attached to the tube, and it is available calibrated for liquids or gases. The high-precision versions are available with an accuracy of ±1 percent full-scale deflection (FSD), whereas the low-priced units have a typical accuracy of ±5 percent FSD.
Metal Tube The metal tube type is used mainly on liquids for high-pressure duty or where the liquid is flammable or hazardous. A good example is the fast loop of a hydrocarbon analyzer. The float has a magnet embedded in it, and the position is detected by an external follower system. The accuracy of metal tube flowmeters varies from ±10 percent FSD to ±2 percent FSD, depending on the type and whether individual calibration is required.
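When comparing these figures it is worth remembering that an accuracy quoted as a percentage of FSD corresponds to a much larger percentage of the actual reading at low flow. A brief Python sketch (illustrative, not from the original text):

    # Error expressed as % FSD becomes a larger % of reading at low flow.
    def error_of_reading(accuracy_pct_fsd, reading_pct_of_scale):
        """Return the worst-case error as a percentage of the actual reading."""
        return accuracy_pct_fsd * 100.0 / reading_pct_of_scale

    # A +/-2% FSD metal-tube meter read at 25% of scale:
    print(error_of_reading(2.0, 25.0))   # 8.0 -> +/-8% of reading
    # The same meter read at full scale:
    print(error_of_reading(2.0, 100.0))  # 2.0 -> +/-2% of reading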
39.2.7.2 Differential-Pressure Devices
Figure 39.13 Flexible impeller pump. Courtesy of ITT Jabsco. (a) Upon leaving the offset plate the impeller blade straightens and creates a vacuum, drawing in liquid–instantly priming the pump, (b) As the impeller rotates it carries the liquid through the pump from the intake to outlet port, each successive blade drawing in liquid, (c) When the flexible blades again contact the offset plate they bend with a squeezing action which provides a continuous, uniform discharge of liquid.
On sample systems these normally consist of an orifice plate or preset needle valve to produce the differential pressure, and are used to operate a gauge or liquid-filled manometer when indication is required or a differential pressure switch when used as a flow alarm.
39.2.7.3 Spinner or Vane-Type Indicators
In this type the flow is indicated either by the rotation of a spinner or by the deflection of a vane by the fluid. It is ideal
Figure 39.14 Peristaltic pump. Courtesy of Watson-Marlow. The advancing roller occludes the tube which, as it recovers to its normal size, draws in fluid which is trapped by the next roller (in the second part of the cycle) and expelled from the pump (in the third part of the cycle). This is the peristaltic flow-inducing action.
for duties such as cooling water flow, where it is essential to know that a flow is present but the actual flow rate is of secondary importance.
39.2.8 Pressure Reduction and Vaporization
The pressure-reduction stage in a sample system is often the most critical, because not only must the reduced pressure be kept constant, but also provision must be made to ensure that under fault conditions dangerously high pressures cannot be produced. Pressure reduction can be carried out in a variety of ways.
39.2.8.1 Simple Needle Valve
This is capable of giving good flow control if upstream and downstream pressures are constant.
Advantage: Simplicity and low cost.
Disadvantage: Any downstream blockage will allow pressure to rise. They are only practical if downstream equipment can withstand upstream pressure safely.
39.2.8.2 Needle Valve with Liquid-Filled Lute
This combination is used to act as a pressure stabilizer and safety system combined. The maintained pressure will be equal to the liquid head when the needle valve flow is adjusted until bubbles are produced (Figure 39.15). It is essential to choose a liquid that is not affected by the sample gas and also does not evaporate in use and cause a drop in the controlled pressure.

Figure 39.15 Lute-type pressure stabilizer. Courtesy of Ludlam Sysco.

39.2.8.3 Diaphragm-Operated Pressure Controller
These regulators are used when there is either a very large reduction in pressure required or the downstream pressure must be accurately controlled (Figure 39.16). They are frequently used on gas cylinders to provide a controllable low-pressure gas supply.

Figure 39.16 Diaphragm-operated pressure controller. Courtesy of Tescom.

39.2.8.4 Vaporization
There are cases when a sample in the liquid phase at high pressure has to be analyzed in the gaseous phase. The pressure reduction and vaporization can be carried out in a specially adapted diaphragm-operated pressure controller as detailed above, where provision is made to heat the complete unit to replace the heat lost by the vaporization.

39.2.9 Sample Lines, Tube and Pipe Fitting
39.2.9.1 Sample Lines
Sample lines can be looked at from two aspects: first, the materials of construction, which are covered in Section 39.1.5, and, second, the effect of the sample line on the process sample, which is detailed below. The most important consideration is that the material chosen must not change the characteristics of the sample during its transportation to the analyzer. There are two main ways in which the sample line material can affect the sample.

Adsorption and Desorption Adsorption and desorption occur when molecules of gas or liquid are retained and discharged from the internal surface of the sample line material at varying rates. This has the effect of delaying the transport of the adsorbed material to the analyzer and causing erroneous results.
Water and hydrogen sulfide at low levels are two common measurements where this problem is experienced. An example is when measuring water at a level of 10 ppm in a sample stream: copper tubing has an adsorption/desorption rate which is twenty times greater than that of stainless steel tubing, and hence copper tubing would give a very sluggish response at the analyzer.

Where this problem occurs it is possible to reduce the effects in the following ways:
1. Careful choice of sample tube material.
2. Raising the temperature of the sample line.
3. Cleaning the sample line to ensure that it is absolutely free of impurities such as traces of oil.
4. Increasing the sample flow rate to reduce the time the sample is in contact with the sample line material.

Permeability Permeability is the ability of gases to pass through the wall of the sample tubing. Two examples are:
1. Polytetrafluoroethylene (PTFE) tubing is permeable to water and oxygen.
2. Plasticized polyvinyl chloride (PVC) tubing is permeable to the smaller hydrocarbon molecules such as methane.
Permeability can have two effects on the analysis:
1. External gases getting into the sample, such as when measuring low-level oxygen using PTFE tubing. The results would always be high due to the ingress of oxygen from the air.
2. Sample gases passing outwards through the tubing, such as when measuring a mixed hydrocarbon stream using plasticized PVC. The methane concentration would always be too low.

39.2.9.2 Tube, Pipe, and Method of Connection
Definition
1. Pipe is normally rigid, and the sizes are based on the nominal bore.
2. Tubing is normally bendable or flexible, and the sizes are based on the outside diameter and wall thickness.
Typical materials: Metallic: carbon steel, brass, etc. Plastic: UPVC, ABS, etc.
Methods of joining:
Pipe (metallic): 1. Screwed 2. Flanged 3. Welded 4. Brazed or soldered.
Pipe (plastic): 1. Screwed 2. Flanged 3. Welded (by heat or use of solvents).
Tubing (metallic): 1. Welding 2. Compression fitting 3. Flanged.
Tubing (plastic): 1. Compression 2. Push-on fitting (especially for plastic tubing) with hose clip where required to withstand pressure.
General The most popular method of connecting metal tubing is the compression fitting, as it is capable of withstanding pressures up to the limit of the tubing itself, is easily dismantled for servicing, and is obtainable in all the most common materials.
39.3 Typical sample systems
39.3.1 Gases
39.3.1.1 High-Pressure Sample to a Process Chromatograph
The example taken is for a chromatograph analyzing the composition of a gas which is in the vapor phase at 35 bar (Figure 39.17). This is the case described in Section 39.1.4.3, where it is necessary to reduce the pressure of the gas at the sample point in order to obtain a fast response time with a minimum wastage of process gas. The sample is taken from the process line using a low-volume sample probe (Section 39.2.1.2) and then flows immediately into a pressure-reducing valve to drop the pressure to a constant 1.5 bar, which is measured on a local
pressure gauge. A pressure relief valve set to relieve at 4 bar is connected at this point to protect downstream equipment if the pressure-reducing valve fails. After pressure reduction the sample flows in small-bore tubing (6 mm OD) to the main sample system next to the analyzer, where it flows through a filter (such as shown in Section 39.2.2.3) to the sample selection system. The fast loop flows out of the bottom of the filter body, bypassing the filter element, and then through a needle valve and flowmeter to an atmospheric vent on a low-pressure process line. The stream selection system shown in Figure 39.17 is called a block-and-bleed system, and always has two or more three-way valves between each stream and the analyzer inlet. The line between two of the valves on the stream which is not in operation is vented to atmosphere, so guaranteeing that the stream being analyzed cannot be contaminated by any of the other streams. A simple system without a block-and-bleed valve is described in Section 39.3.2.1 below. After the stream-selection system the sample flows through a needle valve, flowmeter, and a miniature in-line filter to the analyzer sample inlet. The analyzer sample outlet on this system flows to the atmospheric vent line.

Figure 39.17 Schematic: high-pressure gas sample to chromatograph. Courtesy of Ludlam Sysco.
39.3.1.2 Furnace Gas Using Steam-Injection Probe Inside the Flue
This system utilizes a Venturi assembly located inside the flue (Figure 39.18). High-pressure steam enters the Venturi via a separate steam tube, and a low-pressure region results inside the flue at the probe head. A mixture of steam and sample gas passes down the sample line. Butyl or EPDM rubber-lined steam hose is recommended for sample lines, especially when high-sulfur fuels are used. This will minimize the effects of corrosion. At the bottom end of the sample line the sample gas is mixed with a constant supply of water. The gas is separated from the water and taken either through a ball valve or a solenoid valve towards the sample loop, which minimizes the dead volume between each inlet valve and the analyzer inlet. The water, dust, etc., passes out of a dip leg (A) and to a drain. It is assumed that the gas leaves the separator saturated with water at the temperature of the water. In the case of a flue gas system on a ship operating, for example, in the Red Sea, this could be at 35°C. The system is designed to remove condensate that may be formed because of lower temperatures existing in downstream regions of the sample system. At the end of the loop there is a second dip leg (B) passing into a separator. A 5 cm water differential pressure is produced by the difference in depth of the two dip legs, so there is always a continuous flow of gas round the loop and out to vent via dip leg (B).
Figure 39.18 Schematic: furnace gas sampling. Courtesy of Servomex.
The gas passes from the loop to a heat exchanger, which is designed so that the gas leaving the exchanger is within 1 K of the air temperature. This means that the gas leaving the heat exchanger can be at 36°C and saturated with water vapor. The gas now passes into the analyzer which is maintained at 60°C. The gas arrives in the analyzer and enters the first chamber, which is a centrifugal separator in a stainless steel block at 60°C. The condensate droplets will be removed at this point and passed down through the bottom end of the separator into a bubbler unit (C). The bubbles in this tube represent the bypass flow. At the same time the gas is raised from 36°C to 60°C inside the analyzer. Gas now passes through a filter contained in the second chamber, the measuring cell, and finally to a second dip leg (D) in the bubbler unit. The flow of gas through the analyzer cell is determined by the difference in the length of the two legs inside the bubbler unit and cannot be altered by the operator. This system has the following operating advantages: 1. The system is under a positive pressure right from inside the flue, and so leaks in the sample line can only allow steam and sample out and not air in. 2. The high-speed steam jet scours the tube, preventing build-up. 3. The steam maintains the whole of the sample probe above the dew point and so prevents corrosive condensate forming on the outside of the probe.
4. The steam keeps the entire probe below the temperature of the flue whenever the temperature of the flue is above the temperature of the steam. 5. The actual sampling system is intrinsically safe, as no electrical pumps are required.
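The small heads quoted in this system follow from the hydrostatic relation p = ρgh. The Python sketch below is an illustrative aid (not part of the original text):

    # Convert a water-gauge head to pressure: p = rho * g * h.
    RHO_WATER = 1000.0  # kg/m^3
    G = 9.81            # m/s^2

    def water_gauge_mbar(head_mm):
        """Pressure (mbar) of a column of water head_mm millimeters high."""
        return RHO_WATER * G * (head_mm / 1000.0) / 100.0  # Pa -> mbar

    print(water_gauge_mbar(50))   # ~4.9 mbar: the 5 cm dip-leg differential
    print(water_gauge_mbar(300))  # ~29 mbar: 300 mm water gauge delivery pressure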
39.3.1.3 Steam Sampling for Conductivity

The steam sample is taken from the process line by means of a special probe and then flows through thick-wall 316 stainless steel tubing to the sample system panel (Figure 39.19). The sample enters the sampling panel through a high-temperature, high-pressure isolating valve and then flows into the cooler, where the steam is condensed and the condensate temperature is reduced to a suitable temperature for the analyzer (typically 30°C). After the cooler, the condensate passes to a pressure-control valve to reduce the pressure to about 1 bar gauge. The temperature and pressure of the sample are then measured on suitable gauges and a pressure-relief valve (set at 2 bar) is fitted to protect downstream equipment from excess pressure if a fault occurs in the pressure-control valve. The constant-pressure, cooled sample passes through a needle valve, flowmeter, and three-way valve into the conductivity cell and then to drain. Facilities are provided for feeding water of known conductivity into the conductivity cell through the three-way valve for calibration purposes. The sample coolers are
normally supplied with stainless steel cooling coils, which are suitable where neither the sample nor the coolant contains appreciable chloride, which can cause stress corrosion cracking. When chlorides are known to be present in the sample or cooling water, cooling coils made of alternative materials resistant to chloride-induced stress corrosion cracking are available.
39.3.2 Liquids

39.3.2.1 Liquid Sample to a Process Chromatograph

The example taken is for a chromatograph measuring butane in gasoline (petrol) (Figure 39.20). The chromatograph in
this case would be fitted with a liquid inject valve so that the sample will remain in the liquid phase at all times within the sample system. In a liquid inject chromatograph the sample flow rate through the analyzer is very low (typically 25 ml/min), so that a fast loop system is essential. The sample flow from the process enters the sample system through an isolating valve, then through a pump (if required) and an in-line filter, from which the sample is taken to the analyzer. After the filter the fast loop flows through a flowmeter followed by a needle valve, then through an isolating valve back to the process. Pressure gauges are fitted, one before the in-line filter and one after the needle valve, so that it is possible at any time to check that the pressure differential is sufficient (usually 1 bar minimum) to force the sample through the analyzer.
Figure 39.19 Schematic: steam sampling for conductivity. Courtesy of Ludlam Sysco.
Figure 39.20 Schematic: liquid sample to process chromatograph. Courtesy of Ludlam Sysco.
The filtered sample flows through small-bore tubing (typically, 3 mm OD) to the sample/calibration selection valves. The system shown is the block-and-bleed configuration as described in Section 39.3.1.1. Where there is no risk of cross contamination the sample stream-selection system shown in the inset of Figure 39.20 may be used. The selected sample flows through a miniature in-line filter (Section 39.2.2.4) to the analyzer, then through the flow control needle valve and non-return valve back to the fast loop return line. When the sample is likely to vaporize at the sample return pressure it is essential to have the flow control needle valve after the flowmeter in both the fast loop and the sample through the analyzer. This is done to avoid the possibility of any vapor flashing off in the needle valve and passing through the flowmeter, which would give erroneous readings. The calibration sample is stored in a nitrogen-pressurized container and may be switched either manually or automatically from the chromatograph controller.
39.3.2.2 Gas Oil Sample to a Distillation Point Analyzer

In this case the process conditions are as follows (Figure 39.21):

Sample tap: Normal pressure: 5 bar g; Normal temperature: 70°C; Sample line length: 73 m
Sample return: Normal pressure: 5 bar g; Return line length: 73 m
This is a typical example of an oil-refinery application where, for safety reasons, the analyzer has to be positioned at the edge of the process area and consequently the sample and return lines are relatively long. Data to illustrate the fast loop calculation, based on equations in Crane's Publication No. 410M, are given in Table 39.1. An electrically driven gear pump positioned immediately outside the analyzer house pumps the sample round the fast loop and back to the return point. The sample from the pump enters the sample system cabinet and flows through an in-line filter, from which the sample is taken to the analyzer, through a needle valve and flowmeter back to the process. The filtered sample then passes through a water-jacketed cooler to reduce the temperature to that required for the coalescer and analyzer. After the cooler the sample is pressure reduced to about 1 bar with a pressure-control valve. The pressure is measured at this point and a relief valve is fitted so that, in the event of the pressure-control valve failing open, no downstream equipment will be damaged. The gas oil sample may contain traces of free water and, as this will cause erroneous readings on the analyzer, it is removed by the coalescer. The bypass sample from the bottom of the coalescer flows through a needle valve and flowmeter to the drain line. The dry sample from the coalescer flows through a three-way ball valve for calibration purposes and then a needle valve and flowmeter to control the sample flow into the analyzer. The calibration of this analyzer is carried out by filling the calibration vessel with sample which has previously been accurately analyzed in the laboratory. The vessel is then pressurized with nitrogen to the same pressure as that set on the pressure-control valve, and the calibration sample is allowed to flow into the analyzer by turning the three-way ball valve to the calibrate position.

Figure 39.21 Schematic: gas oil sample to distillation point analyzer. Courtesy of Ludlam Sysco.
Figure 39.22 Schematic: water sample system for dissolved oxygen analyzer. Courtesy of Ludlam Sysco.
The waste sample from the analyzer has to flow to an atmospheric drain and, to prevent product wastage, it flows into a sample recovery unit along with the sample from the coalescer bypass and the pressure relief valve outlet. The sample recovery unit consists of a steel tank from which the sample is returned to the process intermittently by means of a gear pump controlled by a level switch. An extra level switch is usually fitted to give an alarm if the level rises too high or falls too low. A laboratory sample take-off point is fitted to enable a sample to be taken for separate analysis at any time without interfering with the operation of the analyzer.
39.3.2.3 Water-Sampling System for Dissolved Oxygen Analyzer

Process conditions:

Sample tap: Normal pressure: 3.5 bar; Normal temperature: 140°C; Sample line length: 10 m

This system (Figure 39.22) illustrates a case where the sample system must be kept as simple as possible to prevent degradation of the sample. The analyzer is measuring 0-20 ng/l oxygen and, because of the very low oxygen content, it is essential to avoid places in the system where oxygen can leak in or be retained in a pocket. Wherever possible ball or plug valves must be used, as they leave no dead volumes which are not purged with the sample. The only needle valve in this system is the one controlling the flow into the analyzer, and this must be mounted
with the water flowing vertically up through it so that all air is displaced. The sample line, which should be as short as possible, flows through an isolating ball valve into a water-jacketed cooler to drop the temperature from 140°C to approximately 30°C, and then the sample is flow controlled by a needle valve and flowmeter. In this case, it is essential to reduce the temperature of the water sample before pressure reduction; otherwise, the sample would flash off as steam. A bypass valve to drain is provided so that the sample can flow through the system while the analyzer is being serviced. When starting up an analyzer such as this it may be necessary to loosen the compression fittings a little while the sample pressure is on to allow water to escape, fill the small voids in the fitting, and then finally tighten them up again to give an operational system. An extra unit that is frequently added to a system such as this is automatic shutdown on cooling-water failure to protect the analyzer from hot sample. This unit is shown dotted on Figure 39.22, and consists of an extra valve and a temperature detector, either pneumatically or electronically operated, so that the valve is held open normally but shuts and gives an alarm when the sample temperature exceeds a preset point. The valve would then be reset manually when the fault has been corrected.
References

Cornish, D. C., et al., Sampling Systems for Process Analyzers, Butterworths, London (1981).
Flow of Fluids, Publication No. 410M, Crane Limited (1982).
Marks, J. W., Sampling and Weighing of Bulk Solids, Transtech Publications, Clausthal-Zellerfeld (1985).
Chapter 40
Telemetry

M. L. Sanderson
40.1 Introduction

Within instrumentation there is often a need for telemetry in order to transmit data or information between two geographical locations. The transmission may be required to enable centralized supervisory data logging, signal processing, or control to be exercised in large-scale systems which employ distributed data logging or control subsystems. In a chemical plant or power station these subsystems may be spread over a wide area. Telemetry may also be required for systems which are remote or inaccessible, such as a spacecraft, a satellite, or an unmanned buoy in the middle of the ocean. It can be used to transmit information from the rotating sections of an electrical machine without the need for slip rings. By using telemetry, sensitive signal-processing and recording apparatus can be kept physically remote from hazardous and aggressive environments and can be operated in more closely monitored and controlled conditions. Telemetry has traditionally been provided by either pneumatic or electrical transmission. Pneumatic transmission, as shown in Figure 40.1, has been used extensively in process
instrumentation and control. The measured quantity (pressure, level, temperature, etc.) is converted to a pneumatic pressure, the standard signal ranges being 20-100 kPa gauge pressure (3-15 lb/in² g) and 20-180 kPa (3-27 lb/in² g). The lower limit of pressure provides a live zero for the instrument which enables line breaks to be detected, eases instrument calibration and checking, and provides for improved dynamic response since, when venting to atmospheric pressure, there is still sufficient driving pressure at 20 kPa. The pneumatic signals can be transmitted over distances up to 300 m in 6.35 mm or 9.5 mm OD plastic or metal tubing to a pneumatic indicator, recorder, or controller. Return signals for control purposes are transmitted from the control element. The distance is limited by the speed of response, since the response time quadruples with each doubling of the distance. Pneumatic instrumentation generally is covered at greater length in Chapter 31. Pneumatic instruments are intrinsically safe, and can therefore be used in hazardous areas. They provide protection against electrical power failure, since systems employing air storage or turbine-driven compressors can continue to provide measurement and control during power failure.

Figure 40.1 Pneumatic transmission.
Pneumatic signals also directly interface with control valves which are pneumatically operated and thus do not require the electrical/pneumatic converters required by electrical telemetry systems, although they are difficult to interface to data loggers. Pneumatic transmission systems require a dry, regulated air supply. Condensed moisture in the pipework at subzero temperatures or small solid contaminants can block the small passages within pneumatic instruments and cause loss of accuracy and failure. Further details of pneumatic transmission and instrumentation can be found in Bentley (1983) and Warnock (1985). Increasingly, telemetry in instrumentation is being undertaken using electrical, radio frequency, microwave, or optical fiber techniques. The communication channels used include transmission lines employing two or more conductors which may be a twisted pair, a coaxial cable, or a telephone line physically connecting the two sites; radio frequency (rf) or microwave links which allow the communication of data by modulation of an rf or microwave carrier; and optical links in which the data are transmitted as a modulation of light down a fiber-optic cable. All of these techniques employ some portion of the electromagnetic spectrum, as shown in Figure 40.2. Figure 40.3 shows a complete telemetry system. Signal conditioning in the form of amplification and filtering normalizes the outputs from different transducers and restricts
their bandwidths to those available on the communication channel. Transmission systems can employ voltage, current, position, pulse, or frequency techniques in order to transmit analog or digital data. Direct transmission of analog signals as voltage, current, or position requires a physical connection between the two points in the form of two or more wires and cannot be used over the telephone network. Pulse and frequency telemetry can be used for transmission over both direct links and also for telephone, rf, microwave, and optical links. Multiplexing either on a time or frequency basis enables more than one signal to be transmitted over the same channel. In pulse operation the data are encoded as the amplitude, duration, or position of the pulse or in a digital form. Transmission may be as a baseband signal or as an amplitude, frequency, or phase modulation of a carrier wave. In the transmission of digital signals the information capacity of the channel is limited by the available bandwidth, the power level, and the noise present on the channel. The Shannon-Hartley theorem states that the information capacity, C, in bits/s (bps) for a channel having a bandwidth B Hz and additive Gaussian band-limited white noise is given by:

C = B log₂(1 + S/N)

where S is the average signal power at the output of the channel and N is the noise power at the output of the channel.
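As a worked illustration of the Shannon-Hartley limit, the sketch below evaluates C for a nominal telephone channel; the 2,700 Hz bandwidth and 30 dB signal-to-noise ratio are assumed figures chosen to match the usable 300 Hz to 3 kHz band discussed later, not values from the text:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits/s."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Nominal telephone line: 300 Hz - 3 kHz passband, 30 dB SNR (assumed)
print(f"{channel_capacity(2700, 30):.0f} bits/s")   # about 26,900 bits/s
```

Practical modems fall well short of this theoretical figure, for the reasons given in the next paragraph.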
Figure 40.2 Electromagnetic spectrum.
Figure 40.3 Telemetry system.
This capacity represents the upper limit at which data can be reliably transmitted over a particular channel. In general, because the channel does not have the ideal gain and phase characteristics required by the theorem and also because it would not be practical to construct the elaborate coding and decoding arrangements necessary to come close to the ideal, the capacity of the channel is significantly below the theoretical limit. Channel bandwidth limitations also give rise to bit rate limitations in digital data transmission because of intersymbol interference (ISI), in which the response of the channel to one digital signal interferes with the response to the next. The impulse response of a channel having a limited bandwidth of B Hz is shown in Figure 40.4(a). The response has zeros separated by 1/2B s. Thus for a second impulse transmitted across the channel at a time 1/2B s later there
will be no ISI from the first impulse. This is shown in Figure 40.4(b). The maximum data rate for the channel such that no ISI occurs is thus 2B bps. This is known as the Nyquist rate. Figure 40.4(c) shows the effect of transmitting data at a rate in excess of the Nyquist rate.
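The zero-ISI property at the Nyquist rate can be checked numerically: the ideal band-limited impulse response is a sinc pulse whose zeros fall at every multiple of 1/2B, so pulses spaced 1/2B apart do not interfere at the decision instants. A minimal sketch, with an arbitrary assumed bandwidth B:

```python
import numpy as np

B = 1000.0           # channel bandwidth in Hz (assumed for illustration)
T = 1 / (2 * B)      # Nyquist interval: 2B pulses per second

def impulse_response(t):
    """Ideal band-limited channel response; np.sinc(x) = sin(pi*x)/(pi*x)."""
    return np.sinc(2 * B * t)

# Response of a pulse sent at t = 0, sampled at later decision instants:
instants = np.arange(1, 5) * T
print(impulse_response(instants))            # ~[0, 0, 0, 0]: no ISI at spacing 1/2B
print(impulse_response(instants - 0.3 * T))  # nonzero: signalling faster than 2B bps causes ISI
```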
40.2 Communication channels

40.2.1 Transmission Lines

Transmission lines are used to guide electromagnetic waves, and in instrumentation these commonly take the form of a twisted pair, a coaxial cable, or a telephone line. The primary constants of such lines in terms of their resistance, leakage conductance, inductance, and capacitance are distributed as shown in Figure 40.5. At low frequencies, generally below 100 kHz, a medium-length line may be represented by the circuit shown in Figure 40.6, where RL is the resistance of the wire and CL is the lumped capacitance of the line. The line thus acts as a low-pass filter. The frequency response can be extended by loading the line with regularly placed lumped inductances. Transmission lines are characterized by three secondary constants. These are the characteristic impedance, Z0; the attenuation, α, per unit length of line, which is usually expressed in dB/unit length; and the phase shift, β, which is measured in radians/unit length. The values of Z0, α, and β are related to the primary line constants by:
Z0 = √[(R + jωL)/(G + jωC)]

α = 8.686 {0.5 [√((R² + ω²L²)(G² + ω²C²)) + (RG − ω²LC)]}^1/2 dB/unit length

β = {0.5 [√((R² + ω²L²)(G² + ω²C²)) − (RG − ω²LC)]}^1/2 radians/unit length

Figure 40.4 (a) Impulse response of a bandlimited channel; (b) impulse responses delayed by 1/2B s; (c) impulse responses delayed by less than 1/2B s.
where R is the resistance per unit length, G is the leakage conductance per unit length, C is the capacitance per unit length, and L is the inductance per unit length.
Figure 40.5 Distributed primary constants of a transmission line.
It is necessary to terminate transmission lines with their characteristic impedance if reflection or signal echo is to be avoided. The magnitude of the reflection for a line of characteristic impedance Z0 terminated with an impedance ZT is measured by the reflection coefficient, ρ, given by:

ρ = (ZT − Z0)/(ZT + Z0)
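The secondary constants and the reflection coefficient can be evaluated directly with complex arithmetic: the propagation constant γ = √((R + jωL)(G + jωC)) gives α as its real part (in nepers; multiply by 8.686 for dB) and β as its imaginary part. A minimal sketch, using the 22 gauge twisted-pair primary constants quoted in the following paragraph:

```python
import cmath

def secondary_constants(R, L, G, C, f):
    """Return (Z0, alpha_dB, beta) per km for primary constants given per km."""
    jw = 2j * cmath.pi * f
    z = R + jw * L              # series impedance per km
    y = G + jw * C              # shunt admittance per km
    z0 = cmath.sqrt(z / y)      # characteristic impedance, ohms
    gamma = cmath.sqrt(z * y)   # propagation constant per km
    return z0, 8.686 * gamma.real, gamma.imag

def reflection_coefficient(zt, z0):
    return (zt - z0) / (zt + z0)

# 22 gauge copper pair: R = 100 ohm/km, L = 1 mH/km, G = 1e-5 S/km, C = 0.05 uF/km
z0, alpha_db, beta = secondary_constants(100, 1e-3, 1e-5, 0.05e-6, 100e3)
print(f"Z0 ~ {abs(z0):.0f} ohm, attenuation ~ {alpha_db:.1f} dB/km at 100 kHz")
# ~140 ohm and ~3 dB/km, close to the figures quoted in the next paragraph
print(f"rho for a 600 ohm termination: {abs(reflection_coefficient(600, z0)):.2f}")
```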
Twisted pairs are precisely what they say they are, namely, two insulated conductors twisted together. The conductors are generally copper or aluminum, and plastic is often used as the insulating material. The twisting reduces the effect of inductively coupled interference. Typical values of the primary constants for a 22 gauge copper twisted pair are R = 100 Ω/km, L = 1 mH/km, G = 10⁻⁵ S/km, and C = 0.05 µF/km. At high frequencies the characteristic impedance of the line is approximately 140 Ω. Typical values for the attenuation of a twisted pair are 3.4 dB/km at 100 kHz, 14 dB/km at 1 MHz, and 39 dB/km at 10 MHz. The high-frequency limitation for the use of twisted pairs at approximately 1 MHz occurs not so much as a consequence of attenuation but because of crosstalk caused by capacitive coupling between adjacent twisted pairs in a cable. Coaxial cables, which are used for data transmission at higher frequencies, consist of a central core conductor surrounded by a dielectric material which may be either polyethylene or air. The construction of such cables is shown in Figure 40.7. The outer conductor consists of a solid or braided sheath around the dielectric. In the case of the air dielectric the central core is supported on polyethylene spacers placed uniformly along the line. The outer conductor is usually covered by an insulating coating. The loss at high frequencies in coaxial cable is due to the "skin effect," which forces the current in the central core to flow near to its surface and thus increases the resistance of the conductor. Such cables have a characteristic impedance of between 50 and 75 Ω. The
typical attenuation of a 0.61 cm diameter coaxial cable is 8 dB/100 m at 100 MHz and 25 dB/100 m at 1 GHz. Trunk telephone cables connecting exchanges consist of bunched twisted conductor pairs. The conductors are insulated with paper or polyethylene, the twisting being used to reduce the crosstalk between adjacent conductor pairs. A bunch of twisted cables is sheathed in plastic, and the whole cable is given mechanical strength by binding with steel wire or tape which is itself sheathed in plastic. At audio frequencies the impedance of the cable is dominated by its capacitance and resistance. This results in an attenuation that is frequency dependent and in phase delay distortion, since signals of different frequencies are not transmitted down the cable with the same velocity. Thus a pulse propagated down a cable results in a signal which is not only attenuated (of importance in voice and analog communication) but which is also phase distorted (of importance in digital signal transmission). The degree of phase delay distortion is measured by the group delay dβ/dω. The bandwidth of telephone cables is restricted at low frequencies by the use of ac amplification in the repeater stations used to boost the signal along the line. Loading is used to improve the high-frequency amplitude response of the line. This takes the form of lumped inductances which correct the attenuation characteristics of the line. These leave the line with a significant amount of phase delay distortion and give the line attenuation at high frequencies. The usable frequency band of the telephone line is between 300 Hz and 3 kHz. Figure 40.8 shows typical amplitude and phase or group delay distortions
Figure 40.6 Low-frequency lumped approximation for a transmission line.
Figure 40.7 Coaxial cable.
Figure 40.8 Gain and delay distortion on telephone lines.
relative to 800 Hz for a typical leased line and a line to which equalization or conditioning has been applied. In order to transmit digital information reliably the transmission equipment has to contend with a transmission loss which may be as high as 30 dB; a limited bandwidth caused by a transmission loss which varies with frequency; group delay variations with frequency; echoes caused by impedance mismatching and hybrid crosstalk; and noise which may be either Gaussian or impulsive noise caused by dial pulses, switching equipment, or lightning strikes. Thus it can be seen that the nature of the telephone line causes particular problems in the transmission of digital data. Devices known as modems (MOdulators/DEModulators) are used to transmit digital data along telephone lines. These are considered in Section 40.9.1.
40.2.2 Radio Frequency Transmission

Radio frequency (rf) transmission is widely used in both civilian and military telemetry and can occur from 3 kHz (which is referred to as very low frequency (VLF)) up to as high as 300 GHz (which is referred to as extremely high frequency (EHF)). The transmission of the signal is by means of line-of-sight propagation, ground or surface wave diffraction, ionospheric reflection, or forward scattering (Coates 1982). The transmission of telemetry or data signals is usually undertaken as the amplitude, phase, or frequency modulation of some rf carrier wave. These modulation techniques are described in Section 40.5. The elements of an rf telemetry system are shown in Figure 40.9. The allocation of frequency bands has been internationally agreed upon under the Radio Regulations of the International Telecommunication Union based in Geneva. These regulations were agreed to in 1959 and revised in 1979 (HMSO, 1980). In the UK the Radio Regulatory Division of the Department of Trade approves equipment and issues licenses for the users of radio telemetry links. In the United States, the Federal Communications Commission (FCC) serves the same purpose. In other countries, there is an analogous office. For general-purpose low-power telemetry and telecontrol there are four bands which can be used. These are 0-185 kHz and 240-315 kHz, 173.2-173.35 MHz, and 458.5-458.8 MHz. For high-power private point systems the allocated frequencies are in the UHF band 450-470 MHz. In addition, systems that use the cellular telephony bands are becoming common. For medical and biological telemetry there are three classes of equipment. Class I are low-power devices operating
between 300 kHz and 30 MHz wholly contained within the body of an animal or human. Class II is broad-band equipment operating in the band 104.6-105 MHz. Class III equipment is narrow-band equipment operating in the same frequency band as the Class II equipment. Details of the requirements for rf equipment can be found in the relevant documents cited in the References (HMSO, 1963, 1978, 1979).
40.2.3 Fiber-Optic Communication

Increasingly, in data-communication systems there is a move toward the use of optical fibers for the transmission of data. Detailed design considerations for such systems can be found in Keiser (1983), Wilson and Hawkes (1983), and Senior (1985). As a transmission medium fiber-optic cables offer the following advantages:

1. They are immune to electromagnetic interference.
2. Data can be transmitted at much higher frequencies and with lower losses than twisted pairs or coaxial cables. Fiber optics can therefore be used for the multiplexing of a large number of signals along one cable with greater distances required between repeater stations.
3. They can provide enhanced safety when operating in hazardous areas.
4. Ground loop problems can be reduced.
5. Since the signal is confined within the fiber by total internal reflection at the interface between the fiber and the cladding, fiber-optic links provide a high degree of data security and little fiber-to-fiber crosstalk.
6. The material of the fiber is very much less likely to be attacked chemically than copper-based systems, and it can be provided with mechanical properties which will make such cables need less maintenance than the equivalent twisted pair or coaxial cable.
7. Fiber-optic cables can offer both weight and size advantages over copper systems.
Figure 40.9 RF telemetry system.

40.2.3.1 Optical Fibers

The elements of an optical fiber, as shown in Figure 40.10, are the core material, the cladding, and the buffer coating. The core material is either plastic or glass. The cladding is a material whose refractive index is less than that of the core. Total internal reflection at the core/cladding interface confines the light to travel within the core. Fibers with plastic cores also have plastic cladding. Such fibers exhibit high
Figure 40.10 Elements of an optical fiber.
losses but are widely used for short-distance transmission. Multicomponent glasses containing a number of oxides are used for all but the lowest-loss fibers, which are usually made from pure silica. In low- and medium-loss fibers the glass core is surrounded by a glass or plastic cladding. The buffer coating is an elastic, abrasion-resistant plastic material which increases the mechanical strength of the fiber and provides it with mechanical isolation from geometrical irregularities, distortions, or roughness of adjacent surfaces which could otherwise cause scattering losses when the fiber is incorporated into cables or supported by other structures. The numerical aperture (NA) of a fiber is a measure of the maximum core angle for light rays to be reflected down the fiber by total internal reflection. By Snell's law:

NA = sin θ = √(n1² − n2²)
where n1 is the refractive index of the core material and n2 is the refractive index of the cladding material. Fibers have NAs in the region of 0.15-0.4, corresponding to total acceptance angles of between 16 and 46 degrees. Fibers with higher NA values generally exhibit greater losses and low bandwidth capabilities. The propagation of light down the fibers is described by Maxwell's equations, the solution of which gives rise to a set of bounded electromagnetic waves called the "modes" of the fiber. Only a discrete number of modes can propagate down the fiber, determined by the particular solution of Maxwell's equation obtained when boundary conditions appropriate to the particular fiber are applied. Figure 40.11 shows the propagation down three types of fiber. The larger-core radius multimode fibers are either step index or graded index fibers. In the step index fibers there is a step change in the refractive index at the core/cladding interface. The refractive index of the graded index fiber varies across the core of the fiber. Monomode fibers have a small core radius, which permits the light to travel along only one path in the fiber. The larger-core radii of multimode fibers make it much easier to launch optical power into the fiber and facilitate the connecting of similar fibers. Power can be launched into such a fiber using light-emitting diodes (LEDs), whereas single-mode fibers must be excited with a laser diode. Intermodal dispersion occurs in multimode fibers because each of the modes in the fibers travels at a slightly different velocity. An optical pulse launched into a fiber has its energy
Figure 40.11 Propagation down fibers.
distributed among all its possible modes, and therefore as it travels down the fiber the dispersion has the effect of spreading the pulse out. Dispersion thus provides a bandwidth limitation on the fiber. This is specified in MHz·km. In graded index fibers the effect of intermodal dispersion is reduced over that in step index fibers because the grading bends the various possible light rays along paths of nominally equal delay. There is no intermodal dispersion in a single-mode fiber, and therefore these are used for the highest-capacity systems. The bandwidth limitation for a plastic clad step index fiber is typically 6-25 MHz·km. Employing graded index plastic-clad fibers this can be increased to the range of 200-400 MHz·km. For monomode fibers the bandwidth limitation is typically 500-1500 MHz·km. Attenuation within a fiber, which is measured in dB/km, occurs as a consequence of absorption, scattering, and radiative losses of optical energy. Absorption is caused by extrinsic absorption by impurity atoms in the core and intrinsic absorption by the basic constituents of the core material. One impurity which is of particular importance is the OH (water) ion, and for low-loss materials this is controlled to a concentration of less than 1 ppb. Scattering losses occur as a consequence of microscopic variations in material density or composition, and from structural irregularities or defects introduced during manufacture. Radiative losses occur whenever an optical fiber undergoes a bend having a finite radius of curvature. Attenuation is a function of optical wavelength. Figure 40.12 shows the typical attenuation versus wavelength characteristics of a plastic and a monomode glass fiber. At 0.8 µm the attenuation of the plastic fiber is 350 dB/km and that of the glass fiber is approximately 1 dB/km. The minimum attenuation of the glass fiber is 0.2 dB/km at 1.55 µm. Figure 40.13 shows the construction of the light- and medium-duty optical cables.
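The numerical-aperture relation given earlier in this section can be exercised with representative refractive indices; the values n1 = 1.48 and n2 = 1.46 below are assumed silica-like figures, not values from the text:

```python
import math

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n1^2 - n2^2) from Snell's law at the core/cladding boundary."""
    return math.sqrt(n_core**2 - n_clad**2)

na = numerical_aperture(1.48, 1.46)          # assumed silica-like indices
half_angle = math.degrees(math.asin(na))     # maximum acceptance half-angle
print(f"NA = {na:.2f}, total acceptance angle ~ {2 * half_angle:.0f} degrees")
# NA ~ 0.24 and ~28 degrees, inside the 0.15-0.4 range quoted above
```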
Figure 40.14 Spectral output from a LED.
Figure 40.12 Attenuation characteristics of optical fibers.

Figure 40.15 Spectral output from a laser diode.
Figure 40.13 Light- and medium-duty optical cables.
40.2.3.2 Sources and Detectors

The sources used in optical fiber transmission are LEDs and semiconductor laser diodes. LEDs are capable of launching a power of between 0.1 and 10 mW into the fiber. Such devices have a peak emission wavelength in the near infrared, typically between 0.8 and 1.0 µm. Figure 40.14 shows the typical spectral output from a LED. Limitations on the transmission rates using LEDs occur as a consequence of rise time, typically between 2 and 10 ns, and chromatic dispersion. This occurs because the refractive index of the core material varies with optical wavelength, and therefore the various spectral components of a given mode will travel at different speeds. Semiconductor laser diodes can provide significantly higher power, particularly with low duty cycles, with outputs typically in the region of 1 to 100 mW. Because they couple into the fiber more efficiently, they offer a higher electrical to optical efficiency than do LEDs. The lasing action means that the device has a narrower spectral width compared with a LED, typically 2 nm or less, as shown in Figure 40.15. Chromatic dispersion is therefore less for laser diodes, which also have a faster rise time, typically 1 ns. For digital transmissions of below 50 Mbps LEDs require less complex drive circuitry than laser diodes and require no thermal or optical power stabilization. Both p-i-n (p material-intrinsic-n material) diodes and avalanche photodiodes are used in the detection of the optical
Figure 40.16 LED and p-i-n diode detector for use in a fiber-optic system.
signal at the receiver. In the region 0.8-0.9 µm silicon is the main material used in the fabrication of these devices. The p-i-n diode has a typical responsivity of 0.65 A/W at 0.8 µm. The avalanche photodiode employs avalanche action to provide current gain and therefore higher detector responsivity. The avalanche gain can be 100, although the gain produces additional noise. The sensitivity of the photodetector and receiver system is determined by photodetector noise, which occurs as a consequence of the statistical nature of the production of photoelectrons and of bulk and surface dark currents, together with the thermal noise in the detector resistor and amplifier. For p-i-n diodes the thermal noise of the resistor and amplifier dominates, whereas with avalanche photodiodes the detector noise dominates. Figure 40.16 shows a LED and p-i-n diode detector for use in a fiber-optic system.
40.2.3.3 Fiber-Optic Communication Systems

Figure 40.17 shows a complete fiber-optic communications system. In the design of such systems it is necessary to compute the system insertion loss in order that the system can
Figure 40.17 Fiber-optic communication system.
Figure 40.18 (a) Frequency-division multiplexing; (b) transmission of three signals using FDM.
be operated using the minimum transmitter output flux and minimum receiver input sensitivity. In addition to the loss in the cable itself, other sources of insertion loss occur at the connections between the transmitter and the cable and the cable and the receiver; at connectors joining cables; and at points where the cable has been spliced. The losses at these interfaces occur as a consequence of reflections, differences in fiber diameter, NA, and fiber alignment. Directional couplers and star connectors also increase the insertion loss.
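Insertion-loss budgeting of the kind described is simple addition in decibels: the transmitter must launch at least the receiver sensitivity plus the sum of all the losses. The sketch below totals an illustrative budget; every figure in it (losses, sensitivity, margin) is an assumed example value, not data from the text:

```python
# Illustrative fiber link budget - all values assumed for the example.
losses_db = {
    "transmitter-to-fiber coupling": 1.5,
    "fiber (2 km at 3 dB/km)":       6.0,
    "two connectors (1 dB each)":    2.0,
    "one splice":                    0.5,
    "aging/repair margin":           3.0,
}

receiver_sensitivity_dbm = -30.0        # minimum detectable power (assumed)
total_loss = sum(losses_db.values())
required_tx_dbm = receiver_sensitivity_dbm + total_loss

print(f"Total insertion loss: {total_loss:.1f} dB")
print(f"Minimum transmitter launch power: {required_tx_dbm:.1f} dBm")
```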
40.3 Signal multiplexing

In order to enable several signals to be transmitted over the same medium it is necessary to multiplex the signals. There are two forms of multiplexing: frequency-division multiplexing (FDM) and time-division multiplexing (TDM). FDM splits the available bandwidth of the transmission medium into a series of frequency bands and uses each of
the frequency bands to transmit one of the signals. TDM splits the transmission into a series of time slots and allocates certain time slots, usually on a cyclical basis, for the transmission of one signal. The basis of FDM is shown in Figure 40.18(a). The bandwidth of the transmission medium fm is split into a series of frequency bands, having a bandwidth fch, each one of which is used to transmit one signal. Between these channels there are frequency bands, having bandwidth fg, called “guard bands,” which are used to ensure that there is adequate separation and minimum cross-talk between any two adjacent channels. Figure 40.18(b) shows the transmission of three band-limited signals having spectral characteristics as shown, the low-pass filters at the input to the modulators being used to bandlimit the signals. Each of the signals then modulates a carrier. Any form of carrier modulation can be used, although it is desirable to use a modulation which requires minimum bandwidth. The modulation shown in Figure 40.18(b) is amplitude modulation (see Section 40.5).
685
Chapter | 40 Telemetry
The individually modulated signals are then summed and transmitted. Band-pass filters after reception are used to separate the channels, by providing attenuation which starts in the guard bands. The signals are then demodulated and smoothed. TDM is shown schematically in Figure 40.19. The multiplexer acts as a switch connecting each of the signals in turn to the transmission channel for a given time. In order to recover the signals in the correct sequence it is necessary to employ a demultiplexer at the receiver or to have some means inherent within the transmitted signal to identify its source. If N signals are continuously multiplexed then each one of them is sampled at a rate of 1/N Hz. They must therefore be bandlimited to a frequency of 1/2N Hz if the Shannon sampling theorem is not to be violated. The multiplexer acts as a multi-input, single-output switch, and for electrical signals this can be done by mechanical or electronic switching. For high frequencies electronic multiplexing is employed, with integrated circuit multiplexers which use CMOS or BIFET technologies. TDM circuitry is much simpler to implement than FDM circuitry, which requires modulators, band-pass filters, and demodulators for each channel. In TDM only small errors occur as a consequence of circuit non-linearities, whereas phase and amplitude non-linearities have to be kept small in order to limit intermodulation and harmonic distortion in FDM systems. TDM achieves its low channel crosstalk by using a wideband system. At high transmission rates errors occur in TDM systems due to timing jitter, pulse accuracy, and synchronization problems. Further details of FDM and TDM systems can be found in Johnson (1976) and Shanmugan (1979).
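The cyclic switching that TDM performs is just interleaving: one sample from each of the N channels per frame. A minimal sketch with made-up channel data:

```python
def tdm_multiplex(channels):
    """Interleave equal-length channel sample lists into one frame stream."""
    return [sample for frame in zip(*channels) for sample in frame]

def tdm_demultiplex(stream, n_channels):
    """Recover the individual channels from the interleaved stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

chans = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]   # three signals (made up)
stream = tdm_multiplex(chans)
print(stream)                       # [1, 10, 100, 2, 20, 200, 3, 30, 300]
print(tdm_demultiplex(stream, 3))   # original channels recovered in order
```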
40.4 Pulse encoding
Figure 40.19 Time-division multiplexing.
Pulse code modulation (PCM) is one of the most commonly used methods of encoding analog data for transmission in instrumentation systems. In PCM the analog signal is sampled and converted into binary form by means of an ADC, and these data are then transmitted in serial form. This is shown in Figure 40.20. The bandwidth required for the transmission of a signal using PCM is considerably in excess of the bandwidth of the original signal. If a signal having a bandwidth fd is encoded into an N-bit binary code the minimum bandwidth required to transmit the PCM encoded signal is fd · N Hz, i.e., N times the original signal bandwidth. Several forms of PCM are shown in Figure 40.21. The non-return to zero level (NRZ-L) code is a common code that is easily interfaced to a computer. In the non-return to zero mark (NRZ-M) and the non-return to zero space (NRZ-S) codes level transitions represent bit changes. In bi-phase level (BIΦ-L) a bit transition occurs at the center of every period. One is represented by a "1" level changing to a "0" level at the center transition point, and zero is represented by a "0" level changing to a "1" level. In the bi-phase mark and space codes (BIΦ-M) and (BIΦ-S) a level change occurs at the beginning of each bit period. In BIΦ-M one is represented by a mid-bit transition; a zero has no transition. BIΦ-S is the converse of BIΦ-M. Delay modulation codes DM-M and DM-S have transitions at mid-bit and at the end of the bit time. In DM-M a one is represented by a level change at mid-bit; a zero followed by a zero is represented by a level change after the first zero. No level change occurs if a zero precedes a one. DM-S is the converse of DM-M. Bi-phase codes have a transition at least every bit time which can be used for synchronization, but they require twice the bandwidth of the NRZ-L code. The delay modulation codes offer the greatest bandwidth saving but are more susceptible to error, and are used if bandwidth compression is needed or high signal-to-noise ratio is expected. Alternative forms of encoding are shown in Figure 40.22. In pulse amplitude modulation (PAM) the amplitude of the
Figure 40.20 Pulse code modulation.
Figure 40.22 Other forms of pulse encoding.
Figure 40.21 Types of pulse code modulation.
signal transmitted is proportional to the magnitude of the signal being transmitted, and it can be used for the transmission of both analog and digital signals. The channel bandwidth required for the transmission of PAM is less than that required for PCM, although the effects of ISI are more marked. PAM as a means of transmitting digital data requires more complex decoding schemes, in that it is necessary to discriminate between an increased number of levels. Other forms of encoding which can be used include pulse-width modulation (PWM), otherwise referred to
as pulse-duration modulation (PDM), which employs a constant-height, variable-width pulse with the information being contained in the width of the pulse. In pulse-position modulation (PPM) the position of the pulses corresponds to the width of the pulse in PWM. Delta modulation and sigma-delta modulation use pulse trains, the frequencies of which are proportional to either the rate of change of the signal or the amplitude of the signal itself. For analyses of the above systems in terms of the bandwidth required for transmission, signal-to-noise ratios, and error rate, together with practical details, the reader is directed to Hartley et al. (1967), Cattermole (1969), Steele (1975), and Shanmugan (1979).
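The PCM chain described above, sample, quantize to N bits, serialize, can be sketched in a few lines, together with an NRZ-L and a bi-phase level (BIΦ-L) mapping of the resulting bit stream. Quantizer range and word length here are assumed example values:

```python
def pcm_encode(sample: float, n_bits: int = 8, full_scale: float = 1.0):
    """Quantize a sample in [0, full_scale) to an n-bit binary word (list of bits)."""
    level = min(int(sample / full_scale * (1 << n_bits)), (1 << n_bits) - 1)
    return [(level >> i) & 1 for i in reversed(range(n_bits))]  # MSB first

def nrz_l(bits):
    """NRZ-L: '1' is one level, '0' the other, held for the whole bit period."""
    return [(+1 if b else -1) for b in bits]

def biphase_l(bits):
    """BI-phase-L: a one is a 1-to-0 transition at mid-bit, a zero is 0-to-1."""
    return [half for b in bits for half in ((+1, -1) if b else (-1, +1))]

bits = pcm_encode(0.42, n_bits=4)            # -> [0, 1, 1, 0]
print(bits, nrz_l(bits), biphase_l(bits))
```

Note that the bi-phase stream contains two signal elements per bit, which is the doubled bandwidth requirement mentioned above.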
40.5 Carrier wave modulation

Modulation is used to match the frequency characteristics of the data to be transmitted to those of the transmission channel, to reduce the effects of unwanted noise and interference, and to facilitate the efficient radiation of the signal. These are all effected by shifting the frequency of the data into some frequency band centered around a carrier frequency. Modulation also allows the allocation of specific frequency bands for specific purposes, such as in a FDM system or in rf transmission systems, where certain frequency bands are assigned for broadcasting, telemetry, etc. Modulation can also be used to overcome the limitations of signal-processing equipment in that the frequency of the signal can be shifted into frequency bands where the design of filters or amplifiers is somewhat easier, or into a frequency band that the processing equipment will accept. Modulation can be used to provide the bandwidth against signal-to-noise trade-offs which are indicated by the Hartley-Shannon theorem. Carrier-wave modulation uses the modulation of one of the three parameters of the carrier, namely amplitude, frequency, or phase, and these are all shown in Figure 40.23. The techniques can be used for the transmission of both analog and digital signals. In amplitude modulation the amplitude of the carrier varies linearly with the amplitude of the signal to be transmitted. If the data signal d(t) is represented by a sinusoid d(t) = cos 2πfd t, then in amplitude modulation the carrier wave c(t) is given by:

c(t) = C(1 + m cos 2πfd t) cos 2πfc t

where C is the amplitude of the unmodulated wave, fc its frequency, and m is the depth of modulation, which has a value lying between 0 and 1. If m = 1 then the carrier is said to have 100 percent modulation. The above expression for c(t) can be rearranged as:

c(t) = C cos 2πfc t + (Cm/2)[cos 2π(fc + fd)t + cos 2π(fc − fd)t]
showing that the spectrum of the transmitted signal has three frequency components at the carrier frequency fc and at the sum and difference frequencies (fc + fd) and (fc − fd). If the signal is represented by a spectrum having frequencies up to fd then the transmitted spectrum has a bandwidth of 2fd centered around fc. Thus in order to transmit data using AM a bandwidth equal to twice that of the data is required. As can be seen, the envelope of the AM signal contains the information, and thus demodulation can be effected simply by rectifying and smoothing the signal. Both the upper and lower sidebands of AM contain sufficient amplitude and phase information to reconstruct the data, and thus it is possible to reduce the bandwidth
Figure 40.23 Amplitude, frequency, and phase modulation of a carrier wave.
requirements of the system. Single side-band modulation (SSB) and vestigial side-band modulation (VSM) both transmit the data using amplitude modulation with smaller bandwidths than straight AM. SSB has half the bandwidth of a simple AM system; the low-frequency response is generally poor. VSM transmits one side band almost completely and only a trace of the other. It is very often used in high-speed data transmission, since it offers the best compromise between bandwidth requirements, low-frequency response, and improved power efficiency. In frequency modulation consider the carrier signal c(t) given by:

c(t) = C cos(2πfc t + φ(t))
Then the instantaneous frequency of this signal is given by:

fi(t) = fc + (1/2π)(dφ/dt)

The frequency deviation (1/2π)(dφ/dt) of the signal from the carrier frequency is made to be proportional to the data signal. If the data signal is represented by a single sinusoid of the form:

d(t) = cos 2πfd t

then:

dφ(t)/dt = 2πkf cos 2πfd t

where kf is the frequency deviation constant, which has units of Hz/V. Thus:

c(t) = C cos(2πfc t + 2πkf ∫ d(τ) dτ)

the integral being taken from −∞ to t, and, assuming zero initial phase deviation, the carrier wave can be represented by:

c(t) = C cos(2πfc t + β sin 2πfd t)

where β is the modulation index and represents the maximum phase deviation produced by the data. It is possible to show that c(t) can be represented by an infinite series of frequency components fc ± nfd, n = 1, 2, 3, ..., given by:

c(t) = C Σ Jn(β) cos 2π(fc + nfd)t

the sum being taken over n from −∞ to ∞, where Jn(β) is a Bessel function of the first kind of order n and argument β. Since the signal consists of an infinite number of frequency components, limiting the transmission bandwidth distorts the signal, and the question arises as to what is a reasonable bandwidth for the system to have in order to transmit the data with an acceptable degree of distortion. For β ≪ 1 only J0 and J1 are important, and large β implies large bandwidth. It has been found in practice that if 98 percent or more of the FM signal power is transmitted, then the signal distortion is negligible. Carson's rule indicates that the bandwidth required for FM transmission of a signal having a spectrum with components up to a frequency of fd is given by 2(fD + fd), where fD is the maximum frequency deviation. For narrow-band FM systems having small frequency deviations the bandwidth required is the same as that for AM. Wide-band FM systems require a bandwidth of 2fD. Frequency modulation is used extensively in rf telemetry and in FDM. In phase modulation the instantaneous phase deviation φ is made proportional to the data signal. Thus:

φ = kp cos 2πfd t

and it can be shown that the carrier wave c(t) can be represented by:

c(t) = C cos(2πfc t + β cos 2πfd t)

where β is now given by kp. For further details of the various modulation schemes and their realizations the reader should consult Shanmugan (1979) and Coates (1982).
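The single-tone FM relations above can be checked numerically by integrating the data signal to form the phase, exactly as in the expression for c(t). A minimal sketch; all parameter values are assumed for illustration:

```python
import numpy as np

fc, fd, kf, C = 10_000.0, 100.0, 500.0, 1.0   # carrier, tone, deviation constant (assumed)
fs = 100_000.0                                # sample rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)

d = np.cos(2 * np.pi * fd * t)                # data signal d(t), unit amplitude
phase = 2 * np.pi * kf * np.cumsum(d) / fs    # 2*pi*kf * integral of d(t)
c = C * np.cos(2 * np.pi * fc * t + phase)    # FM carrier

beta = kf / fd            # modulation index: peak deviation fD = kf for unit d(t)
carson_bw = 2 * (kf + fd)                     # Carson's rule, 2*(fD + fd)
print(f"beta = {beta:.1f}, Carson bandwidth ~ {carson_bw:.0f} Hz")   # 5.0, 1200 Hz
```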
40.6 Error detection and correction codes

Errors occur in digital data communications systems as a consequence of the corruption of the data by noise. Figure 40.24 shows the bit error probability as a function of signal-to-noise ratio for a PCM transmission system using NRZ-L coding. In order to reduce the probability of an error occurring in the transmission of the data, bits are added to the transmitted message. These bits add redundancy to the transmitted data, and since only part of the transmitted message is now the actual data, the efficiency of the transmission is reduced. There are two forms of error coding, known as forward error detection and correction coding (FEC), in which the transmitted message is coded in such a way that errors can be both detected and corrected continuously, and automatic repeat request coding (ARQ), in which if an error is detected then a request is sent to repeat the transmission.

Figure 40.24 Bit-error probability for PCM transmission using a NRZ-L code.

In terms of data-throughput rates FEC codes are more efficient than ARQ codes because of the need to retransmit the data in the case of error in an ARQ code, although the equipment required to detect errors is somewhat simpler than that required to correct the errors from the corrupted message. ARQ codes are commonly used in instrumentation systems. Parity-check coding is a form of coding used in ARQ coding in which (n − k) bits are added to the k bits of the data to make an n-bit data system. The simplest form of coding is parity-bit coding, in which the number of added bits is one, and this additional bit is added to the data stream in order to make the total number of ones in the data stream either odd or even. The received data are checked for parity. This form of coding will detect only an odd number of bit errors. More complex forms of coding include linear block codes such as Hamming codes, cyclic codes such as Bose-Chaudhuri-Hocquenghem codes, and geometric codes. Such codes can be designed to detect multiple-burst errors, in which two or more successive bits are in error. In general the larger the number of parity bits, the less efficient the coding is, but the larger are both the maximum number of random errors and the maximum burst length that can be detected. Figure 40.25 shows examples of some of these coding techniques. Further details of coding techniques can be found in Shanmugan (1979), Bowdell (1981), and Coates (1982).
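The parity-bit scheme is easily stated in code. This sketch appends an even-parity bit and shows that a single bit error is caught while a double error escapes, as the "odd number of bit errors" remark above implies:

```python
def add_even_parity(bits):
    """Append one bit so the total number of ones is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the received word still has an even number of ones."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 1, 0])
assert parity_ok(word)

word[2] ^= 1                 # one corrupted bit: detected
print(parity_ok(word))       # False
word[5] ^= 1                 # a second error: parity is restored, error missed
print(parity_ok(word))       # True - even numbers of errors go undetected
```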
40.7 Direct analog signal transmission

Analog signals are rarely transmitted over transmission lines as a voltage since the method suffers from errors due to series and common mode inductively and capacitively coupled interference signals and those due to line resistance. The most common form of analog signal transmission is as current. Current transmission as shown in Figure 40.26 typically uses 0-20 or 4-20 mA. The analog signal is converted to a current at the transmitter and is detected at the receiver either by measuring the potential difference developed across a fixed resistor or using the current to drive an indicating instrument or chart recorder. The length of line over which signals can be transmitted at low frequencies is primarily limited by the voltage available at the transmitter to overcome voltage drop along the line and across the receiver. With a typical voltage of 24 V the system is capable of transmitting the current over several kilometers. The percentage error in a current transmission system can be calculated as 50 × the ratio of the loop resistance in ohms to the total line insulation resistance, also expressed in ohms. The accuracy of current transmission systems is typically ±0.5 percent. The advantage of using 4-20 mA instead of 0-20 mA is that the use of a live zero enables instrument or line faults to
Figure 40.25 Error-detection coding.
be detected. In the 4-20 mA system zero value is represented by 4 mA and failure is indicated by 0 mA. It is possible to use a 4-20 mA system as a two-wire transmission system in which both the power and the signal are transmitted along the same wire, as shown in Figure 40.27. The 4 mA standing current is used to power the remote instrumentation and the transmitter. With 24 V drive the maximum power available to the remote station is 96 mW. Integrated-circuit devices such as the Burr-Brown XTR 100 are available for providing two-wire transmission. This is capable of providing a 4-20 mA output span for an input voltage as small as 10 mV, and is capable of transmitting at frequencies up to 2 kHz over a distance of 600 m. Current transmission cannot be used over the public telephone system because it requires a dc transmission path, and telephone systems use ac amplifiers in the repeater stations. Position telemetry transmits an analog variable by reproducing at the receiver the positional information available at the transmitter. Such devices employ null techniques with either resistive or inductive elements to achieve the position telemetry. Figure 40.28 shows an inductive "synchro." The ac power applied to the transmitter induces EMFs in the three stator windings, the magnitudes of which are dependent upon the position of the transmitter rotor. If the receiver
Figure 40.26 4-20 mA current transmission system.
Figure 40.27 Two-wire transmission system.
rotor is aligned in the same direction as the transmitter rotor then the EMF induced in the stator windings of the receiver will be identical to those on the stator windings of the transmitter. There will therefore be no resultant circulating currents. If the receiver rotor is not aligned to the direction of the transmitter rotor then the circulating currents in the stator windings will be such as to generate a torque which will move the receiver rotor in such a direction as to align itself with the transmitter rotor.
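Returning to the current loop described earlier in this section: the 4-20 mA scaling and live-zero fault check reduce to straight-line arithmetic. A minimal sketch; the 3.8 mA fault threshold is an assumed figure, and the loop-error expression is the "50 × loop resistance / insulation resistance" rule quoted above:

```python
def to_ma(value, lo, hi):
    """Map an engineering value in [lo, hi] onto the 4-20 mA live-zero range."""
    return 4.0 + 16.0 * (value - lo) / (hi - lo)

def from_ma(ma, lo, hi, fault_ma=3.8):
    """Recover the value; below the (assumed) 3.8 mA threshold flag a fault."""
    if ma < fault_ma:
        raise ValueError("line break or instrument failure (current below live zero)")
    return lo + (hi - lo) * (ma - 4.0) / 16.0

def loop_error_percent(r_loop_ohm, r_insulation_ohm):
    """Percentage error = 50 x loop resistance / total line insulation resistance."""
    return 50.0 * r_loop_ohm / r_insulation_ohm

print(to_ma(70.0, 0.0, 100.0))         # 15.2 mA for 70% of span
print(from_ma(15.2, 0.0, 100.0))       # ~70.0
print(loop_error_percent(100, 10e6))   # 0.0005 percent for a well-insulated line
```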
40.8 Frequency transmission

By transmitting signals as a frequency, the amplitude and phase characteristics of the transmission line become less important. On reception the signal can be counted over a fixed period of time to provide a digital measurement. The resolution of such systems will be one count in the total number received. Thus for high resolution it is necessary to count the signal over a long time period, and this method of transmission is therefore unsuitable for rapidly changing or multiplexed signals but is useful for such applications as batch control, where, for example, a totalized value of a variable over a given period is required. Figure 40.29(a) shows a frequency-transmission system. Frequency-transmission systems can also be used in two-wire transmission systems, as shown in Figure 40.29(b), where the twisted pair carries both the power to the remote device and the frequency signal in the form of current modulation. The frequency range of such systems is governed by the bandwidth of the channel over which the signal is to be transmitted, but commercially available integrated circuit V-to-f converters, such as the Analog Devices AD458, convert a 0-10 V dc signal to a frequency in the range 0-10 kHz or 0-100 kHz with a maximum non-linearity of ±0.01 percent of FS output, a maximum temperature coefficient of ±5 ppm/K, a maximum input offset voltage of ±10 mV, and a maximum input offset voltage temperature coefficient of 30 nV/K. The response time is two output pulses plus 2 n. A low-cost f-to-V converter, such as the Analog Devices AD453, has an input frequency range of 0-100 kHz with a variable threshold
Figure 40.28 Position telemetry using an inductive “synchro.”
voltage of between 0 and ±12 V, and can be used with low-level signals as well as high-level inputs from TTL and CMOS. The converter has a full-scale output of 10 V and a non-linearity of less than ±0.008 percent of FS with a maximum temperature coefficient of ±50 ppm/K. The maximum response time is 4 ms.
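The resolution trade-off described above, one count in the total number received, can be made concrete; the 10 kHz full-scale frequency and the gate times in this sketch are assumed example values:

```python
def count_resolution(f_hz: float, gate_s: float) -> float:
    """Fractional resolution of a frequency read by counting over gate_s seconds."""
    total_counts = f_hz * gate_s
    return 1.0 / total_counts      # one count in the total number received

# A 10 kHz full-scale signal (assumed) counted over different gate times:
for gate in (0.01, 0.1, 1.0):
    print(f"{gate:5.2f} s gate -> {count_resolution(10e3, gate):.2%} of reading")
# Longer gates give finer resolution but slower response, hence the suitability
# for totalized (batch) quantities rather than rapidly changing signals.
```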
40.9 Digital signal transmission

Digital signals are transmitted over transmission lines using either serial or parallel communication. For long-distance communication serial communication is the preferred method. The serial communication may be either synchronous or asynchronous. In synchronous communication the data are sent in a continuous stream without stop or start information. Asynchronous communication refers to a mode of communication in which data are transmitted as individual blocks framed by start and stop bits. Bits are also added to the data stream for error detection. Integrated circuit devices known as universal asynchronous receiver transmitters (UARTs) are available for converting parallel data into a serial format suitable for transmission over a twisted pair or coaxial line and for reception of the data in serial format and reconversion to parallel format with parity-bit checking. The schematic diagram for such a device is shown in Figure 40.30. Because of the high capacitance of twisted-pair and coaxial cables the length of line over which standard 74 series TTL can transmit digital signals is limited typically to a length of
Figure 40.29 (a) Frequency-transmission system; (b) two-wire frequency-transmission system.
Figure 40.30 (a) Universal asynchronous receiver transmitter (UART). (b) Serial data format. (c) Transmitter timing (not to scale). (d) Receiver timing (not to scale). (e) Start bit timing.
3 m at 2 Mbit/s. This can be increased to 15 m by the use of open-collector TTL driving a low-impedance terminated line. In order to drive digital signals over long lines, special-purpose line driver and receiver circuits are available. Integrated-circuit driver/receiver combinations such as the Texas Instruments SN75150/SN75152, SN75158/SN75157, and SN75156/SN75157 devices meet the internationally agreed EIA standards RS-232C, RS-422A, and RS-423A, respectively (see Section 40.9.2).
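To make the asynchronous frame format concrete, here is a minimal sketch of UART-style framing: a start bit, eight data bits sent LSB-first, an even-parity bit for error detection, and a stop bit. The helper names are illustrative, not taken from any particular UART device.

```python
# UART-style asynchronous framing: each byte travels between a start
# bit (0) and a stop bit (1), LSB first, with an even-parity bit added.

def frame_byte(byte: int) -> list[int]:
    data = [(byte >> i) & 1 for i in range(8)]    # LSB first
    parity = sum(data) % 2                         # even-parity bit
    return [0] + data + [parity] + [1]             # start, data, parity, stop

def check_frame(bits: list[int]) -> int:
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    data, parity = bits[1:9], bits[9]
    assert (sum(data) + parity) % 2 == 0, "parity error"
    return sum(b << i for i, b in enumerate(data))

frame = frame_byte(0x55)
assert check_frame(frame) == 0x55                  # round-trips correctly
```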
40.9.1 Modems
In order to overcome the limitations of the public telephone lines, digital data are transmitted down these lines by means of a modem. The two methods of modulation used by modems are frequency-shift keying (FSK) and phase-shift keying (PSK). Amplitude-modulation techniques are not used because of the unsuitable response of the line to step changes in amplitude. Modems can be used to transmit information in two directions along a telephone line. Full-duplex operation is transmission of information in both directions simultaneously; half-duplex is the transmission of information in both directions but only in one direction at any one time; and simplex is the transmission of data in one direction only. The principle of FSK is shown in Figure 40.31. FSK uses two different frequencies to represent a 1 and a 0, and this
Figure 40.31 Frequency-shift keying.
can be used for data transmission rates up to 1200 bits/s. The receiver uses a frequency discriminator whose threshold is set midway between the two frequencies. The recommended frequency shift is not less than 0.66 of the modulating frequency. Thus a modem operating at 1200 bits/s has a recommended central frequency of 1700 Hz and a frequency deviation of 800 Hz, with a 1 represented by a frequency of 1300 Hz and a 0 by a frequency of 2100 Hz. At a transmission rate of 200 bits/s it is possible to operate a full-duplex system. At 600 and 1200 bits/s, half-duplex operation is used, incorporating a slow-speed backward channel for supervisory control or low-speed return data. At bit rates above 2400 bits/s the bandwidth and group-delay characteristics of telephone lines make it impossible to transmit the data using FSK. It is then necessary for each signal element to carry more than one bit of information. This is achieved by phase-shift keying (PSK), in which the phase of a constant-amplitude carrier is changed. Figure 40.32(a) shows the principle of PSK and
Figure 40.32 (a) Principle of phase-shift keying; (b) two-, four-, and eight-state shift keying.
Figure 40.32(b) shows how the information content of PSK can be increased by employing two-, four-, and eight-state systems. The number of signal elements per second (referred to as the baud rate) multiplied by the number of bits carried per element (log2 of the number of states) gives the data transmission rate in bits/s. Thus an eight-state PSK operating at a rate of 1200 baud carries three bits per signal element and can transmit 3600 bits/s. For years, 9600 bps was the fastest transmission over telephone cables using leased lines, i.e., lines which are permanently allocated to the user as opposed to switched lines. At the higher data transmission rates it is necessary to apply adaptive equalization of the line to ensure correct operation and also to have built-in error-correcting coding. Details of various schemes for modem operation are to be found in Coates (1982) and Blackwell (1981). The International Telephone and Telegraph Consultative Committee (CCITT) has made recommendations for the mode of operation of modems operating at different rates over telephone lines. These are set out in recommendations V21, V23, V26, V27, and V29, listed in the References. Since the publication of the second edition of this book, much improvement in telephone modems has been made. Typically, today's modems utilize the V90 protocol and operate at maximum speeds of up to 56 kbps.
Figure 40.33 shows the elements and operation of a typical modem. The data set ready (DSR) signal indicates to the equipment attached to the modem that it is ready to transmit data. When the equipment is ready to send data it raises the request to send (RTS) signal. The modem then starts transmitting down the line; the first part of the transmission is used to synchronize the receiving modem. Having given sufficient time for the receiver to synchronize, the transmitting modem sends a clear to send (CTS) signal to the equipment, and the data are then sent. At the receiver the detection of the transmitted signal sends the data carrier detected (DCD) line high, and the signal transmitted is demodulated.
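The relation between baud rate and bit rate is worth a two-line check. A minimal sketch:

```python
# Bit rate of an M-state PSK modem: each signal element carries
# log2(M) bits, so bit rate = baud rate * log2(M).
from math import log2

def bit_rate_bps(baud: float, states: int) -> float:
    return baud * log2(states)

print(bit_rate_bps(1200, 2))   # two-state:   1200 bit/s
print(bit_rate_bps(1200, 4))   # four-state:  2400 bit/s
print(bit_rate_bps(1200, 8))   # eight-state: 3600 bit/s
```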
40.9.2 Data Transmission and Interfacing Standards
To ease the problem of equipment interconnection, various standards have been introduced for serial and parallel data transmission. For serial data transmission between data terminal equipment (DTE), such as a computer or a piece of peripheral equipment, and data communication equipment (DCE), such as a modem, the standards currently in use are the RS-232C standard, produced in the USA by the Electronic Industries Association (EIA) in 1969, and the more recent RS-449 standard with its associated RS-422 and RS-423 standards. The RS-232C standard defines an electromechanical interface by the designation of the pins of a 25-pin plug and socket which are used for providing electrical ground, data interchange, control, and clock or timing signals between the
Figure 40.33 Modem operation.
two pieces of equipment. The standard also defines the signal levels, conditions, and polarity at each interface connection. Table 40.1 gives the pin assignments for the interface, and it can be seen that only pins 2 and 3 are used for data transmission. Logical 1 at the driver is an output voltage between -5 and -15 V, with logical 0 between +5 and +15 V. The receiver detects logical 1 for input voltages more negative than -3 V and logical 0 for voltages more positive than +3 V, thus giving the system a minimum 2 V noise margin. The maximum transmission rate of data is 20,000 bits/s, and the maximum length of the interconnecting cable is limited by the requirement that the receiver should not have more than 2500 pF across it. The length of cable permitted thus depends on its capacitance per unit length. The newer RS-449 interface standard, used for higher data-transmission rates, defines the mechanical characteristics in terms of the pin designations of a 37-pin interface. These are listed in Table 40.2. The electrical
Table 40.1 Pin assignments for RS-232

Pin  Signal nomenclature  Signal abbreviation  Signal description                        Category
1    AA                   -                    Protective ground                         Ground
2    BA                   TXD                  Transmitted data                          Data
3    BB                   RXD                  Received data                             Data
4    CA                   RTS                  Request to send                           Control
5    CB                   CTS                  Clear to send                             Control
6    CC                   DSR                  Data set ready                            Control
7    AB                   -                    Signal ground                             Ground
8    CF                   DCD                  Received line signal detector             Control
9    -                    -                    Reserved for test                         -
10   -                    -                    Reserved for test                         -
11   -                    -                    Unassigned                                -
12   SCF                  -                    Secondary received line signal detector   Control
13   SCB                  -                    Secondary clear to send                   Control
14   SBA                  -                    Secondary transmitted data                Data
15   DB                   -                    Transmission signal element timing        Timing
16   SBB                  -                    Secondary received data                   Data
17   DD                   -                    Received signal element timing            Timing
18   -                    -                    Unassigned                                -
19   SCA                  -                    Secondary request to send                 Control
20   CD                   DTR                  Data terminal ready                       Control
21   CG                   -                    Signal quality detector                   Control
22   CE                   -                    Ring indicator                            Control
23   CH/CI                -                    Data signal rate selector                 Control
24   DA                   -                    Transmit signal element timing            Timing
25   -                    -                    Unassigned                                -
characteristics of the interface are specified by the two other associated standards: RS-422, which refers to communication by means of a balanced differential driver along a balanced interconnecting cable with detection by a differential receiver, and RS-423, which refers to communication by means of a single-ended driver on an unbalanced cable with detection by a differential receiver. These two systems are shown in Figure 40.34. The maximum recommended cable lengths for the balanced RS-422 standard are 4000 ft at 90 kbits/s, 380 ft at 1 Mbit/s, and 40 ft at 10 Mbit/s. For the unbalanced RS-423 standard the limits are 4000 ft at 900 bits/s, 380 ft at 10 kbits/s, and 40 ft at 100 kbits/s. In addition, the RS-485 standard has been developed to permit communication between multiple addressable devices on a multidrop line, with up to 32 devices per line.
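The RS-232 cable limit above is stated as a capacitance budget rather than a length, so the allowed run depends on the cable used. A quick arithmetic sketch, with assumed illustrative cable capacitances:

```python
# RS-232 length budget: the receiver load must not exceed 2500 pF, so
# maximum run = 2500 pF / (cable capacitance per metre).
MAX_LOAD_PF = 2500.0

def max_rs232_length_m(cable_pf_per_m: float) -> float:
    return MAX_LOAD_PF / cable_pf_per_m

print(max_rs232_length_m(50.0))    # 50 pF/m twisted pair -> 50 m
print(max_rs232_length_m(100.0))   # 100 pF/m cable       -> 25 m
```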
For further details of these interface standards the reader is directed to the standards produced by the EIA and IEEE. These are listed in the References. Interfaces are also discussed in Part 5. The IEEE-488 bus (IEEE 1978), often referred to as the HPIB (Hewlett-Packard Interface Bus) or the GPIB (General Purpose Interface Bus), is a standard which specifies a communications protocol between a controller and the instruments connected to the bus. The instruments typically connected to the bus include digital voltmeters, signal generators, frequency meters, and spectrum and impedance analyzers. Up to 15 such instruments may be connected. Devices talk, listen, or do both, and at least one device on the bus must provide control, this usually being a computer. The bus uses 16 signal lines, the pin connections for which are shown in Table 40.3. The signal
Table 40.2 Pin assignments for RS-449

Circuit mnemonic  Circuit name                  Circuit direction  Circuit type
SG                Signal ground                 -                  Common
SC                Send common                   To DCE             Common
RC                Received common               From DCE           Common
IS                Terminal in service           To DCE             Control
IC                Incoming call                 From DCE           Control
TR                Terminal ready                To DCE             Control
DM                Data mode                     From DCE           Control
SD                Send data                     To DCE             Primary channel data
RD                Receive data                  From DCE           Primary channel data
TT                Terminal timing               To DCE             Primary channel timing
ST                Send timing                   From DCE           Primary channel timing
RT                Receive timing                From DCE           Primary channel timing
RS                Request to send               To DCE             Primary channel control
CS                Clear to send                 From DCE           Primary channel control
RR                Receiver ready                From DCE           Primary channel control
SQ                Signal quality                From DCE           Control
NS                New signal                    To DCE             Control
SF                Select frequency              To DCE             Control
SR                Signal rate selector          To DCE             Control
SI                Signal rate indicator         From DCE           Control
SSD               Secondary send data           To DCE             Secondary channel data
SRD               Secondary receive data        From DCE           Secondary channel data
SRS               Secondary request to send     To DCE             Secondary channel control
SCS               Secondary clear to send       From DCE           Secondary channel control
SRR               Secondary receiver ready      From DCE           Secondary channel control
LL                Local loopback                To DCE             Control
RL                Remote loopback               To DCE             Control
TM                Test mode                     From DCE           Control
SS                Select standby                To DCE             Control
SB                Standby indicator             From DCE           Control
levels are TTL and the cable length between the controller and the device is limited to 2 m. The bus can be operated at a frequency of up to 1 MHz. The connection diagram for a typical system is shown in Figure 40.35. Eight lines are used for addresses, program data, and measurement data transfers, three lines are used for the control of data transfers by means of a handshake technique, and five lines are used for general interface management. CAMAC (which is an acronym for Computer Automated Measurement and Control) is a multiplexed interface system which not only specifies the connections and the communications protocol between the modules of the system which act as interfaces between the computer system and peripheral
Figure 40.34 RS-422 and RS-423 driver/receiver systems.
Table 40.3 Pin assignment for IEEE-488 interface

Pin no.  Function             Pin no.  Function
1        DIO 1                13       DIO 5
2        DIO 2                14       DIO 6
3        DIO 3                15       DIO 7
4        DIO 4                16       DIO 8
5        EOI                  17       REN
6        DAV                  18       GND twisted pair with 6
7        NRFD                 19       GND twisted pair with 7
8        NDAC                 20       GND twisted pair with 8
9        IFC                  21       GND twisted pair with 9
10       SRQ                  22       GND twisted pair with 10
11       ATN                  23       GND twisted pair with 11
12       Shield (to earth)    24       Signal ground

DIO = Data Input-Output; EOI = End Or Identify; REN = Remote Enable; DAV = Data Valid; NRFD = Not Ready For Data; NDAC = Not Data Accepted; IFC = Interface Clear; SRQ = Service Request; ATN = Attention; GND = Ground
Figure 40.35 IEEE-488 bus system.
devices, but also stipulates the physical dimensions of the plug-in modules. These modules are typically ADCs, DACs, digital buffers, serial-to-parallel converters, parallel-to-serial converters, and level changers. The CAMAC system offers a 24-bit parallel data highway via an 86-way socket at the rear of each module. Up to twenty-three modules are housed in
a single unit known as a "crate," which additionally houses a controller. The CAMAC system was originally specified for the nuclear industry, and is particularly suited to systems where a large multiplexing ratio is required. Since each module addressed can have up to 16 subaddresses, a crate can have up to 368 multiplexed inputs/outputs. For further details of the CAMAC system the reader is directed to Barnes (1981) and to the CAMAC standards issued by the Commission of the European Communities, given in the References. The S100 bus (also referred to as the IEEE-696 interface (IEEE 1981)) is an interface standard devised for bus-oriented systems and was originally designed for interfacing microcomputer systems. Details of this bus can be found in the References. Two more bus systems are in common usage: USB (Universal Serial Bus) and IEEE 1394 "FireWire." These are high-speed serial buses with addressable nodes and the ability to pass high-bandwidth data. USB is becoming the preferred bus for microcomputer peripherals, while FireWire is becoming preferred for high-bandwidth data communications over short distances, such as testbed monitoring.
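As a present-day illustration of the talker/listener model on the IEEE-488 bus, the sketch below uses the PyVISA library from a controlling computer. The instrument address and the SCPI command strings are assumptions for illustration, not taken from this chapter.

```python
# Illustrative IEEE-488 (GPIB) session: the controller addresses an
# instrument as a listener, sends a query, then reads the reply back.
import pyvisa

rm = pyvisa.ResourceManager()
dvm = rm.open_resource("GPIB0::14::INSTR")    # e.g., a DVM at address 14
print(dvm.query("*IDN?"))                     # identification query
reading = float(dvm.query("MEAS:VOLT:DC?"))   # SCPI dc-volts measurement
print(f"DC volts: {reading:.6f}")
dvm.close()
```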
In addition to telephone modems, new methods of information transfer that have reached widespread use since 1990 include ISDN, the digital subscriber lines ADSL (asymmetric) and SDSL (symmetric), and high-bandwidth cable modems.
References

Barnes, R. C. M., "A standard interface: CAMAC," in Minicomputers: A Handbook for Engineers, Scientists, and Managers (ed. Y. Parker), Abacus, London (1981), pp. 167-187.
Bentley, J., Principles of Measurement Systems, Longman, London (1983).
Blackwell, J., "Long distance communication," in Minicomputers: A Handbook for Engineers, Scientists, and Managers (ed. Y. Parker), Abacus, London (1981), pp. 301-316.
Bowdell, K., "Interface data transmission," in Microcomputers: A Handbook for Engineers, Scientists, and Managers (ed. Y. Parker), Abacus, London (1981), pp. 148-166.
Cattermole, K. W., Principles of Pulse Code Modulation, Iliffe, London (1969).
CCITT, Recommendation V24: List of definitions for interchange circuits between data-terminal equipment and data circuit-terminating equipment, in CCITT, Vol. 8.1, Data Transmission over the Telephone Network, International Telecommunication Union, Geneva (1977).
Coates, R. F. W., Modern Communication Systems, 2nd ed., Macmillan, London (1982).
EEC Commission, CAMAC: A Modular System for Data Handling. Revised Description and Specification, EUR 4100e, HMSO, London (1972).
EEC Commission, CAMAC: Organisation of Multi-Crate Systems. Specification of the Branch Highway and CAMAC Crate Controller Type A, EUR 4600e, HMSO, London (1972).
EEC Commission, CAMAC: A Modular Instrumentation System for Data Handling. Specification of Amplitude Analog Signals, EUR 5100e, HMSO, London (1972).
EIA, Standard RS-232C: Interface between Data Terminal Equipment and Data Communications Equipment Employing Serial Binary Data Interchange, EIA, Washington, D.C. (1969).
EIA, Standard RS-449: General-purpose 37-position and 9-position Interface for Data Terminal Equipment and Data Circuit-terminating Equipment Employing Serial Binary Data Interchange, EIA, Washington, D.C. (1977).
Hartley, G., P. Mornet, F. Ralph, and D. J. Tarron, Techniques of Pulse Code Modulation in Communications Networks, Cambridge University Press, Cambridge (1967).
HMSO, Private Point-to-point Systems Performance Specifications (Nos W. 6457 and W. 6458) for Angle-Modulated UHF Transmitters and Receivers and Systems in the 450-470 Mc/s Band, HMSO, London (1963).
HMSO, Performance Specification: Medical and Biological Telemetry Devices, HMSO, London (1978).
HMSO, Performance Specification: Transmitters and Receivers for Use in the Bands Allowed to Low Power Telemetry in the PMR Service, HMSO, London (1979).
HMSO, International Telecommunication Union World Administrative Radio Conference, 1979, Radio Regulations: Revised International Table of Frequency Allocations and Associated Terms and Definitions, HMSO, London (1980).
IEEE, IEEE-488-1978 Standard Interface for Programmable Instruments, IEEE, New York (1978).
IEEE, IEEE-696-1981 Standard Specification for S-100 Bus Interfacing Devices, IEEE, New York (1981).
Johnson, C. S., "Telemetry data systems," Instrument Technology, 39-53, Aug. (1976); 47-53, Oct. (1976).
Keiser, G., Optical Fiber Communication, McGraw-Hill International, London (1983).
Senior, J., Optical Fiber Communications: Principles and Practice, Prentice-Hall, London (1985).
Shanmugan, S., Digital and Analog Communications Systems, John Wiley, New York (1979).
Steele, R., Delta Modulation Systems, Pentech Press, London (1975).
Warnock, J. D., Section 16.27 in The Process Instruments and Controls Handbook, 3rd ed. (ed. D. M. Considine), McGraw-Hill, London (1985).
Wilson, J., and J. F. B. Hawkes, Optoelectronics: An Introduction, Prentice-Hall, London (1983).
Further Reading

Strock, O. J., Telemetry Computer Systems: An Introduction, Prentice-Hall, London (1984).
Chapter 41
Display and Recording
M. L. Sanderson
41.1 Introduction
Display devices are used in instrumentation systems to provide instantaneous but non-permanent communication of information between a process or system and a human observer. The data can be presented to the observer in either an analog or a digital form. Analog indicators require the observer to interpolate the reading when it occurs between two scale values, which requires some skill on the part of the observer. They are, however, particularly useful for providing an overview of the process or system and an indication of trends when an assessment of data from a large number of sources has to be quickly assimilated. Data displayed in a digital form require little skill from the observer in reading the value of the measured quantity, though any misreading can introduce a large observational error as easily as a small one. Using digital displays it is much more difficult to observe trends within a process or system and to quickly assess, for example, the deviation of the process or system from its normal operating conditions. Hybrid displays incorporating both analog and digital displays combine the advantages of both.
The simplest indicating devices employ a pointer moving over a fixed scale; a moving scale passing a fixed pointer; or a bar graph in which a semi-transparent ribbon moves over a scale. These devices use mechanical or electromechanical means to effect the motion of the moving element. Displays can also be provided using illuminative devices such as light-emitting diodes (LEDs), liquid crystal displays (LCDs), plasma displays, and cathode ray tubes (CRTs). The mechanisms and configurations of these various display techniques are identified in Table 41.1.
Recording enables hard copy of the information to be obtained in graphical or alphanumeric form, or the information to be stored in a format which enables it to be retrieved at a later stage for subsequent analysis, display, or conversion into hard copy. Hard copy of graphical information is made by graphical recorders; x–t recorders enable the relationships between one or more variables and time to be obtained while
x–y recorders enable the relationship between two variables to be obtained. These recorders employ analog or digital drive mechanisms for the writing heads, generally with some form of feedback. The hard copy is provided using a variety of techniques, including ink pens or impact printing on normal paper, or thermal, optical, or electrical writing techniques on specially prepared paper.
Alphanumeric recording of data is provided by a range of printers, including impact printing with cylinder, golf ball, daisywheel, or dot matrix heads; or non-impact printing techniques including ink-jet, thermal, electrical, electrostatic, electromagnetic, or laser printers. Recording for later retrieval can use either magnetic or semiconductor data storage. Magnetic tape recorders are used for the storage of both analog and digital data. Transient/waveform recorders (also called waveform digitizers) generally store their information in semiconductor memory. Data-logger systems may employ both semiconductor memory and magnetic storage on either disc or tape. Table 41.2 shows the techniques and configurations of the commonly used recording systems.
The display and recording of information in an instrumentation system provides the human/machine interface (HMI) between the observer and the system or process being monitored. It is of fundamental importance that the information should be presented to the observer in as clear and unambiguous a way as possible and in a form that is easily assimilated. In addition to the standard criteria for instrument performance such as accuracy, sensitivity, and speed of response, ergonomic factors involving the visibility, legibility, and organization and presentation of the information are also of importance in displays and recorders.
41.2 Indicating devices
In moving-pointer indicator devices (Figure 41.1) the pointer moves over a fixed scale which is mounted vertically or horizontally. The scale may be either straight or an
TABLE 41.1 Commonly used display techniques

Indicating devices
- Moving pointer. Mechanism: mechanical/electromechanical movement of a pointer over a fixed scale. Configurations: horizontal/vertical, straight, arc, circular, or segment scales with edgewise strip, hairline, or arrow-shaped pointers.
- Moving scale. Mechanism: mechanical/electromechanical movement of the scale; indication given by the position of the scale with respect to a fixed pointer. Configurations: moving-dial or moving-drum analog indicators, digital drum indicators.
- Bar graph. Mechanism: indication given by the height or length of a vertical or horizontal column. Configurations: moving column provided by a mechanically driven ribbon or by LED or LCD elements.

Illuminative displays
- Light-emitting diodes. Mechanism: light output provided by recombination electroluminescence in a forward-biased semiconductor diode. Configurations: red, yellow, and green displays configured as lamps, bar graphs, 7- and 16-segment alphanumeric displays, dot matrix displays.
- Liquid crystal displays. Mechanism: modulation of the intensity of transmitted/reflected light by the application of an electric field to a liquid crystal cell. Configurations: reflective or transmissive displays; bar graph, 7-segment, and dot matrix displays, alphanumeric panels.
- Plasma displays. Mechanism: cathode glow of a neon gas discharge. Configurations: Nixie tubes, 7-segment displays, plasma panels.
- CRT displays. Mechanism: conversion into light of the energy of a scanning electron beam by a phosphor. Configurations: monochrome and color tubes, storage tubes, configured as analog, storage, sampling, or digitizing oscilloscopes, VDUs, graphic displays.
arc. The motion is created by mechanical means, as in pressure gauges, or by electromechanical means using a moving coil movement. (For details of such movements the reader is directed to Chapter 20 in Part 3 of this book.) The pointer consists of a knife edge, a line scribed on each side of a transparent member, or a sharp, arrow-shaped tip. The pointers are designed to minimize the reading error when the instrument is read from different angles. For precision work such “parallax errors” are reduced by mounting a mirror behind the pointer. Additional pointers may be provided on the scale. These can be used to indicate the value of a set point or alarm limits. The pointer scales on such instruments should be designed for maximum clarity. BS 3693 sets out recommended scale formats. The standard recommends that the scale base length of an analog indicating device should be at least 0.07 D, where D is the viewing distance. At a distance of 0.7 m, which is the distance at which the eye is in its resting state of accommodation, the minimum scale length should be 49 mm. The reading of analog indicating devices requires the observer to interpolate between readings. It has been demonstrated that observers can subdivide the distance between two scale markings into five, and therefore a scale which is to be read to within 1
percent of full-scale deflection (FSD) should be provided with twenty principal divisions. For electromechanical indicating instruments accuracy is classified by BS 89 into nine ranges, from ±0.05 percent to ±5 percent of FSD. A fast response is not required for visual displays since the human eye cannot follow changes much in excess of 20 Hz. Indicating devices typically provide frequency responses up to 1–2 Hz.
Moving-scale indicators, in which the scale moves past a fixed pointer, can provide indicators with long scale lengths. Examples of these are also shown in Figure 41.1. In the bar graph indicator (Figure 41.2) a semi-transparent ribbon moves over a scale. The top of the ribbon indicates the value of the measured quantity and the ribbon is generally driven by a mechanical lead. Arrays of LEDs or LCDs can be used to provide the solid-state equivalent of the bar graph display.
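The sizing rules quoted above reduce to simple arithmetic. A minimal sketch using the BS 3693 figures already given:

```python
# BS 3693 sizing rules quoted above: minimum scale base length is
# 0.07 * viewing distance, and an observer can subdivide a principal
# division into five parts, so reading to 1% of FSD needs 20 divisions.

def min_scale_length_mm(viewing_distance_mm: float) -> float:
    return 0.07 * viewing_distance_mm

def principal_divisions(resolution_fraction_of_fsd: float) -> int:
    return round(1.0 / (5 * resolution_fraction_of_fsd))

print(min_scale_length_mm(700))   # 49 mm at the 0.7 m resting distance
print(principal_divisions(0.01))  # 20 divisions to read to 1% of FSD
```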
41.3 Light-emitting diodes (LEDs)
These devices, as shown in Figures 41.3(a) and 41.3(b), use recombination (injection) electroluminescence and consist of a forward-biased p–n junction in which the majority carriers
TABLE 41.2 Commonly used recording techniques

- Graphical recorders. Technique: provide hard copy of data in graphical form using a variety of writing techniques, including pen-ink, impact printing, thermal, optical, and electric writing. Configurations: single/multichannel x–t strip chart and circular chart recorders, galvanometer recorders, analog and digital x–y recorders, digital plotters.
- Printers. Technique: provide hard copy of data in alphanumeric form using impact and non-impact printing techniques. Configurations: serial impact printers using cylinder, golf ball, daisywheel, or dot matrix heads; line printers using drum, chain-belt, oscillating bar, comb, and needle printing heads; non-impact printers using thermal, electrical, electrostatic, magnetic, ink-jet, electrophotographic, and laser printing techniques.
- Magnetic recording. Technique: uses the magnetization of magnetic particles on a substrate to store information. Configurations: magnetic tape recorders using direct, frequency modulation, or pulse code modulation techniques for the storage of analog or digital data; spool-to-spool or cassette recorders; floppy or hard discs for the storage of digital data.
- Transient recorders. Technique: use semiconductor memory to store high-speed transient waveforms. Configurations: single/multichannel devices using analog-to-digital conversion techniques; high-speed transient recorders using optical scanning techniques to capture data before transfer to semiconductor memory.
- Data loggers. Technique: data-acquisition systems having functions programmed from the front panel. Configurations: configured for a range of analog or digital inputs with limited logical or mathematical functions; internal display using LED, LCD, or CRT; hard copy provided by dot matrix or thermal or electrical writing techniques; data storage provided by semiconductor or magnetic storage using tape or disc.
from both sides of the junction cross the internal potential barrier and enter the material on the other side, where they become minority carriers, thus disturbing the local minority carrier population. As the excess minority carriers diffuse away from the junction, recombination occurs. Electroluminescence takes place if the recombination results in radiation. The wavelength of the emitted radiation is inversely proportional to the band gap of the material, and therefore for the radiation to be in the visible region the band gap must be greater than 1.8 eV. GaAs0.6P0.4 (gallium arsenide phosphide) emits red light at 650 nm. By increasing the proportion of phosphorus and doping with nitrogen, the wavelength of the emitted light is reduced. GaAs0.35P0.65N provides a source of orange light, whilst GaAs0.15P0.85N emits yellow light at 589 nm. Gallium phosphide doped with nitrogen radiates green light at 570 nm.
Although the internal quantum efficiencies of LED materials can be high, the external quantum efficiencies may be much lower. This is because the materials have a high refractive index and therefore a significant proportion of the emitted radiation strikes the material/air interface beyond the critical angle and is totally internally reflected. This is usually overcome by encapsulating the diode in a hemispherical dome made of epoxy resin, as shown in Figure 41.3(c). The construction of an LED element in a seven-segment display is shown in Figure 41.3(d). The external quantum efficiencies of green diodes tend to be somewhat lower than those of red diodes, but, for the same output power, because of the sensitivity of the human eye the green diode has a higher luminous intensity. Typical currents required for LED elements are in the range of 10–100 mA. The forward diode drop is in the range
Figure 41.1 Moving-pointer and moving-scale indicators.
Figure 41.3 Light-emitting diodes.
Figure 41.2 Bar-graph indicator.
of 1.6–2.2 V, dependent on the particular device. The output luminous intensities of LEDs range from a few to over a hundred millicandela, and their viewing angle can be up to ±60°. The life expectancy of an LED display is twenty years, over which time it is expected that there will be a 50 percent reduction in output power. LEDs are available as lamps, seven- and sixteen-segment alphanumeric displays, and in dot matrix format (Figure 41.3(e)). Alphanumeric displays are provided with on-board decoding logic which enables the data to be entered in the form of ASCII or hexadecimal code.
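From the drive figures just quoted (10–100 mA at a forward drop of 1.6–2.2 V), sizing a current-limiting series resistor is one line of Ohm's law. A sketch with assumed example values:

```python
# Series-resistor sizing for an LED: R = (Vs - Vf) / If, using the
# forward-drop and current ranges quoted in the text.

def led_resistor_ohms(v_supply: float, v_forward: float, i_led_a: float) -> float:
    return (v_supply - v_forward) / i_led_a

print(led_resistor_ohms(5.0, 2.0, 0.020))   # 5 V supply, 20 mA -> 150 ohm
print(led_resistor_ohms(5.0, 1.6, 0.010))   # 5 V supply, 10 mA -> 340 ohm
```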
41.4 Liquid crystal displays (LCDs)
These displays are passive and therefore emit no radiation of their own but depend upon the modulation of reflected or transmitted light. They are based upon the optical properties of a large class of organic materials known as liquid crystals. Liquid crystals have rod-shaped molecules which, even in their liquid state, can take up certain defined orientations relative to each other and also with respect to a solid interface. LCDs commonly use nematic liquid crystals such as p-azoxyanisole, in which the molecules are arranged with their long axes approximately parallel, as shown in Figure 41.4(a). They are highly anisotropic; that is, they have different optical or other properties in different directions. At a solid–liquid interface the ordering of the crystal can be either homogeneous (in which the molecules are parallel to the interface) or homeotropic (in which the molecules are aligned normal to the interface), as shown in Figure 41.4(b). If a liquid crystal confined between two plates is in a homogeneous state without an applied electric field, then when an electric field is applied the molecules will align themselves with this field in order to minimize their energy. As shown in Figure 41.4(c), if the E field is less than some critical value Ec, the ordering is not affected. If E ≥ Ec, the molecules farthest away from the interfaces are realigned. For values of the electric field E ≫ Ec most of the molecules are realigned. A typical reflective LCD consists of a twisted nematic cell in which the walls of the cell are such as to produce a homogeneous ordering of the molecules but rotated by 90° (Figure 41.4(d)). Because of the birefringent nature of the crystal, light polarized on entry in the direction of alignment
of the molecules at the first interface will leave the cell with its direction of polarization rotated by 90°. On the application of an electric field in excess of the critical field (for a cell of thickness 10 μm a typical critical voltage is 3 V) the molecules align themselves in the direction of the applied field, and the polarized light then undergoes no rotation. If the cell is sandwiched between two pieces of polaroid with no applied field, both polarizers allow light to pass through them, and therefore incident light is reflected by the mirror and the display appears bright. With the application of the voltage the polarizers are effectively crossed, and therefore no light is reflected and the display appears dark. Contrast ratios of 150:1 can be obtained. DC voltages are generally not used in LCDs because of electrochemical reactions. AC waveforms are employed, with the cell having a response corresponding to the rms value of the applied voltage. The frequency of the ac is in the range 25 Hz to 1 kHz. Power consumption of LCD displays is low, with a typical current of 0.3–0.5 μA at 5 V. The optical switching time is typically 100–150 ms. They can operate over a temperature range of -10°C to +70°C. Polarizing devices limit the maximum light which can be reflected from the cell. The viewing angle is also limited to approximately ±45°. This can be improved by the use of cholesteric liquid crystals in conjunction with dichroic dyes which absorb light whose direction of polarization is parallel to their long axis. Such displays do not require polarizing filters and are claimed to provide a viewing angle of ±90°. LCDs are available in bar graph, seven-segment, dot matrix, and alphanumeric display panel configurations. The availability of inexpensive color LCD panels for laptop computers has made possible a whole new class of recording instruments, in which the LCD replaces the pointer-and-paper chart recorder as a virtual instrument.
Figure 41.4 Liquid crystal displays. (a) Ordering of nematic crystals; (b) ordering at liquid crystal/solid interface; (c) application of an electric field to a liquid crystal cell; (d) reflective LCD using a twisted nematic cell.
41.5 Plasma displays
In a plasma display, as shown in Figure 41.5(a), a glass envelope filled with neon gas, with added argon, krypton, and mercury to improve the discharge properties of the display, is provided with an anode and a cathode. The gas is ionized by applying a voltage of approximately 180 V between the anode and cathode. When ionization occurs there is an orange-yellow glow in the region of the cathode. Once ionization has taken place, the voltage required to sustain the ionization can be reduced to about 100 V. The original plasma display was the Nixie tube, which had a separate cathode for each of the figures 0-9. Plasma displays are now available as seven-segment displays or plasma panels, as shown in Figures 41.5(a) and 41.5(b). The seven-segment display uses a separate cathode for each of the segments. The plasma panel consists, for example, of 512 × 512 vertical and horizontal x and y electrodes. By applying voltages to specific x and y electrode pairs, such that the sum of the voltages at the spot specified by the (x, y) address exceeds the ionization potential, a glow is produced at that spot. The display is operated by applying continuous ac voltages equivalent to the sustaining potential to all the electrodes. Ionization is achieved by pulsing selected pairs, and the glow at a particular location is quenched by applying antiphase voltages to the electrodes. The display produced by plasma discharge has a wide viewing angle capability, does not need back lighting, and is flicker free. A typical 50 mm seven-segment display has a power consumption of approximately 2 W.
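The coincident-voltage addressing scheme just described can be captured in a few lines. A minimal sketch, with assumed illustrative voltages (the sustaining waveform, around 100 V here, keeps already-lit cells glowing):

```python
# x-y plasma panel addressing: a cell fires only where the two
# half-select voltages sum past the ignition threshold.
V_IGNITE, V_HALF = 180.0, 95.0     # assumed illustrative values

def fires(v_x: float, v_y: float) -> bool:
    return v_x + v_y >= V_IGNITE

print(fires(V_HALF, 0.0))      # half-select only: False (stays dark)
print(fires(V_HALF, V_HALF))   # row and column selected: True (95+95=190)
```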
Figure 41.6 Cathode ray tube. (a) Electrostatic focusing and deflection; (b) electromagnetic focusing and deflection.
Figure 41.5 Plasma display. (a) Seven-segment display; (b) plasma panel.
41.6 Cathode ray tubes (CRTs)
CRTs are used in oscilloscopes, which are commonly employed for the display of repetitive or transient waveforms. They also form the basis of visual display units (VDUs) and graphic display units. The display is provided by using a phosphor which converts the energy from an electron beam into light at the point at which the beam impacts on the phosphor. A CRT consists of an evacuated glass envelope in which the air pressure is less than 10⁻⁴ Pa (Figure 41.6). The thermionic cathode of nickel coated with oxides of barium, strontium, and calcium is indirectly heated to approximately 1100 K and thus gives off electrons. The number of electrons which strike the screen, and hence the brightness of the display, is adjusted by means of the potential applied to a control grid surrounding the cathode. The control grid, which has a pinhole through which the electrons can pass, is held at a potential of between 0 and 100 V negative with respect to the cathode. The beam of electrons passes through the first anode A1, which is typically held at a potential of 300 V positive with respect to the cathode, before being focused, accelerated, and deflected. Focusing and deflection of the beam can be by either electrostatic or magnetic means. In the electrostatic system shown in Figure 41.6(a) the cylindrical focusing anode A2,
which consists of disc baffles having an aperture in the center of them, is between the first anode A1 and the accelerating anode A3, which is typically at a potential of 3–4 kV with respect to the cathode. Adjusting the potential on A2 with respect to the potentials on A1 and A3 focuses the beam such that the electrons then travel along the axis of the tube. In a magnetic focusing system magnetic field coils around the tube create a force on the electrons, causing them to spiral about the axis and also inwardly. By employing magnetic focusing it is possible to achieve a smaller spot size than with electrostatic focusing. Deflection of the electron beam in a horizontal and vertical direction moves the position of the illuminated spot on the screen. Magnetic deflection provides greater deflection capability and is therefore used in CRTs for television, alphanumeric, and graphical displays. It is slower than electrostatic deflection, which is the deflection system commonly used in oscilloscopes. Acceleration of the beam is by either the use of the accelerating electrode A3 (such tubes are referred to as monoaccelerator tubes) or by applying a high potential (10–14 kV) on to a post-deflection anode situated close to the CRT screen. This technique, which is known as post-deflection acceleration (PDA), gives rise to tubes with higher light output and increased deflection sensitivity. The phosphor coats the front screen of the CRT. A range of phosphors are available, the choice of which for a particular situation depends on the color and efficiency of the luminescence required and its persistence time, that is, the time for which the afterglow continues after the electron beam has been removed. Table 41.3 provides the characteristics of some commonly used phosphors.
TABLE 41.3 Characteristics of commonly used phosphors (Courtesy of Tektronix)

Phosphor  Fluorescence  Relative luminance(a) (%)  Relative photographic writing speed(b) (%)  Decay         Relative burn resistance  Comments
P1        Yellow-green  50                         20                                          Medium        Medium                    In most applications replaced by P31
P4        White         50                         40                                          Medium/short  Medium/high               Television displays
P7        Blue          35                         75                                          Long          Medium                    Long-decay, double-layer screen
P11       Blue          15                         100                                         Medium/short  Medium                    For photographic applications
P31       Green         100                        50                                          Medium/short  High                      General purpose; brightest available phosphor

(a) Measured with a photometer and luminance probe incorporating a standard eye filter. Representative of 10 kV aluminized screens with P31 phosphor as reference.
(b) P11 as reference with Polaroid 612 or 106 film. Representative of 10 kV aluminized screens.
41.6.1 Color Displays
Color displays are provided using a screen that employs groups of three phosphor dots. The material of the phosphor for each of the three dots is chosen such that it emits one of the primary colors (red, green, blue). The tube is provided with a shadow mask, consisting of a metal screen with holes in it, placed near the screen, and three electron guns, as shown in Figure 41.7(a). The guns are inclined to each other so that the beams coincide at the plane of the shadow mask. After passing through the shadow mask the beams diverge and, on hitting the screen, energize only one of the phosphors at that particular location. The effect of a range of colors is achieved by adjusting the relative intensities of the three primary colors via the electron beam currents. The resolution of color displays is generally lower than that of monochrome displays because the technique requires three phosphors to produce the effect of color. Alignment of the shadow mask with the phosphor screen is also critical, and the tube is sensitive to interfering magnetic fields. An alternative method of providing a color display is to use penetration phosphors, which make use of the fact that the depth of penetration of the electron beam is dependent on beam energy. By using two phosphors, one of which has a non-luminescent coating, it is possible, by adjusting the energy of the electron beam, to change the color of the display. For static displays having fixed colors this technique provides good resolution capability and is reasonably insensitive to interfering magnetic fields.
Figure 41.7 Color displays. (a) Shadow mask color tube; (b) liquid crystal color display.
Liquid crystal color displays employ a single phosphor having two separate emission peaks (Figure 41.7(b)). One is orange and the other blue-green. The color polarizers orthogonally polarize the orange and blue-green components of the CRT’s emission, and the liquid crystal cell rotates the polarized orange and blue-green information into the transmission axis of the linear polarizer and thus selects the color of the display. Rotation of the orange and blue-green information is performed in synchronism with the information displayed by the sequentially addressed CRT. Alternate displays of information viewed through different colored polarizing filters are integrated by the human eye to give color images. This sequential technique can be used to provide all mixtures of the two primary colors contained in the phosphor.
41.6.2 Oscilloscopes
Oscilloscopes can be broadly classified as analog, storage, sampling, or digitizing devices. Figure 41.8 shows the elements of a typical analog oscilloscope. The signals to be observed as a function of time are applied to the vertical (Y) plates of the oscilloscope. The input stage of the vertical system matches the voltage levels of these signals to the drive requirements of the deflection plates of the oscilloscope, which will have a typical deflection sensitivity of 20 V/cm. The coupling of the input stage can be either dc or ac. The important specifications for the vertical system include bandwidth and sensitivity. The bandwidth is generally specified as the highest frequency which can be displayed with less than 3 dB loss in amplitude compared with its value at low frequencies. The rise time, Tr, of an oscilloscope to a step input is related to its bandwidth, B, by Tr = 0.35/B. In order to measure the rise time of a waveform with an accuracy of better than 2 percent it is necessary that
the rise time of the oscilloscope should be less than 0.2 of that of the waveform. Analog oscilloscopes are available having bandwidths of up to 1 GHz.
The deflection sensitivity of an oscilloscope, commonly quoted as mV/cm, mV/div, or μV/cm, μV/div, gives a measure of the smallest signal the oscilloscope can measure accurately. Typically, the highest sensitivity for the vertical system corresponds to 10 μV/cm. There is a trade-off between bandwidth and sensitivity, since the noise levels generated either by the amplifier itself or by pickup at the amplifier are greater in wideband measurements. High-sensitivity oscilloscopes may provide bandwidth-limiting controls to improve the display of low-level signals at moderate frequencies.
For comparison purposes, simultaneous viewing of multiple inputs is often required. This can be provided by the use of dual-trace or dual-beam methods. In the dual-trace method the beam is switched between the two input signals. Alternate sweeps of the display can be used for one of the two signals, or, in a single sweep, the display can be chopped between the two signals. Chopping can occur at frequencies up to 1 MHz. Both these methods have limitations for measuring fast transients, since in the alternate method the transient may occur on one channel whilst the other is being displayed, and the chopping rate of the chopped display limits the frequency range of the signals that can be observed. Dual-beam oscilloscopes use two independent electron beams and vertical deflection systems. These can be provided with either a common horizontal system or two independent horizontal systems to enable the two signals to be displayed at different sweep speeds. By combining a dual-beam system with chopping it is possible to provide systems with up to eight inputs.
Other functions which are commonly provided on the Y inputs include facilities to invert one channel or to take the
Figure 41.8 Analog oscilloscope.
difference of two input signals and display the result. This enables unwanted common-mode signals present on both inputs to be rejected.
For the display of signals as a function of time, the horizontal system provides a sawtooth voltage to the X plates of the oscilloscope together with the blanking waveform necessary to suppress the flyback. The sweep speed required is determined by the waveform being observed. Sweep rates corresponding to as little as 200 ps/div can be obtained. In time measurements the sweep can be either continuous, providing a repetitive display, or single shot, in which the horizontal system is triggered to provide a single sweep. To provide a stable display in the repetitive mode, the display is synchronized either internally from the vertical amplifier or externally using the signal triggering or initiating the signal being measured. Most oscilloscopes provide facilities for driving the X plates from an external source to enable, for example, a Lissajous figure to be displayed. Delayed sweep enables the sweep to be initiated some time after the trigger. This delayed sweep facility can be used in conjunction with the timebase to provide expansion of one part of the waveform.
The trigger system allows the user to specify a position on one or more of the input signals, in the case of internal triggering, or on a trigger signal, in the case of external triggering, at which the sweep is to be initiated. Typical facilities provided by trigger level controls are auto (which triggers at the mean level of the waveform) and trigger level and direction control, i.e., triggering occurs at a particular signal level for positive-going signals.
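The bandwidth and rise-time relations above translate directly into numbers. A minimal sketch using the figures quoted (Tr = 0.35/B and the factor-of-five rule for 2 percent accuracy):

```python
# Oscilloscope bandwidth/rise-time relations: Tr = 0.35 / B, and for
# ~2% rise-time accuracy the scope's rise time should be under 0.2x
# the waveform's rise time.

def scope_rise_time_s(bandwidth_hz: float) -> float:
    return 0.35 / bandwidth_hz

def min_bandwidth_hz(waveform_rise_s: float) -> float:
    return 0.35 / (0.2 * waveform_rise_s)

print(scope_rise_time_s(100e6))   # 100 MHz scope -> 3.5 ns rise time
print(min_bandwidth_hz(10e-9))    # 10 ns edge needs >= 175 MHz bandwidth
```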
41.6.3 Storage Oscilloscopes
Storage oscilloscopes are used for the display of signals that are either too slow or too fast and infrequent for a conventional oscilloscope. They can also be used for comparing events occurring at different times. The techniques used in storage oscilloscopes include bistable, variable persistence, fast transfer storage, or a combination of fast transfer storage with bistable or variable persistence storage. The storage capability is specified primarily by the writing speed, usually expressed in cm/μs or div/μs. The phosphor in a bistable CRT has two stable states: written and unwritten. As shown in Figure 41.9(a), when writing, the phosphor is charged positive in those areas where it is written on. The flood gun electrons hit the unwritten area but are too slow to illuminate it. However, in the written areas the positive charge of the phosphor attracts the electrons and provides them with sufficient velocity to keep the phosphor lit and also to knock out sufficient secondaries to keep the area positive. A bistable tube displays stored data at one level of intensity. It provides a bright, long-lasting display (up to 10 h), although with less contrast than other techniques. It has a slow writing speed with a typical value
Figure 41.9 Storage CRTs. (a) Bistable; (b) variable persistence.
of 0.5 cm/μs. Split-screen operation, in which the information on one part of the screen is stored whilst the remainder has new information written onto it, can be provided using a bistable CRT. The screen of the variable persistence CRT, shown in Figure 41.9(b), is similar to that of a conventional CRT. The storage screen consists of a fine metal mesh coated with a dielectric. A collector screen and an ion-repeller screen are located behind the storage screen. When the writing gun is employed, a positive trace is written onto the storage screen by removing electrons from its dielectric. These electrons are collected by the collector screen. The positively charged areas of the dielectric are transparent to low-velocity electrons. The flood gun sprays the entire screen with low-velocity electrons. These penetrate the transparent areas but not the other areas of the storage screen. The storage screen thus becomes a stencil for the flood gun. The charge on the mesh can be controlled, altering the contrast between the trace and the background and also modifying how long the trace is stored. Variable persistence storage provides high contrast between the waveform and the background. It enables waveforms to be stored for only as long as the time between repetitions, and therefore a continuously updated display can be obtained. Time variations in system responses can thus be observed. Integration of repetitive signals can also be provided, since noise or jitter not common to all traces will not be stored or displayed. Signals with low repetition rates and fast rise times can be displayed by allowing successive repetitions to build up the trace brightness. The typical writing rate for variable
persistence storage is 5 cm/μs. A typical storage time would be 30 s. Fast transfer storage uses an intermediate mesh target that has been optimized for speed. This target captures the waveform and then transfers it to another mesh optimized for long-term storage. The technique provides increased writing speeds (typically up to 5500 cm/μs). From the transfer target, storage can be by either bistable or variable persistence means. Oscilloscopes are available in which storage by fast variable persistence, fast bistable, variable persistence, or bistable operation is user selectable. Such devices can provide a range of writing speed and storage time combinations, from 5500 cm/μs and 30 s using fast variable persistence storage to 0.2 cm/μs and 30 min using bistable storage.
41.6.4 Sampling Oscilloscopes
The upper frequency limit for analog oscilloscopes is typically 1 GHz. For the display of repetitive signals in excess of this frequency, sampling techniques are employed. These extend the range to approximately 14 GHz. As shown in Figure 41.10, samples of different portions of successive waveforms are taken. Sampling of the waveform can be either sequential or random. The samples are stretched in time, amplified by relatively low bandwidth amplifiers,
and then displayed. The display is identical to the sampled waveform. Sampling oscilloscopes are typically capable of resolving events of less than 5 mV in peak amplitude that occur in less than 30 ps on an equivalent timebase of less than 20 ps/cm.
41.6.5 Digitizing Oscilloscopes
Digitizing oscilloscopes are useful for measurements on single-shot or low-repetition signals. The digital storage techniques they employ provide clear, crisp displays. No fading or blooming of the display occurs, and since the data are stored in digital memory the storage time is unlimited. Figure 41.11 shows the elements of a digitizing oscilloscope. The Y channel is sampled, converted to a digital form, and stored. A typical digitizing oscilloscope may provide dual-channel storage with a 100 MHz, 8-bit ADC on each channel feeding a 1 K × 8 bit store, with simultaneous sampling of the two channels. The sample rate depends on the timebase range but typically may go from 20 samples/s at 5 s/div to 100 Msamples/s at 1 ns/div. Additional stores are often provided for comparative data. Digitizing oscilloscopes can provide a variety of display modes, including a refreshed mode in which the stored data and display are updated by a triggered sweep; a roll mode in which the data and display are continually updated, producing the effect of new data rolling in from the right of the screen; a fast roll mode in which the data are continually updated but the display is updated after the trigger; an arm and release mode which allows a single trigger capture; a pre-trigger mode which in roll and fast roll allocates 0, 25, 75, or 100 percent of the store to the pre-trigger signal; and a hold display mode which freezes the display immediately. Communication with a computer system is generally provided by an IEEE-488 interface. This enables programming of the device to be undertaken, stored data to be sent to an external controller, and, if required, the device to receive new data for display. Digitizing oscilloscopes may
Figure 41.10 Sampling oscilloscope.
Figure 41.11 Digitizing oscilloscope.
also provide outputs for obtaining hard copy of the stored data on an analog x–y plotter.
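The storage figures above fix the capture window: a 1 K-sample store at a given sample rate spans samples/rate seconds, and an 8-bit ADC divides full scale into 256 levels. A quick sketch:

```python
# Capture window and amplitude resolution of the digitizing scope
# described above: 1 K samples per channel, 8-bit conversion.
SAMPLES, ADC_BITS = 1024, 8

def capture_window_s(sample_rate_hz: float) -> float:
    return SAMPLES / sample_rate_hz

print(capture_window_s(100e6))   # 100 Msample/s -> ~10.24 us window
print(capture_window_s(20))      # 20 sample/s   -> ~51 s window
print(100 / 2**ADC_BITS)         # ~0.39% of full scale per LSB
```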
41.6.6 Visual Display Units (VDUs)
A VDU comprising a keyboard and CRT display is widely used as an HMI in computer-based instrumentation systems. Alphanumeric VDU displays use the raster scan technique, as shown in Figure 41.12. A typical VDU will have 24 lines of 80 characters. The characters are made up of a 7 × 5 dot matrix. Thus seven raster lines are required for a single line of text and five dots for each character position. The space between characters is equivalent to one dot and that between lines is equivalent to one or two raster scans. As the electron beam scans the first raster line, the top row of the dot matrix representation for each character is produced in turn. This is used to modulate the beam intensity. For the second raster scan the second row of the dot matrix representation is used. Seven raster scans generate one row of text. The process is repeated for each of the 24 lines and the total cycle is repeated at a typical rate of 50 times per second. The characters to be displayed on the screen are stored in the character store as 7-bit ASCII code, with the store
organized as 24 × 80 words of 7 bits. The 7-bit word output of the character store addresses a character pattern ROM. A second input to the ROM selects which particular row of the dot matrix pattern is required. This pattern is provided as a parallel output, which is then converted to serial form to be applied to the brightness control of the CRT. For a single scan the row-selection inputs remain constant whilst the character pattern ROM is successively addressed by the ASCII codes corresponding to the 80 characters on the line. To build up a row of characters the sequence of ASCII codes remains the same, but on successive raster scans the row address of the character pattern ROM is changed.
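The character-ROM lookup just described is easy to mimic in software. A minimal sketch with a one-glyph "ROM"; the glyph pattern is an assumed example, not an image of any real character generator:

```python
# Raster character generation: a character-pattern ROM maps
# (character code, row select) to one 5-bit slice of a 7x5 dot matrix.
FONT_ROM = {
    "A": [0b01110, 0b10001, 0b10001, 0b11111, 0b10001, 0b10001, 0b10001],
}

def raster_row(ch: str, row: int) -> str:
    bits = FONT_ROM[ch][row]                    # ROM lookup: code + row select
    return format(bits, "05b").replace("1", "#").replace("0", ".")

for r in range(7):                              # seven raster scans per text row
    print(raster_row("A", r))
```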
41.6.7 Graphical Displays
A graphical raster-scan display is one in which the picture or frame is composed of a large number of raster lines, each one of which is subdivided into a large number of picture elements (pixels). Standard television pictures have 625 lines, consisting of two interlaced frames of 313 lines. With each 313-line frame scanned every 20 ms, the line scan rate is approximately 60 μs/line. A graphic raster-scan display stores every pixel in a random access or serial store (frame store), as shown in Figure 41.13. The storage requirements in such systems can often be quite large. A system using 512 of the 625 lines for display, with 1024 pixels on each line and a dual-intensity (on/off) display, requires a store of 512 Kbits. For a display having the same resolution but with either eight levels of brightness or eight colors, the storage requirement increases to 1.5 Mbits. Displays are available which are less demanding on storage. Typically, these provide an on/off display having a resolution of 360 lines, each having 720 pixels. This requires a 16 K × 16 bit frame store. Limited graphics can be provided by alphanumeric raster-scan displays supplying additional symbols to the ASCII set. These can then be used to produce graphs or pictures.
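The frame-store sizes quoted reduce to lines × pixels × bits per pixel. A quick check:

```python
# Frame-store sizing: storage = lines * pixels per line * bits/pixel.

def frame_store_bits(lines: int, pixels: int, bits_per_pixel: int) -> int:
    return lines * pixels * bits_per_pixel

print(frame_store_bits(512, 1024, 1))  # on/off: 524,288 bits (512 Kbit)
print(frame_store_bits(512, 1024, 3))  # 8 levels/colors: 1,572,864 (1.5 Mbit)
print(frame_store_bits(360, 720, 1))   # 259,200 bits: fits 16 K x 16 bit store
```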
41.7 Graphical recorders

Hard copy of data from a process or system can be displayed in graphical form either as the relationship between one or more variables and time, using an x–t
Figure 41.12 Visual display unit.
Figure 41.13 Elements of a graphical display.
recorder or as the relationship between two variables using an x–y recorder. x–t recorders can be classified as either strip chart recorders, in which the data are recorded on a continuous roll of chart paper, or circular chart recorders, which, as their name implies, record the data on a circular chart.
41.7.1 Strip Chart Recorders

Most strip chart recorders employ a servo-feedback system (as shown in Figure 41.14) to ensure that the displacement of the writing head across the paper tracks the input voltage over the required frequency range. The position of the writing head is generally measured by a potentiometer system. The error signal between the demanded position and the actual position of the writing head is amplified using an ac or dc amplifier, and the output drives either an ac or dc motor. Movement of the writing head is effected by a mechanical linkage between the output of the motor and the writing head. The chart paper movement is generally controlled by a stepping motor. The methods used for recording the data onto the paper include:

1. Pen and ink. In the past these have used pens having ink supplied from a refillable reservoir. Increasingly, such systems use disposable fiber-tipped pens. Multichannel operation can be achieved using up to six pens. For full-width recording using multiple pens, staggering of the pens is necessary to avoid mechanical interference.
2. Impact printing. The “ink” for the original impact systems was provided by a carbon ribbon placed between the pointer mechanism and the paper. A mark was made on the paper by pressing the pointer mechanism onto the paper. Newer methods can simultaneously record the data from up to twenty variables. This is achieved by having a wheel with an associated ink pad which provides the ink for the symbols on the wheel. By rotating the wheel different symbols can be printed
on the paper for each of the variables. The wheel is moved across the paper in response to the variable being recorded.
3. Thermal writing. These systems employ thermally sensitive paper which changes color on the application of heat. They can use a moving writing head which is heated by an electric current, or a fixed printing head having a large number of printing elements (up to several hundred for 100 mm wide paper). The particular printing element is selected according to the magnitude of the input signal. Multichannel operation is possible using such systems, and time and date information can be provided in alphanumeric form.
4. Optical writing. This technique is commonly used in galvanometer systems (see Section 41.7.3). The source of light is generally ultraviolet, to reduce unwanted effects from ambient light. The photographic paper used is sensitive to ultraviolet light. This paper develops in daylight or artificial light without the need for special chemicals. Fixing of the image is necessary to prevent long-term degradation.
5. Electric writing. The chart paper used in this technique consists of a paper base coated with a layer of a colored dye—black, blue, or red—which in turn is coated with a very thin surface coating of aluminum. The recording is effected by a tungsten wire stylus moving over the aluminum surface. When a potential of approximately 35 V is applied to this stylus, an electric discharge occurs which removes the aluminum, revealing the dye. In multichannel recording the different channels are distinguished by the use of different line configurations (for example, solid, dashed, dotted). Alphanumeric information can also be provided in these systems.
6. Inkjet writing. In this method a standard chart paper is used, and a monochrome or color inkjet printhead, adapted from microcomputer printers, draws the alphanumeric traces. The adaptability of this method makes many graphical formats possible from a single printhead.

Figure 41.14 Strip chart recorder.
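The servo-feedback positioning described at the start of this section can be illustrated with a crude discrete-time simulation in Python: the potentiometer reports the pen position, the error is amplified, and the motor integrates the drive. The gain and time step are arbitrary illustrative values, not taken from any real recorder.

def pen_servo(setpoints, gain=8.0, dt=0.01):
    """First-order position servo: motor velocity proportional to the
    amplified error between demanded and measured pen position."""
    pos = 0.0                      # pen position, as a fraction of full scale
    trace = []
    for demand in setpoints:       # demanded position from the input voltage
        error = demand - pos       # comparison with potentiometer feedback
        pos += gain * error * dt   # motor drive moves the writing head
        trace.append(pos)
    return trace

# Step from 0 to 80% of full scale; the pen settles exponentially
trace = pen_servo([0.8] * 100)
print(round(trace[10], 3), round(trace[99], 3))   # approaching 0.8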
The major specifications for a strip chart recorder include:

1. Number of channels. Using a numbered print wheel, up to 30 channels can be monitored by a single recorder. In multichannel recorders, recording of the channels can be simultaneous or on a time-shared basis. The channels in a multichannel recording can be distinguished either by color or by the nature of the line marking.
2. Chart width. This can be up to 250 mm.
3. Recording technique. The recorder may employ any of the techniques described above.
4. Input specifications. These are given in terms of the range of the input variable, which may be voltage, current, or from a thermocouple, RTD, pH electrode, or conductivity probe. Typical dc voltage spans are from 0.1 mV to 100 V. Those for dc current are typically from 0.1 mA to 1 A. The zero suppression provided by the recorder is specified by its suppression ratio, which gives the number of spans by which the zero can be suppressed. The rejection of line frequency signals at the input of the recorder is measured by its common and normal mode rejection ratios.
5. Performance specifications. These include accuracy, deadband, resolution, response time, and chart speed. A typical voltage accuracy specification may be ±(0.3 + 0.1 × suppression ratio) percent of span, or 0.20 mV, whichever is greater. Deadband and resolution are usually expressed as a percentage of span, with 0.1 and ±0.15 percent, respectively, being typical figures.

Chart recorders are designed for use with slowly varying inputs.

In the near field, closer than λ/2π, the wave impedance is determined by the characteristics of the source. A low-current, high-voltage radiator (such as a rod) will generate mainly an electric field of high impedance, while a high-current, low-voltage radiator (such as a loop) will generate mainly a magnetic field of low impedance.

Figure 45.6 Electromagnetic fields.
Figure 45.7 The wave impedance.
The region around λ/2π, or approximately one-sixth of a wavelength, is the transition region between near and far fields.

Coupling Modes The concepts of differential mode, common mode, and antenna mode radiated field coupling are fundamental to an understanding of EMC and will crop up in a variety of guises throughout this chapter. They apply to coupling of both emissions and incoming interference. Consider two items of equipment interconnected by a cable (Figure 45.8). The cable carries signal currents in differential mode (go and return) down the two wires in close proximity. A radiated field can couple to this system and induce differential-mode interference between the two wires; similarly, the differential current will induce a radiated field of its own. The ground reference plane (which may be external to the equipment or may be formed by its supporting structure) plays no part in the coupling. The cable also carries currents in common mode, that is, all flowing in the same direction on each wire. These currents very often have nothing at all to do with the signal currents. They may be induced by an external field coupling to the loop formed by the cable, the ground plane, and the various impedances connecting the equipment to ground, and may then cause internal differential currents to which the equipment is susceptible. Alternatively, they may be generated by internal noise voltages between the ground reference point and the cable connection, and be responsible for radiated emissions. Note that the stray capacitances and inductances associated with the wiring and enclosure of each unit are an
integral part of the common mode coupling circuit, and play a large part in determining the amplitude and spectral distribution of the common mode currents. These stray reactances are incidental rather than designed into the equipment and are therefore much harder to control or predict than parameters such as cable spacing and filtering which determine differential mode coupling. Antenna Mode currents are carried in the same direction by the cable and the ground reference plane. They should not arise as a result of internally generated noise, but they will flow when the whole system, ground plane included, is exposed to an external field. An example would be when an aircraft flies through the beam of a radar transmission; the aircraft structure, which serves as the ground plane for its internal equipment, carries the same currents as the internal wiring. Antenna mode currents only become a problem for the radiated field susceptibility of self-contained systems when they are converted to differential or common mode by varying impedances in the different current paths.
45.2.2 Emissions

When designing a product to a specification without knowledge of the system or environment in which it will be installed, one will normally separate the two aspects of emissions and susceptibility, and design to meet minimum requirements for each. Limits are laid down in various standards, but individual customers or market sectors may have more specific requirements. In those standards which derive from CISPR, emissions are subdivided into radiated emissions from the system as a whole, and conducted emissions present on the interface and power cables. Conventionally, the breakpoint between radiated (high frequency) and conducted (low frequency) is set at 30 MHz. Radiated emissions can themselves be separated into emissions that derive from internal PCBs or other wiring, and emissions from common-mode currents that find their way onto external cables connected to the equipment.
45.2.2.1 Radiated Emissions
Figure 45.8 Radiated coupling modes.
Radiation from the PCB In most equipment, the primary emission sources are currents flowing in circuits (clocks, video and data drivers, and other oscillators) that are mounted on PCBs. Radiated emission from a PCB can be modeled as a small loop antenna carrying the interference current (Figure 45.9). A small loop is one whose dimensions are smaller than a quarter wavelength (λ/4) at the frequency of interest (e.g., 1 m at 75 MHz). Most PCB loops count as “small” at emission frequencies of up to a few hundred megahertz. When the dimensions approach λ/4 the currents at different points on the loop appear out of phase at a distance, so that the effect is to reduce the field strength at any given
Figure 45.9 PCB radiated emissions.
point. The maximum electric field strength from such a loop over a ground plane at 10 m distance is proportional to the square of the frequency (Ott, 1988):
E = 263 × 10⁻¹² (f² · A · IS) volts/meter    (45.4)
where A is the loop area (cm²) and f (MHz) is the frequency of IS, the source current (mA). In free space, the field falls off proportionally to distance from the source. The figure of 10 m is used as this is the standard measurement distance for the European radiated emissions standards. A factor of 2 is allowed for worst-case field reinforcement due to reflection from the ground plane, which is also a required feature of testing to standards. The loop whose area must be known is the overall path taken by the signal current and its return. Equation (45.4) assumes that IS is at a single frequency. For square waves with many harmonics, the Fourier spectrum must be used for IS. These points are taken up again in Section 45.3.2.2.

Assessing PCB Design You can use equation (45.4) to indicate roughly whether a given PCB design will need extra screening. For example, if A = 10 cm², IS = 20 mA, and f = 50 MHz, then the field strength E is 42 dBμV/m, which is 12 dB over the European Class B limit. Thus if the frequency and operating current are fixed, and the loop area cannot be reduced, screening will be necessary. The converse, however, is not true. Differential-mode radiation from small loops on PCBs is by no means the only contributor to radiated emissions; common-mode currents flowing on the PCB and, more important, on attached cables can contribute much more. Paul (1989) goes so far as to say:

… predictions of radiated emissions based solely on differential-mode currents will generally bear no resemblance to measured levels of radiated emissions. Therefore, basing system EMC design on differential-mode currents and the associated prediction models that use them exclusively, while neglecting to consider the (usually much larger) emissions due to common-mode currents, can lead to a strong “false sense of security.”
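The rough screening assessment just described is easily scripted, bearing in mind Paul's caveat that it covers only the differential-mode contribution. The Python sketch below evaluates equation (45.4) and converts the result to dBμV/m; the 30 dBμV/m Class B limit used here is the commonly quoted 30–230 MHz figure at 10 m and should be checked against the current standard.

import math

def loop_field_dbuv(f_mhz, area_cm2, i_ma):
    """Equation (45.4): worst-case E field at 10 m from a small PCB
    loop over a ground plane, converted to dB relative to 1 uV/m."""
    e_v_per_m = 263e-12 * f_mhz ** 2 * area_cm2 * i_ma
    return 20 * math.log10(e_v_per_m / 1e-6)

CLASS_B_LIMIT_DBUV = 30.0   # assumed 30-230 MHz limit at 10 m; check the current standard

e = loop_field_dbuv(f_mhz=50, area_cm2=10, i_ma=20)
print(f"E = {e:.1f} dBuV/m, margin = {e - CLASS_B_LIMIT_DBUV:+.1f} dB")
# -> E = 42.4 dBuV/m, +12.4 dB over the limit: screening (or layout work) needed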
Common-mode currents on the PCB itself are not at all easy to predict, in contrast with the differential-mode currents, which are governed by Kirchhoff’s current law. The return path for common-mode currents is via stray capacitance (displacement current) to other nearby objects, and therefore a full prediction would have to take into account the detailed mechanical structure of the PCB and its case, as well as its proximity to ground and to other equipment. Except for trivial cases this is for all intents and purposes impossible. It is for this reason more than any other that EMC design has earned itself the distinction of being a “black art.”

Radiation from Cables Fortunately (from some viewpoints) radiated coupling at VHF tends to be dominated by cable emissions, rather than by direct radiation from the PCB. This is for the simple reason that typical cables resonate in the 30–100 MHz region, and their radiating efficiency is higher than that of PCB structures at these frequencies. The interference current is generated in common mode from ground noise developed across the PCB or elsewhere in the equipment and may flow along the conductors, or along the shield of a shielded cable. The model for cable radiation at lower frequencies (Figure 45.10) is a short (L < λ/4) monopole antenna over a ground plane. (When the cable length is resonant the model becomes invalid.) The maximum field strength at 10 m due to this radiation, allowing +6 dB for ground plane reflections, is directly proportional to frequency (Ott, 1985):
E = 1.26 × 10⁻⁴ (f · L · ICM) volts/meter    (45.5)
where L is the cable length (meters) and ICM is the common-mode current (mA) at f (MHz) flowing in the cable. For a 1 m cable, ICM must be less than 20 μA for a field strength at 10 m of 42 dBμV/m, i.e., a thousand times less than the equivalent differential-mode current!

Common-Mode Cable Noise At the risk of repetition, it is vital to appreciate the difference between common-mode and differential-mode cable currents. Differential-mode current, IDM in Figure 45.10, is the current which flows in one
direction along one cable conductor and in the reverse direction along another. It is normally equal to the signal or power current, and is not present on the shield. It contributes little to the net radiation as long as the total loop area formed by the two conductors is small; the two currents tend to cancel each other. Common-mode current ICM flows equally in the same direction along all conductors in the cable, potentially including the shield, and may be quite unrelated to the signal currents; it is related to them only insofar as they are converted to common mode by unbalanced external impedances. It returns via the associated ground network, and therefore the radiating loop area is large and uncontrolled. As a result, even a small ICM can result in large emitted signals.
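Equation (45.5) can be inverted to show just how little common-mode current is permissible. The Python sketch below solves for the ICM that produces the same field as the earlier 20 mA differential-mode example:

import math

def cable_field_dbuv(f_mhz, length_m, i_cm_ma):
    """Equation (45.5): field at 10 m from common-mode current on a
    short (L < lambda/4) cable over a ground plane, in dBuV/m."""
    return 20 * math.log10(1.26e-4 * f_mhz * length_m * i_cm_ma / 1e-6)

def i_cm_for_field(f_mhz, length_m, target_dbuv):
    # invert equation (45.5) for the common-mode current in mA
    e_v_per_m = 1e-6 * 10 ** (target_dbuv / 20)
    return e_v_per_m / (1.26e-4 * f_mhz * length_m)

i_cm = i_cm_for_field(f_mhz=50, length_m=1.0, target_dbuv=42.4)
print(f"I_CM = {i_cm * 1000:.0f} uA gives {cable_field_dbuv(50, 1.0, i_cm):.1f} dBuV/m")
# ~21 uA of common-mode current radiates as much as 20 mA of differential-mode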
45.2.2.2 Conducted Emissions

Interference sources within the equipment circuit or its power supply are coupled onto the power cable to the equipment.
Figure 45.10 Cable radiated emissions.
Interference may also be coupled either inductively or capacitively from another cable onto the power cable. Until recently, attention has focused on the power cable as the prime source of conducted emissions since CISPR-based standards have only specified measurements on this cable. However, signal and control cables can and do also act as coupling paths, and amendments to the standards will apply measurements to these cables as well. The resulting interference may appear as differential mode (between live and neutral, or between signal wires) or as common mode (between live/neutral/signal and ground) or as a mixture of both. For signal and control lines, only common-mode currents are regulated. For the line-voltage port, the voltages between live and ground and between neutral and ground at the far end of the line-voltage cable are measured. Differential mode emissions are normally associated with low-frequency switching noise from the power supply, while common-mode emissions can be due to the higher frequency switching components, internal circuit sources, or intercable coupling. Coupling Paths The equivalent circuit for a typical product with a switch-mode power supply, shown in Figure 45.11, gives an idea of the various paths these emissions can take. (Section 45.3.2.4 looks at SMPS emissions in more detail.) Differential-mode current IDM generated at the input of the switching supply is converted by imbalances in stray capacitance, and by the mutual inductance of the conductors in the line-voltage cable, into interference voltages with respect to earth at the measurement point. Higher frequency switching noise components VNsupply are coupled through Cc to appear between L/N and E on the line-voltage cable, and Cs to appear with respect to the ground plane. Circuit ground noise VNcct (digital noise and clock harmonics) is referenced to ground by Cs and coupled out via signal cables as ICMsig or via the safety earth as ICME.
Figure 45.11 Coupling paths for conducted emissions.
The problem in a real situation is that all these mechanisms are operating simultaneously, and the stray capacitances Cs are widely distributed and unpredictable, depending heavily on proximity to other objects if the case is unscreened. A partially screened enclosure may actually worsen the coupling because of its higher capacitance to the environment.
45.2.2.3 Line-Voltage Harmonics

One EMC phenomenon which comes under the umbrella of the EMC Directive, and is usually classified as an “emission,” is the harmonic content of the line-voltage input current. This is mildly confusing, since the equipment is not actually “emitting” anything: it is simply drawing its power at harmonics of the line frequency as well as at the fundamental.

The Supplier’s Problem The problem of line-voltage harmonics is principally one for the supply authorities, who are mandated to provide a high-quality electricity supply. If the aggregate load at a particular line-voltage distribution point has a high harmonic content, the non-zero distribution source impedance will cause distortion of the voltage waveform at this point. This in turn may cause problems for other users connected to that point, and the currents themselves may also create problems (such as overheating of transformers and compensating components) for the supplier. The supplier does, of course, have the option of uprating the distribution components or installing special protection measures, but this is expensive, and the supplier has room to argue that the users should bear some of the costs of the pollution they create. Harmonic pollution is continually increasing, and it is principally due to low-power electronic loads installed in large numbers. Between them, domestic TV sets and office information technology equipment account for about 80 percent of the problem. Other types of load which also take significant harmonic currents are not widely enough distributed to cause a serious problem yet, or are dealt with individually at the point of installation, as in the case of industrial plant. The supply authorities are nevertheless sufficiently worried to want to extend harmonic emission limits to all classes of electronic products.

Non-Linear Loads A plain resistive load across the line voltage draws current only at the fundamental frequency (50 Hz in Europe). Most electronic circuits are anything but resistive. The universal rectifier-capacitor input draws a high current at the peak of the voltage waveform and zero current at other times; the well-known triac phase control method for power control (lights, motors, heaters, etc.) begins to draw current only partway through each half-cycle. These current waveforms can be represented as a Fourier series, and it is the harmonic amplitudes of the series that are subject to regulation. The relevant standard is EN 60 555: Part 2,
which in its present (1987) version applies only to household products. There is a proposal to extend the scope of EN 60 555 to cover a wide range of products, and it will affect virtually all line-voltage powered electronic equipment above a certain power level which has a rectifier-reservoir input. The harmonic limits are effectively an additional design constraint on the values of the input components, most notably the input series impedance (which is not usually considered a desirable input component at all). With a typical input resistance of a few ohms for a 100 W power supply, the harmonic amplitudes are severely in excess of the proposed revision to the limits of EN 60 555: Part 2. Increasing input series resistance to meet the harmonic limits is expensive in terms of power dissipation except at very low powers. In practice, deliberately dissipating between 10 percent and 20 percent of the input power rapidly becomes unreasonable above levels of 50–100 W. Alternatives are to include a series input choke which, since it must operate down to 50 Hz, is expensive in size and weight; or to include electronic power factor correction (PFC), which converts the current waveform to a near-sinusoid but is expensive in cost and complexity. PFC is essentially a switch-mode converter on the front end of the supply, and is therefore likely to contribute extra radio frequency switching noise at the same time as it reduces input current harmonics. It is possible to combine PFC with the other features of a direct-off-line switching supply, so that if you are intending to use an SMPS anyway there will be little extra penalty. It also fits well with other contemporary design requirements, such as the need for a “universal” (90–260 V) input voltage range. Such power supplies can already be bought off the shelf, but unless you are a power supply specialist, designing a PFC-SMPS yourself will take considerable extra design and development effort.

Phase Control Power control circuits which vary the switch-on point with the phase of the line-voltage waveform are another major source of harmonic distortion on the input current. Lighting controllers are the leading example of these. Figure 45.12 shows the harmonic content of such a waveform switched at 90° (the peak of the cycle, corresponding to half power). The maximum harmonic content occurs at this point, decreasing as the phase is varied either side of 90°. Whether lighting dimmers will comply with the draft limits in EN 60 555-2 without input filtering or PFC depends at present on their power level, since these limits are set at an absolute value.
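The harmonic content of a phase-controlled waveform such as that of Figure 45.12 can be estimated numerically. This Python sketch computes the Fourier magnitudes of a resistive load switched on at 90° in each half-cycle; the printed values are relative amplitudes, for illustration only.

import math

N = 10000
# One line-frequency cycle of load current for a triac fired at 90 deg
# in each half-cycle (conduction over 90-180 and 270-360 degrees)
i = []
for k in range(N):
    theta = 360.0 * k / N
    conducting = 90 <= theta < 180 or 270 <= theta < 360
    i.append(math.sin(math.radians(theta)) if conducting else 0.0)

def harmonic(n):
    # magnitude of the n-th Fourier component by direct correlation
    a = sum(s * math.cos(2 * math.pi * n * k / N) for k, s in enumerate(i))
    b = sum(s * math.sin(2 * math.pi * n * k / N) for k, s in enumerate(i))
    return 2 * math.hypot(a, b) / N

fund = harmonic(1)
for n in (1, 3, 5, 7):   # half-wave symmetry: only odd harmonics appear
    print(f"harmonic {n}: {harmonic(n) / fund:.2f} relative to fundamental")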
45.2.3 Susceptibility

Electronic equipment will be susceptible to environmental electromagnetic fields and/or to disturbances coupled into
Figure 45.12 Mains input current harmonics for 500 W phase control circuit at half power.
Figure 45.13 Radiated field coupling.
its ports via connected cables. An electrostatic discharge may be coupled in via the cables or the equipment case, or a nearby discharge can create a local field which couples directly with the equipment. The potential threats are:

1. Radiated radio frequency fields
2. Conducted transients
3. Electrostatic discharge (ESD)
4. Magnetic fields
5. Supply voltage disturbances
Quite apart from legal requirements, equipment that is designed to be immune to these effects—especially ESD and transients—will save its manufacturer considerable expense through preventing field returns. Unfortunately, the shielding and circuit suppression measures that are required for protection against ESD or radio frequency interference may be more than you need for emission control.
45.2.3.1 Radiated Field

An external field can couple either directly with the internal circuitry and wiring in differential mode, or with the cables to induce a common-mode current (Figure 45.13). Coupling with internal wiring and PCB tracks is most efficient at frequencies above a few hundred megahertz, since wiring lengths of a few inches approach resonance at these frequencies. Radio frequency voltages or currents in analog circuits can induce non-linearity, overload, or dc bias, and in digital circuits can corrupt data transfer. Modulated fields can have a greater effect than unmodulated ones. Likely sources of radiated fields are walkie-talkies, cellphones, high-power broadcast transmitters, and radars. Field strengths between 1 and 10 V/m from 20 MHz to 1 GHz are typical, and higher field strengths can occur in environments close to such sources.
Cable Resonance Cables are most efficient at coupling radio frequency energy into equipment at the lower end of the VHF spectrum (30–100 MHz). The external field induces a common-mode current on the cable shield, or on all the cable conductors together if it is unshielded. The common-mode current effects in typical installations tend to dominate the direct field interactions with the equipment as long as the equipment’s dimensions are small compared with half the wavelength of the interfering signal. A cable connected to a grounded victim equipment can be modeled as a single conductor over a ground plane, which appears as a transmission line (Figure 45.14). The current induced in such a transmission line by an external field increases steadily with frequency until the first resonance is reached, after which it exhibits a series of peaks and nulls at higher resonances (Smith, 1977). The coupling mechanism is enhanced at the resonant frequency of the cable, which depends on its length and on the reactive loading of whatever equipment is attached to its end. A length of 2 m is quarter-wave resonant at 37.5 MHz and half-wave resonant at 75 MHz.

Cable Loading The dominant resonant mode depends on the radio frequency impedance (high or low) at the distant end of the cable. If the cable is connected to an ungrounded object such as a hand controller, it will have a high rf impedance, which will cause a high coupled current at quarter-wave resonance and a high coupled voltage at half-wave. Extra capacitive loading such as body capacitance will lower its apparent resonant frequency. Conversely, a cable connected to another grounded object, such as a separately earthed peripheral, will see
Figure 45.14 Cable coupling to radiated field.
a low impedance at the far end, which will generate a high coupled current at half-wave and a high coupled voltage at quarter-wave resonance. Extra inductive loading, such as the inductance of the earth connection, will again tend to lower the resonant frequency. These effects are summarized in Figure 45.15. The rf common-mode impedance of the cable varies from around 35 Ω at quarter-wave resonance to several hundred ohms maximum. A convenient average figure (and one that is taken in many standards) is 150 Ω. Because cable configuration, layout, and proximity to grounded objects are outside the designer’s control, attempts to predict resonances and impedances accurately are unrewarding.

Current Injection A convenient method for testing the radio frequency susceptibility of equipment without reference to its cable configuration is to inject radio frequency as a common-mode current or voltage directly onto the cable port. This represents real-life coupling situations at lower frequencies well, until the equipment dimensions approach a half wavelength. It can also reproduce the fields (ERF and HRF) associated with radiated field coupling. The route taken by the interference currents, and hence their effect on the circuitry, depends on the various internal and external radio frequency impedances to earth, as shown in Figure 45.16. Connecting other cables will modify the current flow to a marked extent, especially if the extra cables interface to a physically different location on the PCB or equipment. An applied voltage of 1 V, or an injected current of 10 mA, corresponds in typical cases to a radiated field strength of 1 V/m.
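A quick calculator for the cable resonances discussed above, assuming free-space propagation (real cables resonate somewhat lower because of end loading and velocity factor):

C = 299.8   # propagation velocity in m/us, giving MHz for lengths in meters

def quarter_wave_mhz(length_m):
    return C / (4 * length_m)

def half_wave_mhz(length_m):
    return C / (2 * length_m)

L = 2.0     # cable length in meters
print(f"{quarter_wave_mhz(L):.1f} MHz quarter-wave, {half_wave_mhz(L):.1f} MHz half-wave")
# -> 37.5 MHz quarter-wave, 75.0 MHz half-wave for a 2 m cable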
Cavity Resonance A screened enclosure can form a resonant cavity; standing waves in the field form between opposite sides when the dimension between the sides is a multiple of a half-wavelength. The electric field is enhanced in the middle of this cavity while the magnetic field is enhanced at the sides. This effect is usually responsible for peaks in the susceptibility versus frequency profile in the UHF region. Predicting the resonant frequency accurately from the enclosure dimensions is rarely successful because the
contents of the enclosure tend to “detune” the resonance. But for an empty cavity, resonances occur at
F = 150 √[(k/l)² + (m/h)² + (n/w)²] MHz    (45.6)
where l, h, and w are the enclosure dimensions (meters), and k, m, and n are positive integers, but no more than one at a time can be zero. For approximately equal enclosure dimensions, the lowest possible resonant frequency is given by
F (MHz) ≈ 212/l ≈ 212/h ≈ 212/w    (45.7)
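Equations (45.6) and (45.7) are easily tabulated in code. This Python sketch lists the lowest modes of a hypothetical 0.5 × 0.4 × 0.3 m enclosure and cross-checks the equal-dimension approximation:

import math
from itertools import product

def cavity_modes(l, h, w, max_index=2):
    """Resonant frequencies (MHz) from equation (45.6) for an empty
    rectangular cavity with dimensions in meters; at most one of
    k, m, n may be zero."""
    modes = []
    for k, m, n in product(range(max_index + 1), repeat=3):
        if (k, m, n).count(0) > 1:
            continue
        f = 150 * math.sqrt((k / l) ** 2 + (m / h) ** 2 + (n / w) ** 2)
        modes.append((f, (k, m, n)))
    return sorted(modes)

for f, mode in cavity_modes(0.5, 0.4, 0.3)[:4]:
    print(f"{f:6.1f} MHz  mode {mode}")

# Cross-check of equation (45.7) for equal dimensions l = h = w = 0.5 m
print(f"lowest: {cavity_modes(0.5, 0.5, 0.5)[0][0]:.0f} MHz vs 212/l = {212 / 0.5:.0f} MHz")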
45.2.3.2 Transients
Figure 45.15 Current and voltage distributions along a resonant cable.
Transient overvoltages occur on the line-voltage supply leads due to switching operations, fault clearance, or lightning strikes elsewhere on the network. Transients over 1 kV account for about 0.1 percent of the total number of transients observed. A study by the German ZVEI (Goedbloed, 1987) made a statistical survey of 28,000 live-to-earth transients exceeding 100 V, at 40 locations over a total measuring time of about 3,400 h. Their results were analyzed for peak amplitude, rate of rise, and energy content. Table 45.2 shows the average rate of occurrence of transients for four classes of location, and Figure 45.17 shows the relative number of transients as a function of maximum transient amplitude. This shows that the number of transients varies roughly in inverse proportion to the cube of peak voltage. High-energy transients may threaten active devices in the equipment power supply. Fast-rising edges are the most disruptive to circuit operation, since they are attenuated least by the coupling paths, and they can generate large voltages in inductive ground and signal paths. The ZVEI study found that rate of rise increased roughly in proportion to the square root of peak voltage, being typically 3 V/ns for 200 V pulses and 10 V/ns for 2 kV pulses. Other field experience has shown that mechanical switching produces multiple transients (bursts) with risetimes as short as a few nanoseconds and peak amplitudes of several hundred volts. Attenuation through the line-voltage network (see Section 45.2.1.2) restricts fast-risetime pulses to those generated locally.
Figure 45.16 Common-mode radio frequency injection. JRF represents common-mode radio frequency current density through the PCB.

TABLE 45.2 Average rate of occurrence of mains transients (Goedbloed, 1987)

Area class    Average rate of occurrence (transients/hour)
Industrial    17.5
Business       2.8
Domestic       0.6
Laboratory     2.3
Figure 45.17 Relative number of transients (percent) versus maximum transient amplitude (volts). ●, line-voltage lines (VT = 100 V); ○, telecommunication lines (VT = 50 V) (Goedbloed, 1987, 1990).
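The inverse-cube amplitude distribution quoted above gives a quick way of scaling the rates in Table 45.2 to other thresholds. A Python sketch, treating the tabulated hourly rates as counts of transients exceeding 100 V:

def rate_above(v_threshold, rate_above_100v, exponent=3):
    """Scale an occurrence rate measured above 100 V to another
    threshold, assuming N(V) proportional to V**-exponent."""
    return rate_above_100v * (100.0 / v_threshold) ** exponent

industrial = 17.5                     # transients/hour above 100 V (Table 45.2)
for v in (100, 500, 1000, 2000):
    print(f">{v:4d} V: {rate_above(v, industrial):9.4f} per hour")
# 2 kV transients arrive roughly 8000 times less often than 100 V ones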
Analog circuits are almost immune to isolated short transients, whereas digital circuits are easily corrupted by them. As a general guide, microprocessor equipment should be tested to withstand pulses of at least 2 kV peak amplitude. Thresholds below 1 kV will give unacceptably frequent corruptions in nearly all environments, while with thresholds between 1 and 2 kV occasional corruption will occur. For a belt-and-braces approach for high-reliability equipment, a 4–6 kV threshold is recommended.

Coupling Mode Line-voltage transients may appear in differential mode (symmetrically between live and neutral) or common mode (asymmetrically between live/neutral and earth). Coupling between the conductors in a supply network tends to mix the two modes. Differential-mode spikes are usually associated with relatively slow rise times and high energy, and require suppression to prevent input circuit damage, but do not, provided this suppression is incorporated, affect circuit operation significantly. Common-mode transients are harder to suppress, because they require connection of suppression components between live and earth, or in series with the earth lead, and because stray capacitances to earth are harder to control. Their coupling paths are very similar to those followed by common-mode radio frequency signals. Unfortunately, they are also more disruptive because they result in transient current flow in ground traces.

Transients on Signal Lines Fast transients can be coupled, usually capacitively, onto signal cables in common mode, especially if the cable passes close to or is routed alongside
an impulsive interference source. Although such transients are generally lower in amplitude than line-voltage-borne ones, they are coupled directly into the I/O ports of the circuit and will therefore flow in the circuit ground traces, unless the cable is properly screened and terminated or the interface is properly filtered. Other sources of conducted transients are telecommunication lines and the automotive 12 V supply. The automotive environment can regularly experience transients that are many times the nominal supply range. The most serious automotive transients are the load dump, which occurs when the alternator load is suddenly disconnected during heavy charging; switching of inductive loads, such as motors and solenoids; and alternator field decay, which generates a negative voltage spike when the ignition switch is turned off. A recent standard (ISO 7637) has been issued to specify transient testing in the automotive field. Work on common mode transients on telephone subscriber lines (Goedbloed, 1990) has shown that the amplitude versus rate of occurrence distribution also follows a roughly inverse cubic law as in Figure 45.17. Actual amplitudes were lower than those on the line-voltage (peak amplitudes rarely exceeded 300 V). A transient ringing frequency of 1 MHz and rise times of 10–20 ns were found to be typical.
45.2.3.3 Electrostatic Discharge

When two non-conductive materials are rubbed together or separated, electrons from one material are transferred to the other. This results in the accumulation of triboelectric charge on the surface of the material. The amount of the charge caused by movement of the materials is a function of the separation of the materials in the triboelectric series (Figure 45.18(a)). Additional factors are the closeness of contact, rate of separation, and humidity. The human body can be charged by triboelectric induction to several kilovolts. When the body (in the worst case, holding a metal object such as a key) approaches a conductive object, the charge is transferred to that object, normally via a spark, when the potential gradient across the narrowing air gap is high enough to cause breakdown. The energy involved in the charge transfer may be low enough to be imperceptible to the subject; at the other extreme it can be extremely painful.

The ESD Waveform When an electrostatically charged object is brought close to a grounded target, the resultant discharge current consists of a very fast (sub-nanosecond) edge followed by a comparatively slow bulk discharge curve. The characteristic of the hand/metal ESD current waveform is a function of the approach speed, the voltage, the geometry of the electrode, and the relative humidity. The equivalent circuit for such a situation is shown in Figure 45.18(c). The capacitance CD (the typical human body capacitance is around 150 pF) is charged via a high resistance up to the electrostatic voltage V. The actual value of V will vary as the charging and leakage paths change with the environmental
Figure 45.18 The electrostatic discharge; (a) The triboelectric series; (b) expected charge voltage (IEC 801–2); (c) equivalent circuit and current waveform.
circumstances and movements of the subject. When a discharge is initiated, the free space capacitance Cs, which is directly across the discharge point, produces an initial current peak, the value of which is determined only by the local circuit stray impedance, while the main discharge current is limited by the body’s bulk inductance and resistance ZD. The resultant sub-nanosecond transient equalizing current of several tens of amps follows a complex route to ground through the equipment and is very likely to upset digital circuit operation if it passes through the circuit tracks. The paths are defined more by stray capacitance, case bonding, and track or wiring inductance than by the designer’s intended circuit. The high magnetic field associated with the current can induce transient voltages in nearby conductors that are not actually in the path of the current. Even if not discharged directly to the equipment, a nearby discharge such as to a metal desk or chair will generate an intense radiated field which will couple into unprotected equipment. ESD Protection Measures When the equipment is housed in a metallic enclosure this itself can be used to guide the ESD current around the internal circuitry. Apertures or seams in the enclosure will act as high-impedance barriers
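The bulk (body) portion of the discharge in Figure 45.18(c) behaves approximately as a series RC discharge. The Python sketch below assumes the human-body network values of IEC 801-2 (150 pF, 330 Ω); the fast initial spike through the free-space capacitance Cs is not modeled.

import math

def body_discharge_current(t_ns, v_kv=8.0, c_pf=150.0, r_ohm=330.0):
    """Bulk discharge current (A) for a series RC body model:
    i(t) = (V/R) * exp(-t/RC)."""
    tau_ns = r_ohm * c_pf * 1e-3          # ohm x pF = ps; x1e-3 converts to ns
    return (v_kv * 1e3 / r_ohm) * math.exp(-t_ns / tau_ns)

# 8 kV discharge through the assumed 150 pF / 330 ohm network
for t in (0, 25, 50, 100, 200):
    print(f"t = {t:3d} ns: i = {body_discharge_current(t):5.1f} A")
# peak ~24 A, decaying with a time constant of ~50 ns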
to the current, and transient fields will occur around them, so they must be minimized. All metallic covers and panels must be bonded together with a low impedance connection (30MHz) protection will be enough, then a thin conductive coating deposited on plastic is adequate. Will a Shield Be Necessary? Shielding is often an expensive and difficult-to-implement design decision, because many other factors (aesthetic, tooling, accessibility) work against it. A decision on whether or not to shield should be made as early as possible in the project. Section 45.4.2 showed that interference coupling is via interface cables and direct induction to/from the PCB. You should be able to calculate to a rough order of magnitude the fields generated by PCB tracks and compare these to the desired emission limit (see Section 45.3.2.2). If the limit is exceeded at this point and the PCB layout cannot be improved, then shielding is essential. Shielding does not of itself affect common-mode cable coupling, and so if this is expected to be the dominant coupling path, a full shield may not be necessary. It does establish a “clean” reference for decoupling common-mode currents to, but it is also possible to do this with a large area ground plate if the layout is planned carefully.
45.4.3.1 Shielding Theory

An ac electric field impinging on a conductive wall of infinite extent will induce a current flow in that surface of the wall, which in turn will generate a reflected wave of the opposite sense. This is necessary in order to satisfy the boundary conditions at the wall, where the electric field must approach zero. The reflected wave amplitude determines the reflection loss of the wall. Because shielding walls have finite conductivity, part of this current flow penetrates into the wall, and a fraction of it will appear on the opposite side of the wall, where it will generate its own field (Figure 45.87). The ratio of the impinging to the transmitted fields is one measure of the shielding effectiveness of the wall. The thicker the wall, the greater the attenuation of the current through it. This absorption loss depends on
the number of “skin depths” through the wall. The skin depth (defined by equation (45.17)) is an expression of the electromagnetic property which tends to confine a.c. current flow to the surface of a conductor, becoming less as frequency, conductivity, or permeability increases. Fields are attenuated by 8.6 dB (1/e) for each skin depth of penetration. Typically, skin depth in aluminum at 30 MHz is 0.015 mm. The skin depth d (meters) is given by:
d = (π · F · μ · σ)^(–1/2)    (45.17)
where F is frequency (Hz), μ is the material permeability, and σ is the material conductivity.

Shielding Effectiveness The shielding effectiveness of a solid conductive barrier describes the ratio of the field strength without the barrier in place to that when it is present. It can be expressed as the sum (in decibels) of reflection (R), absorption (A), and re-reflection (B) losses (Figure 45.88):
SE = R + A + B    (45.18)
The reflection loss (R) depends on the ratio of wave impedance to barrier impedance. The concept of wave impedance has been described in Section 45.2.1.3. The impedance of the barrier is a function of its conductivity and permeability, and of frequency. Materials of high conductivity such as copper and aluminum have a higher E-field reflection loss than do lower-conductivity materials such as steel. Reflection losses decrease with increasing frequency for the E field (electric) and increase for the H field (magnetic). In the near field, closer than λ/2π, the distance between source and barrier also affects the reflection loss. Near to the source, the electric field impedance is high, and the reflection loss is also correspondingly high. Conversely, the magnetic field impedance is low, and the reflection loss is low. When the barrier is far enough away to be in the far field, the impinging wave is a plane wave, the wave impedance is constant,
Figure 45.87 Variation of current density with wall thickness.
Figure 45.88 Shielding effectiveness versus frequency for copper sheet.
and the distance is immaterial. Refer back to Figure 45.7 for an illustration of the distinction between near and far fields. The re-reflection loss B is insignificant in most cases where absorption loss A is greater than 10 dB, but becomes important for thin barriers at low frequencies. Absorption Loss (A) depends on the barrier thickness and its skin depth and is the same whether the field is electric, magnetic, or plane wave. The skin depth, in turn, depends on the properties of the barrier material; in contrast to reflection loss, steel offers higher absorption than copper of the same thickness. At high frequencies, as Figure 45.88 shows, it becomes the dominant term, increasing exponentially with the square root of the frequency. Shielding against Magnetic Fields at Low Frequencies is for all intents and purposes impossible with purely conductive materials. This is because the reflection loss to an impinging magnetic field (RH) depends on the mismatch of the field impedance to barrier impedance. The low field impedance is well matched to the low barrier impedance, and the field is transmitted through the barrier without significant attenuation or absorption. A high-permeability material such as mumetal or its derivatives can give low-frequency magnetic shielding by concentrating the field within the bulk of the material, but this is a different mechanism from that discussed above, and it is normally only viable for sensitive individual components such as CRTs or transformers.
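Equation (45.17) and the 8.6 dB-per-skin-depth rule make absorption loss simple to estimate. The material constants in the Python sketch below are standard textbook values; the aluminum case reproduces the 0.015 mm skin depth quoted earlier.

import math

MU0 = 4 * math.pi * 1e-7                  # permeability of free space, H/m

def skin_depth_m(f_hz, mu_r, sigma):
    """Equation (45.17): skin depth in meters for frequency in Hz,
    relative permeability mu_r, and conductivity sigma in S/m."""
    return (math.pi * f_hz * mu_r * MU0 * sigma) ** -0.5

def absorption_loss_db(thickness_m, f_hz, mu_r, sigma):
    # 8.6 dB (one neper) of attenuation per skin depth of penetration
    return 8.6 * thickness_m / skin_depth_m(f_hz, mu_r, sigma)

# Aluminum: sigma ~3.5e7 S/m, mu_r ~1 (textbook values)
print(f"skin depth at 30 MHz: {skin_depth_m(30e6, 1.0, 3.5e7) * 1e3:.4f} mm")
print(f"A for a 1 mm sheet:   {absorption_loss_db(1e-3, 30e6, 1.0, 3.5e7):.0f} dB "
      "(formula value; practical SE is limited by apertures)")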
45.4.3.2 The Effect of Apertures

The curves of shielding effectiveness shown in Figure 45.88 suggest that upwards of 200 dB attenuation is easily
achievable using reasonable thicknesses of common materials. In fact, the practical shielding effectiveness is not determined by material characteristics but is limited by the necessary apertures and discontinuities in the shielding. Apertures are required for ventilation, for control and interface access, and for viewing indicators. For most practical purposes, shielding effectiveness (SE) is determined by the apertures. There are different theories for determining the SE due to apertures. The simplest (Figure 45.89) relates SE to the ratio of the longest aperture dimension L to the wavelength, with zero SE when λ = 2L: SE = 20 log (λ/2L). Thus the SE increases linearly with decreasing frequency, up to the maximum determined by the barrier material, with a greater degradation for larger apertures. A correction factor can be applied for the aspect ratio of slot-shaped apertures. Another theory, which assumes that small apertures radiate as a combination of electric and magnetic dipoles, predicts a constant SE degradation with frequency in the near field and a degradation proportional to F² in the far field. This theory predicts an SE dependence on the cube of the aperture dimension, and also on the distance of the measuring point from the aperture. Neither theory accords really well with observations. However, as an example based on the simpler theory, for frequencies up to 1 GHz (the present upper limit for radiated emissions standards) and a minimum shielding of 20 dB, the maximum allowable hole size is 1.6 cm.

Windows and Ventilation Slots Viewing windows normally involve a large open area in the shield, and you have to cover the window with a transparent conductive material, which must make good continuous contact with the surrounding screen, or accept the penalty of shielding at lower
frequencies only. You can obtain shielded window components which are laminated with fine blackened copper mesh, or which are coated with an extremely thin film of gold. In either case there is a trade-off in viewing quality over a clear window, due to reduced light transmission (between 60 percent and 80 percent) and the diffraction effects of the mesh. The screening effectiveness of a transparent conductive coating is significantly less than that of a solid shield, since the coating will have a resistance of a few ohms per square, and attenuation will be entirely due to reflection loss. This is not the case with a mesh, but shielding effectiveness of better than 40–50 dB may be irrelevant anyway because of the effect of other apertures.

Using a Subenclosure An alternative method which allows a clear window to be retained is to shield behind the display with a subshield (Figure 45.90), which must, of course, make good all-around contact with the main panel.
Figure 45.89 Shielding effectiveness degradation due to apertures.
The electrical connections to the display must be filtered to preserve the shield’s integrity, and the display itself is unshielded and must therefore not be susceptible nor contain emitting sources. This alternative is frequently easier and cheaper than shielded windows.

Mesh and Honeycomb Ventilation holes can be covered with a perforated mesh screen, or the conductive panel may itself be perforated. If individual equally sized perforations are spaced close together (hole spacing < λ/2), then the reduction in shielding over a single hole is approximately proportional to the square root of the number of holes. Thus a mesh of 100 4-mm holes would have a shielding effectiveness 20 dB worse than a single 4-mm hole. Two similar apertures spaced greater than a half-wavelength apart do not suffer any significant extra shielding reduction. You can if necessary gain improved shielding of vents, at the expense of thickness and weight, by using “honeycomb” panels in which the honeycomb pattern functions as a waveguide below cut-off (Figure 45.91). In this technique the shield thickness is several times the width of each individual aperture. A common thickness/width ratio is 4:1, which offers an intrinsic shielding effectiveness of over 100 dB. This method can also be used to pass insulated control spindles (not conductive ones!) through a panel.

The Effect of Seams An electromagnetic shield is normally made from several panels joined together at seams. Unfortunately, when you join two sheets the electrical conductivity across the joint is imperfect. This may be because of distortion, so that surfaces do not mate perfectly, or because of painting, anodizing, or corrosion, so that an insulating layer is present on one or both metal surfaces. Consequently, the shielding effectiveness is reduced by seams almost as much as it is by apertures (Figure 45.92). The ratio of the fastener spacing d to the seam gap h is high enough to improve the shielding over that of a large aperture
Figure 45.90 Alternative ways of shielding a display window.
by 10–20 dB. The problem is especially serious for hinged front panels, doors, and removable hatches that form part of a screened enclosure. It is mitigated to some extent if the conductive sheets overlap, since this forms a capacitor which provides a partial current path at high frequencies. Figure 45.93 shows preferred ways to improve joint conductivity. If these are not available to you then you will need to use extra hardware as outlined in the next section. Seam and Aperture Orientation The effect of a joint discontinuity is to force shield current to flow around the discontinuity. If the current flowing in the shield were undisturbed then the field within the shielded area would be minimized, but as the current is diverted, so a localized discontinuity occurs, and this creates a field coupling path through the shield. The shielding effectiveness graph shown in Figure 45.89 assumes a worst-case orientation of current flow. A long aperture or narrow seam will have a greater
effect on current flowing at right angles to it than on parallel current flow. (Antenna designers will recognize that this describes a slot antenna, the reciprocal of a dipole.) This effect can be exploited if you can control the orientation of susceptible or emissive conductors within the shielded environment (Figure 45.94). The practical implication of this is that if all critical internal conductors are within the same plane, such as on a PCB, then long apertures and seams in the shield should be aligned parallel to this plane rather than perpendicular to it. Generally, you are likely to obtain an advantage of no more than 10 dB by this trick, since the geometry of internal conductors is never exactly planar. Similarly, cables or wires, if they must be routed near to the shield, should be run along or parallel to apertures rather than across them. But because the leakage field coupling due to joint discontinuities is large near the discontinuity, internal cables should preferably not be routed near apertures or seams. Apertures on different surfaces or seams at different orientations can be treated separately since they radiate in different directions.
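The simple aperture and mesh relations above combine into a one-screen estimator: SE = 20 log (λ/2L) for a single aperture, degraded by 20 log √n for n similar, closely spaced holes. Treat the results of this Python sketch as order-of-magnitude guidance only, since, as noted, neither aperture theory accords well with measurement.

import math

C_MM = 3e11   # speed of light in mm/s, so aperture sizes can stay in mm

def aperture_se_db(f_hz, longest_dim_mm, n_holes=1):
    """SE = 20*log10(lambda/2L) for a single aperture, reduced by
    20*log10(sqrt(n)) for n similar, closely spaced holes."""
    lam_mm = C_MM / f_hz
    return (20 * math.log10(lam_mm / (2 * longest_dim_mm))
            - 20 * math.log10(math.sqrt(n_holes)))

def max_aperture_mm(f_hz, se_db):
    # largest single-hole dimension giving the required SE
    return (C_MM / f_hz) / (2 * 10 ** (se_db / 20))

print(f"{max_aperture_mm(1e9, 20):.0f} mm max hole for 20 dB at 1 GHz")   # ~15 mm (the text's 1.6 cm)
print(f"{aperture_se_db(1e9, 4, n_holes=100):.0f} dB for 100 4-mm holes") # 20 dB below a single hole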
45.4.3.3 Shielding Hardware

Many manufacturers offer various materials for improving the conductivity of joints in conductive panels. Such materials can be useful if properly applied, but they must be used with an awareness of the principles discussed above, and their expense will often rule them out for cost-sensitive applications except as a last resort.
Figure 45.91 Mesh panels and the waveguide below cut-off.
Gaskets and Contact Strip Shielding effectiveness can be improved by reducing the spacing of fasteners between different panels. If you need effectiveness to the upper limit of 1 GHz, then the necessary spacing becomes unrealistically small when you consider maintenance and accessibility.

Figure 45.92 Seams between enclosure panels. The size of d determines the shielding effectiveness, modified by the seam gap h.
Figure 45.93 Cross-sections of joints for good conductivity.
Figure 45.94 Current loop versus aperture orientation.
In these cases the conductive path between two panels or flanges can be improved by using any of the several brands of conductive gasket, knitted wire mesh, or finger strip that are available. The purpose of these components is to be sandwiched between the mating surfaces to ensure continuous contact across the joint, so that shield current is not diverted (Figure 45.95). Their effectiveness depends entirely on how well they can match the impedance of the joint to that of the bulk shield material. A number of factors should be borne in mind when selecting a conductive gasket or finger material:

1. Conductivity: This should be of the same order of magnitude as the panel material.
2. Ease of mounting: Gaskets should normally be mounted in channels machined or cast in the housing, and the correct dimensioning of these channels is important to maintain an adequate contact pressure without overtightening. Finger strip can be mounted by adhesive tape, welding, riveting, soldering, fasteners, or by just clipping in place. The right method depends on the direction of contact pressure.
3. Galvanic compatibility with the host: To reduce corrosion, the gasket metal and its housing should be close together, and preferably of the same group, within the electrochemical series (Table 45.8). The housing material should be conductively finished: alochrome or alodine for aluminum, nickel or tin plate for steel.
4. Environmental performance: Conductive elastomers will offer combined electrical and environmental protection, but may be affected by moisture, fungus, weathering, or heat. If you choose to use separate environmental and conductive gaskets, the environmental seal should be placed outside the conductive gasket and mounting holes.

Conductive Coatings Many electronic products are enclosed in plastic cases for aesthetic or cost reasons. These can be made to provide a degree of electromagnetic shielding by covering one or both sides with a conductive coating
(Rankin, 1986). Normally this involves both a molding supplier and a coating supplier. Conductively filled plastic composites can also be used to obtain a marginal degree of shielding (around 20 dB); it is debatable whether the extra material cost justifies such an approach, considering that better shielding performance can be offered by conductive coating at lower overall cost (Bush, 1989). Conductive fillers affect the mechanical and aesthetic properties of the plastic, but their major advantage is that no further treatment of the molded part is needed. Another problem is that the molding process may leave a “resin-rich” surface which is not conductive, so that the conductivity across seams and joints is not ensured. As a further alternative, metallized fabrics are now becoming available which can be incorporated into some designs of compression molding. Shielding Performance The same dimensional considerations apply to apertures and seams as for metal shields. Thin coatings will be almost as effective against electric fields at high frequencies as solid metal cases, but are ineffective against magnetic fields. The major shielding mechanism is reflection loss (Figure 45.88) since absorption is negligible except at very high frequencies, and re-reflection (B) will tend to reduce the overall reflection losses. The higher the resistivity of the coating, the less its efficiency. For this reason conductive paints, which have a resistivity of around 1 X/square, are poorer shields than the various types of metallization (see Table 45.9) which offer resistivities below 0.1 X/square. Enclosure Design Resistivity will depend on the thickness of the coating, which in turn is affected by factors such as the shape and sharpness of the molding—coatings will adhere less to, and abrade more easily from, sharp edges and corners than rounded ones. Ribs, dividing walls, and other mold features that exist inside most enclosures make application of sprayed-on coatings (such as conductive paint or zinc arc spray) harder and favor the electroless plating methods. Where coatings must cover such features, the molding
Figure 45.95 Usage of gaskets and finger strip. (a) Conductive elastomer gaskets. (b) Beryllium copper finger strip.
Table 45.8 The electrochemical series: corrosion occurs when ions move from the more anodic metal to the more cathodic, facilitated by an electrolytic transport medium such as moisture or salts

Anodic (most easily corroded)
Group I:   Magnesium
Group II:  Aluminum + alloys; Zinc; Galvanized iron; Cadmium
Group III: Carbon steel; Iron; Tin, solder; Lead; Stainless steel
Group IV:  Chromium; Nickel; Copper + alloys; Silver; Brass; Palladium
Group V:   Platinum; Gold
Cathodic (least easily corroded)
design should include generous radii, no sharp corners, adequate space between ribs, and no deep or narrow crevices.

Coating Properties Environmental factors, particularly abrasion resistance and adhesion, are critical in the selection of the correct coating. Major quality considerations are:

1. Will the coating peel or flake off into the electrical circuitry?
2. Will the shielding effectiveness be consistent from part to part?
3. Will the coating maintain its shielding effectiveness over the life of the product?

Adhesion is a function of thermal or mechanical stresses, and is checked by a non-destructive tape or destructive cross-hatch test. Typically, the removal of any coating as an immediate result of tape application constitutes a test
865
Chapter | 45 EMC
failure. During and at completion of thermal cycling or humidity testing, small flakes at the lattice edges after a cross-hatch test should not exceed 15 percent of the total coating removal. Electrical properties should not change after repeated temperature/humidity cycling within the parameters agreed with the molding and coating supplies. Resistance measurements should be taken from the farthest distances of the test piece and also on surfaces critical to the shielding performance, especially mating and grounding areas. Table 45.9 compares the features of the more commonly available conductive coatings (others are possible but are more expensive and little used). These will give shielding effectiveness in the range of 30–70 dB if properly applied. It is difficult to compare shielding effectiveness figures given by different manufacturers unless the methods used to perform their shielding effectiveness tests are very clearly specified; different methods do not give comparable results. Also, laboratory test methods do not necessarily correlate with the performance of a practical enclosure for a commercial product.
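These acceptance criteria lend themselves to a simple pass/fail check. The following sketch is illustrative only: the record layout is invented, and the allowed resistance drift is a placeholder, since the text leaves the electrical limits to agreement between the molding and coating suppliers.

from dataclasses import dataclass

@dataclass
class CoatingTest:
    tape_test_removal: bool        # any coating lifted immediately by the tape test
    crosshatch_flaking_pct: float  # flaking at lattice edges after thermal/humidity cycling
    resistance_drift_pct: float    # resistance change across the test piece after cycling

def coating_accepted(t, max_drift_pct=10.0):
    """True if the sample meets all three criteria described above."""
    if t.tape_test_removal:
        return False
    if t.crosshatch_flaking_pct > 15.0:
        return False
    # 10 percent drift is only a placeholder for the supplier-agreed limit.
    return abs(t.resistance_drift_pct) <= max_drift_pct

print(coating_accepted(CoatingTest(False, 8.0, 4.5)))  # True
print(coating_accepted(CoatingTest(True, 0.0, 0.0)))   # False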
45.5 The regulatory framework

45.5.1 Customer Requirements
Historically, the EMC aspects of instrumentation have largely been driven by the customer's need for a reliable instrument. An instrument must function correctly in whatever environment it is supplied for, and one aspect of that environment is electromagnetic interference. Therefore, a technically alert customer will specify that correct operation must be maintained under given conditions of interference. Several large organizations have over the past decades developed their own standard requirements, sometimes based on published International Standards and sometimes not. These requirements are enforced contractually.
Internally developed requirements are usually a response to known or anticipated problems that are special to the environment in which the equipment will operate. For example, the railway track-side environment is particularly aggressive with respect to transient interference, and instrumentation for railways must meet specifications laid down by the Railway Industries Association. Wherever possible, use is made of test methods developed and published internationally, since these assure the widest availability and greatest cost-effectiveness of the test program.

Until recently, customer requirements in the instrumentation sector have been almost invariably for immunity only; no control has been placed on radio frequency or supply harmonic emissions. The concern has been mainly that instrument accuracy and operation remain unaffected by interference coupled in from an inevitably noisy environment; because that environment is so aggressive, the instrument's own emissions are insignificant by comparison. Historical exceptions to this have generally been in the military, aerospace, automotive, and naval sectors, where equipment has had to share platforms with radio receivers, and the proximity of the two has called for tight control over radio frequency emissions. With the increasing penetration of mobile communications into all areas, this need has become more widespread. Nevertheless, control of emissions is generally left to legislation.

Until the arrival of the EMC Directive, in most cases the specification for immunity of instruments was left to the customer. A particular exception has been instruments for legal metrology, in which field the immunity requirements are now covered by the Directive on non-automatic weighing instruments (90/384/EEC). The EMC Directive now sets minimum requirements for immunity (via the generic immunity standard) which apply to all equipment, including instrumentation. However, these requirements are not particularly onerous, and the customer is still at liberty to impose his or her own specifications over and above the legal minimum.
45.5.2 The EMC Directive
Of the various aims of the creation of the Single Market, the free movement of goods between European states is fundamental.
TABLE 45.9 Comparison of conductive coating techniques

Conductive paint (nickel, copper)
Cost: 5–5 £/m². E-field shielding: poor/average. Thickness: 0.05 mm. Adhesion: poor. Scratch resistance: poor. Maskable: yes. Comments: suitable for prototyping.

Zinc arc spray (zinc)
Cost: 5–10 £/m². E-field shielding: average/good. Thickness: 0.1–0.15 mm. Adhesion: depends on surface preparation. Scratch resistance: good. Maskable: yes. Comments: rough surface, inconsistent.

Electroless plate (copper, nickel)
Cost: 10–15 £/m². E-field shielding: average/good. Thickness: 1–2 µm. Adhesion: good. Scratch resistance: poor. Maskable: no. Comments: cheaper if entire part plated.

Vacuum metallization (aluminum)
Cost: 10–15 £/m². E-field shielding: average. Thickness: 2–5 µm. Adhesion: depends on surface preparation. Scratch resistance: poor. Maskable: yes. Comments: poor environmental qualities.
All member states impose standards and obligations on the manufacture of goods in the interests of quality, safety, consumer protection, and so forth. Because of detailed differences in procedures and requirements, these act as technical barriers to trade, fragmenting the European market and increasing costs because manufacturers have to modify their products for different national markets. For many years the EC tried to remove these barriers by proposing directives which gave the detailed requirements that products had to satisfy before they could be freely marketed throughout the Community, but this proved difficult because of the detailed nature of each directive and the need for unanimity before it could be adopted.

In 1985 the Council of Ministers adopted a resolution setting out a "New Approach to Technical Harmonization and Standards." Under the "new approach," directives are limited to setting out the essential requirements which must be satisfied before products may be marketed anywhere within the EC. The technical detail is provided by standards drawn up by the European standards bodies CEN, CENELEC, and ETSI. Compliance with these standards will demonstrate compliance with the essential requirements of each directive. All products covered by each directive must meet its essential requirements, but all products which do comply, and are labeled as such, may be circulated freely within the Community; no member state can refuse them entry on technical grounds. Decisions on new approach directives are taken by qualified majority voting, eliminating the need for unanimity and speeding up the process of adoption. The EMC Directive is possibly the most significant and wide-ranging of the new approach directives.
45.5.2.1 Background to the Legislation
In the U.K., previous legislation on EMC has been limited in scope to radio communications. Section 10 of the Wireless Telegraphy Act 1949 enables regulations to be made for the purpose of controlling both radio and non-radio equipment which might interfere with radio communications. These regulations have taken the form of various statutory instruments (SIs) which cover interference emissions from spark ignition systems, electromedical apparatus, radio frequency heating, household appliances, fluorescent lights, and CB radio. The SIs invoke British Standards which are closely aligned with international and European standards. This previous legislation is not comparable in scope to the EMC Directive, which covers far more than just interference to radio equipment, and extends to include immunity as well as emissions.
45.5.2.2 Scope and Requirements
The EMC Directive, 89/336/EEC, was adopted in 1989 to come into force on 1 January 1992. A subsequent amending directive, 92/31/EEC, extended the original transitional period to 31 December 1995.
The EMC Directive applies to apparatus which is liable to cause electromagnetic disturbance or which is itself liable to be affected by such disturbance. "Apparatus" is defined as all electrical and electronic appliances, equipment, and installations. Essentially, anything which is powered by electricity is covered, regardless of whether the power source is the public supply line-voltage, a battery source, or a specialized supply. An electromagnetic disturbance is any electromagnetic phenomenon which may degrade performance, without regard to frequency or method of coupling. Thus radiated emissions as well as those conducted along cables, and immunity from electromagnetic fields, line-voltage disturbances, conducted transients and radio frequency, electrostatic discharge, and lightning surges are all covered. No specific phenomena are excluded from the Directive's scope.

Essential Requirements
The essential requirements of the Directive (Article 4) are that the apparatus shall be so constructed that:

1. The electromagnetic disturbance it generates does not exceed a level allowing radio and telecommunications equipment and other apparatus to operate as intended.
2. The apparatus has an adequate level of intrinsic immunity to electromagnetic disturbance to enable it to operate as intended.

The intention is to protect the operation not only of radio and telecommunications equipment, but also of any equipment which might be susceptible to electromagnetic disturbances, such as information technology or control equipment. At the same time, all equipment must be able to function correctly in whatever environment it might reasonably be expected to occupy.

Sale and Use of Products
The Directive applies to all apparatus that is placed on the market or taken into service. The definitions of these two conditions do not appear within the text of the Directive but have been the subject of an interpretative document issued by the Commission (DTI 1991). The "market" means the market in any or all of the EC member states; products which are found to comply within one state are automatically deemed to comply within all others. "Placing on the market" means the first making available of the product within the EC, so that the Directive covers not only new products manufactured within the EC but also both new and used products imported from a third country. Products sold secondhand within the EC are outside its scope. Where a product passes through a chain of distribution before reaching the final user, it is the passing of the product from the manufacturer into the distribution chain which constitutes placing on the market. If the product is manufactured in or imported into the EC for subsequent export to a third country, it has not been placed on the market.
The Directive applies to each individual item of a product type, regardless of when it was designed, and whether it is a one-off or a high-volume product. Thus items from a product line that was launched at any time before 1996 must comply with the provisions of the Directive after 1 January 1996. Put another way, there is no “grandfather clause” which exempts designs that were current before the Directive took effect. However, products already in use before 1 January 1996 do not have to comply retrospectively. “Taking into service” means the first use of a product in the EC by its final user. If the product is used without being placed on the market, if, for example, the manufacturer is also the end user, then the protection requirements of the Directive still apply. This means that sanctions are still available in each member state to prevent the product from being used if it does not comply with the essential requirements or if it causes an actual or potential interference problem. On the other hand, it should not need to go through the conformity assessment procedures to demonstrate compliance (Article 10, which describes these procedures, makes no mention of taking into service). Thus an item of special test gear built up by a laboratory technician for use within the company’s design department must still be designed to conform to EMC standards, but should not need to follow the procedure for applying the CE mark. If the manufacturer resides outside the EC, then the responsibility for certifying compliance with the Directive rests with the person placing the product on the market for the first time within the EC, i.e., the manufacturer’s authorized representative or the importer. Any person who produces a new finished product from already existing finished products, such as a system builder, is considered to be the manufacturer of the new finished product.
45.5.2.3 Systems, Installations, and Components
An area of concern to system builders is how the Directive applies to two or more separate pieces of apparatus sold together or installed and operating together. It is clear that the Directive applies in principle to systems and installations. The Commission's interpretative document (DTI 1991) defines a system as several items of apparatus combined to fulfill a specific objective and intended to be placed on the market as a single functional unit. An installation is several combined items of apparatus or systems put together at a given place to fulfill a specific objective but not intended to be placed on the market as a single functional unit. Therefore a typical system would be a personal computer workstation comprising the PC, monitor, keyboard, printer, and any other peripherals. If the units were to be sold separately they would have to be tested and certified separately; if they were to be sold as a single package, then they would have to be tested and certified as a package. Any other combination of items of apparatus, not initially intended to be placed on the market together, is considered to be not a system but an installation.
Examples of this would appear to be computer suites, telephone exchanges, electricity substations, or television studios. Each item of apparatus in the installation is subject to the provisions of the Directive individually, under the specified installation conditions. As far as it goes, this interpretation is useful, in that it allows testing and certification of installations to proceed on the basis that each component of the installation will meet the requirements on its own. The difficulty of testing large installations in situ against standards that were never designed for them is largely avoided. Also, if an installation uses large numbers of similar or identical components, then only one of these needs to be actually tested.

Large Systems
The definition unfortunately does not help system builders who will be "placing on the market" (i.e., supplying to their customer on contract) a single installation, made up of separate items of apparatus but actually sold as one functional unit. Many industrial, commercial, and public utility contracts fall into this category. According to the published interpretation, the overall installation would be regarded as a system and should therefore comply as a package. As it stands at present, there are no standards which specifically cover large systems, i.e., ones for which testing on a test site is impractical, although some emissions standards do allow measurements in situ. Neither are there any provisions for large systems in the immunity standards. Therefore the only compliance route available to system builders is the technical construction file (see Section 45.5.2.5), but guidance as to how to interpret the Directive's essential requirements in these cases is lacking. The principal dilemma of applying the Directive to complete installations is that making legally relevant tests is difficult, but the nature of EMC phenomena is such that testing only the constituent parts, without reference to their interconnection, is largely meaningless.

Components
The question of when a "component" (which is not within the scope of the Directive) becomes "apparatus" (which is) remains problematical. The Commission's interpretative document defines a component to be "any item which is used in the composition of an apparatus and which is not itself an apparatus with an intrinsic function intended for the final consumer." Thus individual small parts such as ICs and resistors are definitely outside the Directive. A component may be more complex provided that it does not have an intrinsic function and its only purpose is to be incorporated inside an apparatus, but the manufacturer of such a component must indicate to the equipment manufacturer how to use and incorporate it. The distinction is important for manufacturers of board-level products and other subassemblies which may appear to have an intrinsic function and are marketed separately, yet cannot be used separately from the apparatus in which they will be installed. The Commission has indicated that it regards subassemblies which are to be tested as part of a larger apparatus as outside the Directive's scope.
On the other hand, subassemblies such as plug-in cards which are supplied by a third party to be inserted by the user should be tested and certified separately.
45.5.2.4 The CE Mark and the Declaration of Conformity
The manufacturer or his authorized representative is required to attest that the protection requirements of the Directive have been met. This requires two things:

1. That he issues a declaration of conformity, which must be kept available to the enforcement authority for 10 years following the placing of the apparatus on the market.
2. That he affixes the CE mark to the apparatus, or to its packaging, instructions, or guarantee certificate.

A separate Directive concerning the affixing and use of the CE mark has been published (EEC 1993). The mark consists of the letters CE as shown in Figure 45.96. The mark should be at least 5 mm in height and be affixed "visibly, legibly and indelibly," but its method of fixture is not otherwise specified. Affixing this mark indicates conformity not only with the EMC Directive but also with the requirements of any other new approach Directives relevant to the product; for instance, the CE mark on electrical machinery indicates compliance with both the Machinery Directive and the EMC Directive. The EC declaration of conformity is required whether the manufacturer self-certifies to harmonized standards or follows the technical file route (Section 45.5.2.5). It must include the following components:
1. A description of the apparatus to which it refers.
2. A reference to the specifications under which conformity is declared and, where appropriate, to the national measures implemented to ensure conformity.
3. An identification of the signatory empowered to bind the manufacturer or his authorized representative.
4. Where appropriate, reference to the EC type examination certificate (for radiocommunications apparatus only).

Figure 45.96 The CE mark.

45.5.2.5 Compliance with the Directive
Self-Certification
The route which is expected to be followed by most manufacturers is self-certification to harmonized standards (Section 45.5.3). Harmonized standards are those CENELEC or ETSI standards which have been announced in the Official Journal of the European Communities (OJEC). In the U.K. these are published as dual-numbered BS and EN standards. The potential advantage of certifying against standards, from the manufacturer's point of view, is that there is no mandatory requirement for testing by an independent test house. The only requirement is that the manufacturer makes a declaration of conformity (see Section 45.5.2.4) which references the standards against which compliance is claimed. Of course, the manufacturer will normally need to test the product to assure himself that it actually does meet the requirements of the standards, but this could be done in-house. Many firms will not have sufficient expertise or facilities in-house to do this testing, and will therefore have no choice but to take the product to an independent test house. But the long-term aim ought to be to integrate EMC design and test expertise within the rest of the development or quality department, and to decide which standards apply to the product range, so that the prospect of self-certification for EMC is no more daunting than the responsibility of functionally testing a product before shipping it.

The Technical Construction File
The second route available to achieve compliance is for the manufacturer or importer to generate a technical construction file (TCF). This is to be held at the disposal of the relevant competent authorities as soon as the apparatus is placed on the market and for ten years after the last item has been supplied. The Directive specifies that the TCF should describe the apparatus, set out the procedures used to ensure conformity with the protection requirements, and contain a technical report or certificate obtained from a competent body. This last requirement means that, in contrast to the self-certification route, the involvement of a third party (who must have been appointed by the national authorities) is mandatory. The purpose of the TCF route is to allow compliance with the essential requirements of the Directive to be demonstrated when harmonized or agreed national standards do not exist, or exist only in part, or if the manufacturer chooses not to apply existing standards for his own reasons. Since the generic standards are intended to cover the first two of these cases, the likely usage of this route will be under the following circumstances:

1. When existing standards cannot be applied because of the nature of the apparatus, or because it incorporates advanced technology which is beyond their breadth of concept.
2. When testing would be impractical because of the size or extent of the apparatus, or because of the existence of many fundamentally similar installations.
3. When the apparatus is so simple that it is clear that no testing is necessary.
4. When the apparatus has already been tested to standards that have not been harmonized or agreed but which are nevertheless believed to meet the essential requirements.
45.5.3 Standards Relating to the EMC Directive

45.5.3.1 Product-Specific Standards
Where they are available, the preferred method of complying with the EMC Directive is to comply with the requirements of harmonized product or product-family EMC standards. These are drawn up by CENELEC, and their reference numbers are published in the Official Journal of the European Communities. There are no such standards for instrumentation in general at the time of writing. Several standards for specific classes of instrumentation are in preparation. Meanwhile, it may be possible to apply one or more of the already published harmonized standards. Table 45.10 lists those which may be applicable within the instrumentation sector. The decision to use any of these standards should be based on a careful reading of its scope.
45.5.3.2 The Generic Standards
There are many industry sectors for which no product-specific standards have been developed. This is especially so for immunity. In order to fill this gap wherever possible, CENELEC has given a high priority to developing the generic standards. These are standards with a wide application, not related to any particular product or product family, and are intended to represent the essential requirements of the Directive. They are divided into two standards, one for immunity and one for emissions, each of which has separate parts for different environment classes (Table 45.10). Where a relevant product-specific standard does exist, this takes precedence over the generic standard. It will be common, though, for a particular product to be covered by one product standard for line-voltage harmonic emissions, another for radio frequency emissions, and the generic standard for immunity. All these standards must be satisfied before compliance with the Directive can be claimed. Other mixed combinations will occur until a comprehensive range of product standards has been developed, a process which will take several years.
TABLE 45.10 Relevant standards for instrumentation

Emissions
EN 50081 Generic Part 1: residential, commercial, and light industry. Requirements: tests as EN 55022 Class B, EN 55014, EN 60555-2,3 when applicable.
EN 50081 Generic Part 2: industrial. Requirements: tests as EN 55011.
EN 55011 Industrial, scientific, and medical apparatus. Requirements: conducted r.f. 150 kHz–30 MHz, radiated r.f. 30–1000 MHz.
EN 55014 Motor operated and thermal appliances for household and similar purposes. Requirements: conducted r.f. 150 kHz–30 MHz, disturbance power 30–300 MHz, discontinuous disturbances.
EN 55022 Information technology equipment. Requirements: conducted r.f. 150 kHz–30 MHz, radiated r.f. 30–1000 MHz.
EN 60555 Supply system disturbances caused by household appliances and similar equipment. Requirements: Part 2, mains harmonic currents; Part 3, flicker.

Immunity
EN 50082 Generic Part 1: residential, commercial, and light industry. Requirements: tests as IEC 801 Parts 2, 3, and 4 where applicable.
EN 50082 Generic Part 2: industrial. Requirements: tests as IEC 801 Parts 2, 3, and 4 where applicable.
EN 55024 Information technology equipment. Requirements: Part 2, electrostatic discharge (draft); Part 3, radiated r.f. (draft); other parts in preparation.
IEC 801-X (IEC 1000-4-X) Originally intended for industrial process measurement and control equipment; now denoted a "basic" standard. Requirements: EN 60801-2, Part 2, electrostatic discharge; ENV 50140, Part 3, radiated r.f.; Part 4, electrical fast transients; Part 5, surge (draft); ENV 50141, Part 6, conducted r.f.
IEC 1000-4-X EMC testing and measurement techniques (not relevant for the EMC Directive). Requirements: Part 8, power frequency magnetic field; Part 9, pulse magnetic field; Part 10, damped oscillatory magnetic field.
Environmental Classes
The distinction between environmental classes is based on the electromagnetic conditions that obtain in general throughout the specified environments. The inclusion of the "light industrial" environments (workshops, laboratories, and service centers) in Class 1 has been the subject of some controversy, but studies have shown that there is no significant difference between the electromagnetic conditions at residential, commercial, and light industrial locations. Equipment for the Class 2 "industrial" environment is considered to be connected to a dedicated transformer or special power source, in contrast to the Class 1 environment, which is considered to be supplied from the public line-voltage network (a selection sketch follows at the end of this subsection).

Referenced Standards
The tests defined in the generic standards are based only on internationally approved, already existing standards. For each electromagnetic phenomenon a test procedure given by such a standard is referenced, and a single test level or limit is laid down. No new tests are defined in the body of any generic standard. Since the referenced standards are undergoing revision to incorporate new tests, these are noted in an "informative annex" in each generic standard. The purpose of this is to warn users of those requirements that will become mandatory in the future, when a new standard or the revision to the referenced standard is agreed and published.
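The precedence rules described here reduce to a small piece of selection logic. The sketch below is purely illustrative, with invented function and table names: a product-specific standard, where one exists, displaces the generic standard for that phenomenon, and the generic part is otherwise chosen by environment class.

GENERIC = {
    1: {"emissions": "EN 50081-1", "immunity": "EN 50082-1"},  # residential/commercial/light industrial
    2: {"emissions": "EN 50081-2", "immunity": "EN 50082-2"},  # industrial
}

def applicable_standards(environment_class, product_specific=None):
    """Generic standards for the class, displaced per phenomenon by any product standard."""
    selection = dict(GENERIC[environment_class])
    selection.update(product_specific or {})
    return selection

# Information technology equipment in a Class 1 environment: a product emission
# standard exists, while immunity falls back to the generic standard.
print(applicable_standards(1, {"emissions": "EN 55022 Class B"}))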
References

Barlow, S. J., Improving the Line-voltage Transient Immunity of Microprocessor Equipment, Cambridge Consultants, IEE Colloquium, Interference and Design for EMC in Microprocessor Based Systems, London (1990).
Bond, A. E. J., "Implementation of the EMC Directive in the U.K.," DTI, IEE 8th International Conference on EMC, Edinburgh, 21–24 September 1992, IEE Conference Publication No. 362.
BS 6656 Guide to prevention of inadvertent ignition of flammable atmospheres by radio frequency radiation.
BS 6657 Guide for prevention of inadvertent initiation of electro-explosive devices by radio frequency radiation.
BS 6839: Part 1 Line-voltage signalling equipment: specification for communication and interference limits and measurements (EN 50065: Part 1).
Bush, D. R., "A simple way of evaluating the shielding effectiveness of small enclosures," 8th Symposium on EMC, Zurich, 5–7 March 1989.
Catherwood, M., "Designing for EMC with HCMOS microcontrollers," Motorola Application Note AN1050 (1989).
Cathey, W. T., and R. M. Keith, "Coupling reduction in twisted wire," International Symposium on EMC, IEEE, Boulder, CO, August 1981.
Chernisk, S., "A review of transients and their means of suppression," Motorola Application Note AN-843 (1982).
CISPR 23 Determination of limits for industrial, scientific and medical equipment.
Cowdell, R. B., "Unscrambling the mysteries about twisted wire," International Symposium on EMC, IEEE, San Diego, 9–11 October 1979.
Crane, L. F., and S. F. Srebranig, "Common mode filter inductor analysis," Coilcraft Data Bulletin (1985).
DTI, Electromagnetic Compatibility: European Commission Explanatory Document on Council Directive 89/336/EEC, Department of Trade & Industry (November 1991).
EEC, "Council Directive of 3rd May 1989 on the approximation of the laws of the Member States relating to Electromagnetic Compatibility (89/336/EEC)," Off. J. Eur. Commun., L 139 (23 May 1989) (amended by Directives 92/31/EEC and 93/68/EEC).
EEC, "Council Directive 92/31/EEC (transitional period)," Off. J. Eur. Commun., L 126 (12 May 1992).
EEC, "Council Directive 93/68/EEC (the CE marking Directive)," Off. J. Eur. Commun., L 220 (30 August 1993).
EN 55011 Limits and methods of measurement of radio disturbance characteristics of industrial, scientific and medical (ISM) radio-frequency equipment.
EN 55022 Limits and methods of measurement of radio interference characteristics of information technology equipment.
EN 60555 Disturbances in supply systems caused by household appliances and similar electrical equipment.
EN 50081 Electromagnetic compatibility—generic emission standard.
EN 50082 Electromagnetic compatibility—generic immunity standard.
prEN 55024 Immunity requirements for information technology equipment.
German, R. F., "Use of a ground grid to reduce printed circuit board radiation," 6th Symposium on EMC, Zurich, 5–7 March 1985.
Gilbert, M. J., "Design innovations address advanced CMOS logic noise considerations," National Semiconductor Application Note AN-690 (1990).
Goedbloed, J. J., "Transients in low-voltage supply networks," IEEE Trans. Electromag. Compatibility, EMC-29, 104–115 (1987).
Goedbloed, J. J., and W. A. Pasmooij, "Characterization of transient and CW disturbances induced in telephone subscriber lines," IEE 7th International Conference on EMC, York, 28–31 August 1990, 211–218.
Hayt, W. H., Engineering Electromagnetics, 5th ed., McGraw-Hill, New York (1988).
Hensman, G. O., "EMC related to the public electricity supply network," EMC 89—Product Design for Electromagnetic Compatibility (ERA Technology Seminar Proceedings 89-0001), Electricity Council (1989).
Howell, E. K., "How switches produce electrical noise," IEEE Trans. Electromag. Compatibility, EMC-21, 162–170 (1979).
IEC 801 (BS 6667) Electromagnetic compatibility for industrial-process measurement and control equipment.
IEC 1000 Electromagnetic compatibility.
Jones, J. W. E., "Achieving compatibility in interunit wiring," 6th Symposium on EMC, Zurich, 5–7 March 1985.
Kay, R., "Co-ordination of IEC standards on EMC and the importance of participating in standards work," IEE 7th International Conference on EMC, York, 28–31 August 1990, IEC, 1–6 (1990).
Marshall, R. C., "Convenient current-injection immunity testing," IEE 7th International Conference on EMC, York, 28–31 August 1990, 173–176 (1990).
Mazda, F. F. (ed.), Electronic Engineer's Reference Book, 5th ed., Butterworth-Heinemann, Oxford (1983).
Morrison, R., Grounding and Shielding Techniques in Instrumentation, 3rd ed., Wiley, Chichester, U.K. (1986).
Motorola, "Transmission line effects in PCB applications," Application Note AN1051 (1990).
Ott, H. W., "Ground—a path for current flow," International Symposium on EMC, IEEE, San Diego, 9–11 October 1979.
Ott, H. W., "Digital circuit grounding and interconnection," International Symposium on EMC, IEEE, Boulder, CO, August 1981.
Ott, H. W., "Controlling EMI by proper printed wiring board layout," 6th Symposium on EMC, Zurich, 5–7 March 1985.
Ott, H. W., Noise Reduction Techniques in Electronic Systems, 2nd ed., Wiley, Chichester, U.K. (1988).
Palmgren, C. M., "Shielded flat cables for EMI and ESD reduction," International Symposium on EMC, IEEE, Boulder, CO, August 1981.
Paul, C. R., "Effect of pigtails on crosstalk to braided-shield cables," IEEE Trans. Electromag. Compatibility, EMC-22, No. 3 (1980).
Paul, C. R., "A comparison of the contributions of common-mode and differential-mode currents in radiated emissions," IEEE Trans. Electromag. Compatibility, EMC-31, No. 2 (1989).
Rankin, I., "Screening plastics enclosures," Suppression Components, Filters and Screening for EMC (ERA Technology Seminar Proceedings 86-0006) (1986).
Smith, A. A., Coupling of External Electromagnetic Fields to Transmission Lines, Wiley, Chichester, U.K. (1977).
Staggs, D. M., "Designing for electrostatic discharge immunity," 8th Symposium on EMC, Zurich, March 1989.
Sutu, Y.-H., and J. J. Whalen, "Demodulation RFI in inverting and noninverting operational amplifier circuits," 6th Symposium on EMC, Zurich, March 1985.
Swainson, A. J. G., "Radiated emission, susceptibility and crosstalk control on ground plane printed circuit boards," IEE 7th International Conference on EMC, York, 28–31 August 1990, 37–41.
Tront, J. G., "RFI susceptibility evaluation of VLSI logic circuits," 9th Symposium on EMC, Zurich, March 1991.
Whalen, J. J., "Determining EMI in microelectronics—a review of the past decade," 6th Symposium on EMC, Zurich, 5–7 March 1985.
Williams, T., The Circuit Designer's Companion, Butterworth-Heinemann, Oxford (1991).
Wimmer, M., "The development of EMI design guidelines for switched mode power supplies—examples and case studies," 5th International Conference on EMC, York (IERE Conference Publication No. 71) (1986).
Further Reading

Morgan, D., Handbook for EMC Testing and Measurement, Institution of Electrical Engineers (1994).
Appendix A
General Instrumentation Books
For the most part, individual technical chapters of the Instrumentation Reference Book give references for further reading to books with a particular relevance to the topic of that chapter. Here we list some more general books that each give an overview of a wide range of instrumentation subjects. To help the reader decide whether any particular book will help with a particular problem, we include a table of contents for each of these books. Abnormal Situation Consortium, ASM Consortium Guidelines: Effective Operator Display Design, ASM Consortium, Phoenix, AZ (2008). Introduction Preventing and Responding to Abnormal Situations Guideline Detail and Examples Guidelines Conformance Examples Aftermatter and Appendices Battikha, N., The Condensed Handbook of Measurement and Control, ISA Press, Research Triangle Park, NC (3rd ed., 2006). Symbols Measurement Control Loops Control Valves Tables for Unit Conversion Corrosion Guide Enclosure Ratings Resources Berge, Jonas, Software for Automation: Architecture, Integration, and Security, ISA Press, Research Triangle Park, NC (2005). Introduction and Overview Benefits, Savings and Doubts Setup Configuration and Scripting Enterprise Integration and System Migration Troubleshooting Application Engineering and Design Management and Administration Safety, Availability and Security
Bolton, W., Industrial Control and Instrumentation, Longman, Harlow, U.K. (1991). Measurement systems Control systems Transducers Signal conditioning and processing Controllers Correction units Data display Measurement systems Control systems Bolton, W., Instrumentation and Process Measurements, Longman, Harlow, U.K. (1991). Basic instrument systems Sensing elements Signal converters Displays Pressure measurement Measurement of level and density Measurement of flow Measurement of temperature Maintenance Considine, D. M., Industrial Instruments and Controls Handbook, McGraw-Hill, New York (1993). Introductory review Control system fundamentals Controllers Process variables—field instrumentation Geometric and motion sensors Physicochemical and analytical systems Control communications Operator interface Valves, servos, motors and robots Corripio, A. B., Tuning of Industrial Control Systems, 2nd ed., ISA Press, Research Triangle Park, NC (2001). Feedback controllers Open-loop characterization of process dynamics How to select feedback controller modes How to tune feedback controllers Computer feedback control
Tuning cascade control systems Feed-forward, ratio, multivariable, adaptive, and self-tuning control Dally, J. and W. Riley, Instrumentation for Engineering Measurements, 2nd ed., Wiley, New York (1993). Applications of electronic systems Analysis of circuits Analog recording instruments Digital recording systems Sensors for transducers Signal conditioning circuits Resistance-type strain gauges Force, torque and pressure measurements Displacement, velocity and acceleration measurements Analysis of vibrating systems Temperature measurements Fluid flow measurements Statistical methods (Note: Summary and problems appear in every chapter) Dieck, R. H., Measurement Uncertainty, 4th ed., ISA Press, Research Triangle Park, NC (2007). Fundamentals of measurement uncertainty analysis The measurement uncertainty model How to do it summary Uncertainty (error) propagation Weighting method for multiple results Applied considerations Presentation of results Figliola, R. S. and D. E. Beasley, Theory and Design for Mechanical Measurements, Wiley, New York (1991). Basic concepts of measurement methods Static and dynamic characteristics of signals Measurement system behavior Probability and statistics Uncertainty analysis Electrical devices, signal processing and data acquisition Temperature measurements Pressure and velocity measurements Flow measurements Metrology, displacement and motion measurements Strain measurement Finkelstein, L. and K. T. V. Grattan, Concise Encyclopedia of Measurement and Instrumentation, Pergamon, Oxford (1993). General theoretical principles of measurement and instrumentation Instrument and instrument systems in relation to their life cycles Instrument system elements and general technology Measurement information systems classified by measurand Applications History of measurement and instrumentation
Gifford, Charlie (editor and contributing author), The Hitchhiker’s Guide to Manufacturing Operations Management: ISA95 Best Practices Book 1.0, ISA Press (2007). ISA95 best practices and business case evolve through manufacturing application An overview and comparison of ISA95 and OAGIS OAGIS, ISA95 and related manufacturing integration standards, a survey ISA95—as-is/to-be study Manufacturing information systems—ISA88/95 based functional definitions ISA95 implementation best practices, workflow descriptions using B2MML ISA95 based operations and KPI metrics assessment and analysis ISA95 the SAP Enterprise-Plant link to achieve adaptive manufacturing analysis ISA95 based change management Gillum, Donald R., Industrial Pressure, Level and Density Measurement, 2nd ed., ISA Press, Research Triangle Park, NC (2009). Introduction to Measurements Pressure Measurement and Calibration Principles Pressure Transducers and Pressure Gages Transmitters and Transmission Systems Level Measurement Theory and Visual Measurement Techniques Hydrostatic Head Level Measurement Electrical Level Measurement Liquid Density Measurement Hydrostatic Tank Gaging Instrument Selection and Applications Deadweight Gage Calibration Pressure Instruments Form ISA20.40a1 Gruhn, P. and H. Cheddie, Safety Instrumented Systems, 2nd ed., ISA Press, Research Triangle Park, NC (2005). Introduction Design life cycle Risk Process control vs. Safety control Protection layers Developing the safety requirement specifications Developing the safety integrity level Choosing a technology Initial system evaluations Issues relating to field devices Engineering a system Installing a system Functional testing Managing changes to a system Justification for a safety system SIS design checklist Case study
Horne, D. F., Measuring Systems and Transducers for Industrial Applications, Institute of Physics Publishing, London (1988). Optical and infra-red transmitting systems Photogrammetry and remote earth sensing Microwave positioning and communication systems Seismic field and seabed surveying and measurement of levels Hughes, T. A., Programmable Controllers, 4th ed., ISA Press, Research Triangle Park, NC (2005). Introduction Numbering systems and binary codes Digital logic fundamentals Electrical and electronic fundamentals Input/output systems Memory and storage devices Ladder logic programming High-level programming languages Data communication systems System design and applications Installation, maintenance and troubleshooting ISA, Dictionary of Measurement and Control, 3rd ed., ISA Press, Research Triangle Park, NC (1999). Liptak, B., Instrument Engineer’s Handbook 4th ed., Chilton, U.S.; ISA Press, Butterworth-Heinemann, U.K. and rest of world (2003). Volume One: Process Measurement and Analysis 1. Instrument terminology and performance 2. Flow measurement 3. Level measurement 4. Temperature measurement 5. Pressure measurement 6. Density measurement 7. Safety, weight and miscellaneous sensors 8. Analytical instrumentation 9. Appendix Volume Two: Process Control 1. Control theory 2. Controllers, transmitters converters and relays 3. Control centers, panels and displays 4. Control valves, on-off and throttling 5. Regulators and other throttling devices 6. PLCs and other logic devices 7. DCS and computer based systems 8. Process control systems 9. Appendix Love, Jonathan, Process Automation Handbook: A Guide to Theory and Practice, Springer Verlag, London (2007). Technology and Practice Instrumentation Final Control Elements Conventional Control Strategies Process Control Schemes
Digital Control Systems Control Technology Management of Automation Projects Theory and Technique Maths and Control Theory Plant and Process Dynamics Simulation Advanced Process Automation Advanced Process Control Magison, Ernest C., Electrical Instruments in Hazardous Locations, 4th ed., ISA Press, Research Triangle Park, NC (1998). Historical background and perspective Combustion and explosion fundamentals Classification of hazardous locations and combustible materials Practice and principles of hazard reduction practice Explosion proof enclosures Reduction of hazard by pressurization Encapsulation, sealing and immersion Increased safety, type of protection e Ignition of gases and vapors by electrical means Intrinsically safe and nonincendive systems Design and evaluation of intrinsically safe apparatus, intrinsically safe systems and nonincendive systems Ignition by optical sources Dust hazards Human safety Degree of protection by enclosures Mandel, J., Evaluation and Control of Measurements, Marcel Dekker, New York (1991). Measurement and statistics Basic statistical concepts Precision and accuracy: the central limit theorem, weighting Sources of variability Linear functions of a single variable Linear functions of several variables Structured two-way tables of measurements A fit of high precision two-way data A general treatment of two-way structured data Interlaboratory studies Control charts Comparison of alternative methods Data analysis: past, present and future Marshall, Perry S., and Rinaldi, John S., Industrial Ethernet, 2nd ed., ISA Press, Research Triangle Park, NC (2005). What Is Industrial Ethernet? A Brief Tutorial on Digital Communication Ethernet Hardware Basics Ethernet Protocol and Addressing Basic Ethernet Building Blocks Network Health, Monitoring and System Maintenance Installation, Troubleshooting and Maintenance Tips
Basic Precautions for Network Security Power over Ethernet (PoE) Wireless Ethernet Nachtigal, C. L., Instrumentation and Control: Fundamentals and Applications, Wiley, New York (1990). Introduction to the handbook Systems engineering concepts Dynamic systems analysis Instrument statics Input and output characteristics Electronic devices and data conversion Grounding and cabling techniques Bridge transducers Position, velocity and acceleration measurement Force, torque and pressure measurement Temperature and flow transducers Signal processing and transmission Data acquisition and display systems Closed-loop control system analysis Control system performance modification Servoactuators for closed-loop control Controller design General purpose control devices State-space methods for dynamic systems analysis Control system design using state-space methods Scholten, Bianca, The Road to Integration: A guide to applying the ISA95 Standard in manufacturing, ISA Press, Research Triangle Park, NC (2007). Getting acquainted with ISA95 Applying ISA95 as an Analysis Tool Understanding and Applying the ISA95 Object Models Applying ISA95 to Vertical Integration Sherman, R. E., Analytical Instrumentation, ISA Press, Research Triangle Park, NC (1996). Introduction to this technology Typical analyzer application justifications Interfacing analyzers with systems Specification and purchasing of analyzers Calibration considerations Training aspects Spc/sqc for analyzers Personnel and organizational issues Validation of process analyzers Sample conditioning systems Component specific analyzers Electrochemical analyzers Compositional analyzers, spectroscopic analyzers Physical property Spitzer, D. W., Flow Measurement, 3rd ed., ISA Press, Research Triangle Park, NC (2004). Physical properties of fluids Fundamentals of flow measurement Signal handling
Field calibration Installation and maintenance Differential pressure flowmeters Magnetic flowmeters Mass flowmeters—open channel flow measurement Oscillatory flowmeters Positive displacement flowmeters Target flowmeters Thermal mass flowmeters and controllers Tracer dilution measurement; turbine flowmeters Ultrasonic flowmeters Variable area flowmeters Insertion (sampling) flow measurement Custody transfer measurement Sanitary flowmeters Metrology, standards, and specifications Spitzer, D. W., Regulatory and Advanced Regulatory Control: Application Techniques, ISA Press, Research Triangle Park, NC (1994). Introduction Manual Control Field Measurement Devices Controllers Final Control Elements Field Equipment Tuning Regulatory Controller Features PID Control Controller Tuning Regulatory Control Loop Pairing The Limitations of Regulatory Control Advanced Regulatory Control Tools Advanced Regulatory Control Design Applying Advanced Regulatory Control Sydenham, P. H., N. H. Hancock, and R. Thorn, Introduction to Measurement Science and Engineering, Wiley, Chichester (1992). Introduction Fundamental concepts Signals and information The information machine Modeling and measurement system Handling and processing information Creating measurement systems Selecting and testing of instrumentation Trevathan, Vernon L., ed., A Guide to the Automation Body of Knowledge, 2nd ed., ISA Press, Research Triangle Park, NC (2006). Process Instrumentation Analytical Instrumentation Continuous Control Control Valves Analog Communications Control System Documentation
Control Equipment Discrete Input/Output Devices and General Manufacturing Measurements Discrete and Sequencing Control Motor and Drive Control Motion Control Process Modeling Advanced Process Control Industrial Networks Manufacturing Execution Systems and Business Integration System and Network Security
Operator Interface Data Management Software Custom Software Operator Training Checkout, System Testing and Startup Troubleshooting Maintenance, Long term Support and System Management Automation Benefits and Project Justifications Project Management and Execution Interpersonal Skills for Automation Professionals
Appendix B
Professional Societies and Associations
American Association for Laboratory Accreditation (A2LA), 656 Quince Orchard Rd., Gaithersburg, MD 20878-1409. 301-670-1377. A2LA is a nonprofit, scientific membership organization dedicated to the formal recognition of testing and calibration organizations that have achieved a demonstrated level of competence. The American Automatic Control Council (AACC) is an association of the control systems divisions of eight member societies in the United States. American Electronics Association (AEA), 5201 Great America Pkwy, Suite 520, Santa Clara, CA 95054. 408-987-4200. The AEA is a high-tech trade association representing all segments of the electronics industry.
American Society of Test Engineers (ASTE), P.O. Box 389, Nutting Lake, MA 01865-0389. 508-765-0087. The ASTE is dedicated to promoting test engineering as a profession. ASM International for management of materials. Association of Independent Scientific, Engineering, and Testing Firms (ACIL), 1659 K St., N.W., Suite 400, Washington, DC 20006. 202-887-5872. ACIL is a national association of third-party scientific and engineering laboratory testing and R and D companies serving industry and the public through programs of education and advocacy. The Automatic Meter Reading Association Advancing utility technology internationally.
The American Institute of Physics includes access to their publications.
British Institute of Non-Destructive Testing, 1 Spencer Parade, Northampton, NN1 5 AA, U.K. 01604 30124.
American National Standards Institute (ANSI), 11 W. 42nd St., New York, NY 10036. 212-642-4900.
British Society for Strain Measurement, Dept of Civil Engineering, University of Surrey, Guild-ford GU2 5XH, U.K. 01483 509214.
ASHRAE, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers. American Society for Nondestructive Testing (ASNT), 1711 Arlingate Lane, P.O. Box 28518, Columbus, OH 43228-0518. 614-274-6003. ASNT promotes the discipline of nondestructive testing (NDT) as a profession, facilities NDT research, and the application of NDT technology, and provides its 10,000 members with a forum for exchange of NDT information. ASNT also provides NDT educational materials and training programs. American Society for Quality Control (ASQC), 611 E. Wisconsin Ave, Milwaukee, W1 53202. 414-272-8575. American Society for Testing and Materials (ASTM), 1916 Race St., Philadelphia, PA 19103. 215-299-5400. ASTM is an international society of 35,000 members (representatives of industry, government, academia, and the consumer) who work to develop high-quality, voluntary technical standards for materials, products, systems, and services.
British Standards Institution, 2 Park Street, London, W1A 2BS, U.K. 0171 629 9000. Canadian Standards Association (CSA), 178 Rexdale Blvd., Rexdale, Ontario M9W 1R3, Canada. 416-7474007. China Instrument and Control Society (CIS), based in Beijing. The Computer Society is the leading provider of technical information and services to the world’s computing professionals. Electronic Industries Association (EIA), 2001 Pennsylvania Ave. N.W., Washington, D.C. 20006. 202-457-4900. The Embedded Software Association (ESOFTA) provides its members with assorted marketing and communications services, a framework for member-initiated standards activities, and a forum for software creator and user communications.
The European Union Control Association (EUCA) In 1990, a number of prominent members of the systems and control community from countries of the European Union decided to set up an organization, EUCA, the main purpose of which is to promote initiatives aiming at enhancing scientific exchanges, disseminating information, coordinating research networks, and technology transfer in the field of systems and control within the Union. The Fabless Semiconductor Association (FSA) mission is to stimulate technology and foundry capacity by communicating the future needs of the fabless semiconductor segment in terms of quantity and technology; to provide interactive forums for the mutual benefit of all FSA members; and to be a strong, united voice on vital issues affecting the future growth of fabless semiconductor companies. The Federation of the Electronics Industry is a useful body and a source for information on standards such as the European Directives on EMC (the CE Mark). GAMBICA Association Ltd., Leicester House, 8 Leicester Street, London, WC2H 7BN, U.K. 0171 4370678. GAMBICA is the trade association of the British Instrumentation Control and Automation Industry. IMEKO (International Measurement Confederation) Forum for advancements in measurement science and technology.
The International Electrotechnical Commission (IEC) is the worldwide standards organization charged with development and control of global standards. 3, rue de Varembé, P.O. Box 131, CH-1211, Geneva 20, Switzerland. The Institute of Instrumentation and Control Australia is the professional body serving those involved in the field of instrumentation and control in Australia. The Institute of Measurement and Control is a Britishbased organization, with some international branches in Ireland and Hong Kong, for instance. Institution of Mechanical Engineers, 1 Birdcage Walk, London, SW1H 9JJ, U.K. 0171 222 7899. ISA-The International Society of Automation, formerly the Instrument Society of America and the Instrumentation Systems and Automation Society, P.O. Box 12277, Research Triangle Park, NC 27709. 919-549-8411. The ISA is an international society of more than 49,000 professionals involved in instrumentation, measurement, and control. The ISA conducts training programs, publishes a variety of literature, and organizes an annual conference and exhibition of instrumentation and control. The ISA page, in constant development, is now an excellent resource. ISA was formed some 50 years ago and boasts almost 30,000 members. International Electronics Packaging Society (IEPS), P.O. Box 43, Wheaton, IL 60189–0043. 708-260-1044.
The Industrial Automation Open Networking Alliance (IAONA) is for industrial automation leaders committed to the advancement of open networking from sensing devices to the boardroom via Internet- and Ethernetbased networks.
International Federation of Automatic Control, founded in September 1957, is a multinational federation of National Member Organizations (NMOs), each one representing the engineering and scientific societies concerned with automatic control in its own country.
The Institute of Electrical and Electronics Engineers (IEEE) is the world's largest technical professional society. A nonprofit organization, it promotes the development and application of electrotechnology and allied sciences for the benefit of humanity, the advancement of the profession, and the well-being of its members.
International Frequency Sensor Association (IFSA) The main aim of IFSA is to provide a forum for academicians, researchers, and engineers from industry to present and discuss the latest research results, experiences, and future trends in the area of design and application of different sensors with digital, frequency (period), time interval, or duty-cycle output. Very fast advances in IC technologies have brought new challenges in the physical design of integrated sensors and micro-sensors. This development is essential for developing measurement science and technology in this millennium.
Institute of Environmental Sciences (IES), 940 E. Northwest Hwy., Mount Prospect, IL 60056. 708-255-1561. Institute of Measurement and Control, 87 Gower Street, London, WC1E 6AA, U.K. 0171 387 4949. The Institute of Physics (IOP) gives some information on publications and has some interesting links. Institute of Quality Assurance, 8-10 Grosvenor Gardens, London, SW1W 0DQ, U.K. 0171 730 7154. Institution of Chemical Engineers, 12 Gayfere Street, London, SW1P 3HP, U.K. 0171 222 2681. The Institution of Electrical Engineers (IEE) lists all its services and books, with brief reviews, conference proceedings, etc.
The International Instrument Users’ Association has been set up “for cooperative instrument evaluations by member-users and manufacturers.” International Organization for Standardization (ISO), 1 rue de Varembe, CH-1211, Geneva 20, Switzerland, +41-22-749-01-11. The ISO promotes standardization and related activities with a view to facilitating the international exchange of goods and services and to developing cooperation in the spheres of intellectual, scientific, technological, and economic activity.
The SPIE—The International Society for Optical Engineering, formed in 1955 as the Society for Photo-Optical Instrumentation Engineers, serves more than 11,000 professionals in the field throughout the world, as does the Optical Society of America. The former is perhaps more applications and engineering oriented, the latter more in the field of research—although both areas overlap these days. Another site of interest to optical engineers is the Laser Institute of America. International Telecommunications Union (CCITT), Place des Nations, CH-1211, Geneva 20, Switzerland. +41-22-99-51-11.
Optical Society of America (OSA), 2010 Massachusetts Ave., N.W., Washington, D.C. 20036. 202-223-8130. Precision Measurements Association (PMA), 3685 Motor Ave., Suite 240, Los Angeles, CA 90034. 310-287-0941. Process Industry Practices (PIP) is a consortium of process industry owners and engineering construction contractors who serve the industry. Reliability Analysis Center, IIT Research Institute, 201 Mill St., Rome, NY 13440-6916. 315-337-0900. Process engineers will be interested in the site of The Royal Society of Chemistry.
Japan Electric Measuring Instruments Manufacturers' Association (JEMIMA), 1-9-10, Torano-mon, Minato-ku, Tokyo 103, Japan. +81-3-3502-0601. Established in 1935 as a nonprofit industrial organization authorized by the Japanese government, JEMIMA is devoted to a variety of activities: cooperation with the government, providing statistics about the electronics industry, and sponsoring local and overseas exhibitions.
SAE, 400 Commonwealth Dr., Warrendale, PA 15096-0001. 412-776-4841.
The Low Power Radio Association (LPRA) is an association for companies involved in deregulated radio anywhere in the world. It is believed to be the only such association for users and manufacturers of low-power radio devices in the deregulated frequency bands.
Society for Information Display (SID), 1526 Brookhollow Dr., Suite 82, Santa Ana, CA 92705-5421. 714-545-1526. SID is a nonprofit international society devoted to the advancement of display technology, manufacturing, integration, and applications.
National Conference of Standards Laboratories (NCSL), 1800 30th St., Suite 305B, Boulder, CO 80301-1032. 303-440-3339.
The Society of Manufacturing Engineers has developed the Global Manufacturing Network to help users find information on manufacturers, suppliers, etc.
National Institute of Standards and Technology (NIST), Publications and program inquiries, Gaithersburg, MD 20899. 301-975-3058. As a non-regulatory agency of the U.S. Department of Commerce Technology Administration, NIST promotes U.S. economic growth by working with industry to develop and apply technology, measurements, and standards.
Society of Women Engineers (SWE), 120 Wall St., New York, NY 10005. 212-509-9577.
National ISO 9000 Support Group, 9964 Cherry Valley, Bldg. 2, Caledonia, MI 49316. 616-891-9114. The National ISO 9000 Support Group is a nonprofit network of companies that serves as a clearinghouse for ISO 9000 information and certified assessors. The group provides full ISO 9000 implementation support for $150 per year. National Technical Information Service (NTIS), 5285 Port Royal Road, Springfield, VA 22161. 703-487-4812. The NTIS is a self-supporting agency of the U.S. Department of Commerce and is the central source for public sale of U.S. government-sponsored scientific, technical, engineering, and business-related information.
Semiconductor Equipment and Materials International (SEMI), 805 E. Middlefield Road, Mountain View, CA 94043. 415-964-5111. Semiconductor Industry Association (SIA), 4300 Stevens Creek Blvd., Suite 271, San Jose, CA 95129. 408-246-2711.
Software Engineering Institute, Carnegie-Mellon University, Pittsburgh, PA 15213-3890. 412-269-5800. TAPPI is the technical association for the pulp and paper industry. Telecommunications Industry Association (TIA), 2001 Pennsylvania Ave., N.W., Suite 800, Washington, D.C. 20006-1813. 202-457-4912. Underwriters Laboratories, 333 Pfingsten Road, Northbrook, IL 60062. 708-272-8800. Underwriters Laboratories (UL) is an independent, nonprofit certification organization that has evaluated products in the interest of public safety for 100 years. VDI/VDE-GMA is the society for measurement and automatic control in Germany. The World Batch Forum offers a noncommercial venue for the dissemination of the information that batch engineers need.
Appendix C
The Institute of Measurement and Control

Role and objectives

The science of measurement is very old and has advanced steadily in the precision with which measurements may be made and in the variety and sophistication of the methods available. In the last century the rate of advance was very rapid, stimulated in particular by the needs of industry. Control engineering has a much more recent origin: with the advent of complex requirements, such as those of the process and aerospace industries, there has been a veritable explosion of new theory and application during the last 50 years. In this period there has been a correspondingly rapid increase in the number of people working in these fields.

The theory and application of measurement and control characteristically require a multidisciplinary approach and so do not fit into any of the single-discipline professional institutes. The Institute brings together thinkers and practitioners from the many disciplines which have a common interest in measurement and control. It organizes meetings, seminars, exhibitions, and national and international conferences on a large number of topics. It has a very strong local section activity, providing opportunities for interchange of experience and for introducing advances in theory and application.

The Institute provides qualifications in a rapidly growing profession and is one of the few chartered engineering institutions which qualifies incorporated engineers and engineering technicians as well as chartered engineers. In its members' journal, Measurement and Control, the Institute publishes practical technical articles, product and business news, and information on technical advances; in the newsletter, Interface, the activities of the Institute, its members, and the engineering profession in general are reported. In addition the Institute provides a whole range of learned and other publications.

The objects of the Institute, expressed in the Royal Charter, are: "To promote for the public benefit, by all available means, the general advancement of the science and practice of measurement and control technology and its application."
To further its objects the Institute acts as a qualifying body, conferring membership only on those whose qualifications comply with the Institute’s standards. It acts as a learned society by disseminating and advancing the knowledge of measurement and control and its application at all levels. It is the academic and professional body for the profession, requiring members to observe a code of conduct.
History

Like many professional bodies, the Institute of Measurement and Control arose through the need for a group of like-minded people to meet and exchange ideas. They first met at the Waldorf Hotel in London during October 1943, and a society of instrument technology was proposed. The Institute was founded in May 1944 as the Society of Instrument Technology (SIT) to cater to the growing body of instrument technologists whose interests transcended the fields of existing institutions.

During the late 1940s and the 1950s the Society progressed steadily. By 1960 the number of members had grown to over 2500, and local sections had been formed in the main industrial areas in the United Kingdom. Control engineering, as opposed to measurement, began to be recognized as a distinct discipline only after the establishment of SIT. The evidence of the relationship between the two topics stimulated the formation of a control section of SIT, and the large and enthusiastic participation in that section's first meeting more than vindicated its creation.

In 1957 the importance of the computer was acknowledged through the formation of a data processing section, created to serve the large and growing interest in data handling related to process control, a combination outside the scope of any other learned society. By 1965 there were four specialized sections concerned with measurement technology, control technology, systems engineering, and automation. At that time it was realized that in a field developing as rapidly as that of measurement and control, a more flexible structure would be required to
deal with the steadily advancing and changing interests of the Institute's members. Consequently, a national technical committee was set up to oversee the work of panels, which at present include: a physical measurements panel, a systems and control technology panel, a systems and management panel, an industrial analytical panel, an educational activities panel, and a standards policy panel. Since 1986 the work of the national technical committee has been taken over by a learned society board, to which, in addition to the above technical panels, the publications executive committees report. Members who have particular interests in specialized fields are encouraged to set up new panels within the framework of the Institute through which their work can be advanced at a professional level.

In 1975 the Institute was confirmed as the United Kingdom's representative body for those engaged in the science and practice of measurement and control technology through the granting, by the Privy Council, of a Royal Charter of incorporation.
Qualifications

The Institute influences educational courses, from the broadly based Full Technological Certificate of the City and Guilds of London Institute, through BTEC and SCOTVEC certificates and diplomas, to first degrees, in its role as an Engineering Council-authorized course-accreditation institution. Standards of courses are maintained by the education, training, and qualifications committee and its subsidiary accreditation council.
Chartered status for individuals

Corporate members of the Institute, those with the grade of fellow or member, all bear the title "chartered measurement and control technologist." In addition, those with appropriate engineering qualifications can be registered by the Institute on the chartered engineers section of the register maintained by the Engineering Council, thus becoming chartered engineers (CEng). Registration as a European engineer (EurIng) with FEANI (European Federation of National Engineering Associations) is also possible for CEng members of the Institute, through the Institute.
Incorporated engineers and engineering technicians

Licentiates and associates of the Institute may also be entered on the Engineering Council's register by the Institute as incorporated engineers and engineering technicians, IEng and EngTech respectively. Both titles are becoming increasingly recognized as significant qualifications in
British industry, and there is additionally a route for IEng registration with FEANI.
Membership

The Institute has a wide range of grades of membership available in two basic forms, corporate or noncorporate.
Corporate members

Corporate members with accredited UK degrees or equivalent can be nominated by the Institute for registration on the Engineering Council's register as chartered engineers. They may use the designatory letters "CEng." All corporate members are entitled to use the exclusive and legally protected title "chartered measurement and control technologist." There are three classes of corporate membership: honorary fellows (HonFInstMC), fellows (FInstMC), and members (MInstMC). The following briefly summarizes the requirements for these grades of membership:
Honorary fellows

The council of the Institute from time to time invites eminent professional engineers to become honorary fellows. They are fellows or members of the Institute who have achieved exceptionally high distinction in the profession.
Fellows

Members over 33 years of age who have carried superior responsibility for at least five years may be elected as fellows. Persons who have achieved eminence through outstanding technical contributions or superior professional responsibility may be directly elected as fellows by the council.
Members

Engineers over 25 years of age who have an approved degree or equivalent, with at least four years' professional experience and responsibility, of which two years should be professional training, may be elected as members of the Institute. Exceptionally, there are mature routes for those over 35 years of age who have 15 years' experience but insufficient academic qualifications; written submissions and interviews are required. Information and advice is available from the Institute about the appropriate educational qualifications and the mature route. There is a specific syllabus for the Engineering
Council Examination, success in which provides the necessary level of qualification.
Noncorporate members

There are seven classes of noncorporate member: companion, graduate, licentiate, associate, student, affiliate, and subscriber. The following briefly summarizes the requirements.
Companions

Persons who, in the opinion of Council, have acquired national distinction in an executive capacity in measurement and control and are at least 33 years of age may be elected as companions. There is no particular academic requirement for this class of membership.
Students

Students who are at least 16 years of age and following a relevant course of study may be elected as student members of the Institute.
Affiliates

Anyone wishing to be associated with the activities of the Institute who is not qualified for other classes of membership may become an affiliate.
Subscribers

Companies and organizations concerned with measurement and control may become subscribers.
Graduates

The requirement for graduate membership is an accredited degree or equivalent. Information and advice is available from the Institute about educational qualifications.

Licentiates

Persons of at least 23 years of age who have an accredited BTEC or SCOTVEC Higher National Award or equivalent, plus five years' experience of which two must be approved practical training, may be elected as licentiates. Licentiates can register through the Institute as incorporated engineers; registration allows the use of the designatory letters "IEng." Exceptionally, for those who have not achieved the academic qualification, there are mature routes: candidates must be at least 35 years of age and have 15 years' experience, and written submissions and interviews are required.

Associates

Persons who are at least 21 years of age and have attained the academic standards of an accredited BTEC or SCOTVEC National Award or equivalent, plus five years' experience including two years' approved practical training, may be elected as associates. Associates can register through the Institute as engineering technicians; registration allows the use of the designatory letters "EngTech." Exceptionally, for those who have not achieved the academic qualifications, there are mature routes: candidates must be at least 35 years of age and have 15 years' experience, and written submissions and interviews are required.

Application for membership

Full details of the requirements for each class of membership, including the rules for mature candidates, examinations, and professional training, are available from the Institute.

National and international technical events

The Institute organizes a range of technical events—from one-day colloquia to multinational conferences held over several days—either on its own account or on behalf of international federations. The wide nature of the Institute's technical coverage means that many events are held in association with other, more narrowly based, institutions and societies.
Local sections

Members meet on a local basis through the very active local sections. There are more than 20 local sections in the UK, with one also covering Ireland and one in Hong Kong. Each local section is represented on the Institute's council, providing a direct link between the members and the council. Normally, about 200 local section meetings take place annually.
Publications

In addition to the monthly journal Measurement and Control and the newsletter Interface, the Institute publishes Transactions, which contains primary, refereed material. Special issues of the Transactions, covering particular topics, are published within the five issues a year.
In addition, the Institute publishes texts, conference proceedings, and information relevant to the profession. There is also the Instrument Engineer's Yearbook, a major information source for measurement and control practitioners.
Advice and information

The Institute plays its part in policy formulation through its representation on such bodies as the Parliamentary and Scientific Committee, the Engineering Council, the Business and Technician Education Council, the British Standards Institution, the United Kingdom Automatic Control Council, the City and Guilds of London Institute, and numerous other national and local groups and committees.
Awards and prizes

The Institute has a considerable number of awards and prizes, ranging from the high-prestige Sir George Thomson Gold Medal, awarded every five years to a person whose contribution to measurement science has resulted in fundamental improvements in the understanding of the nature of the physical world, to prizes for students in measurement and control on national courses and to school students.
Government and administration

The Institute is governed by its council, which consists of the president, the three most recent past presidents, up to four vice-presidents, the honorary treasurer, the honorary secretary, and 36 ordinary members. The president, vice-presidents, honorary treasurer, and honorary secretary are elected by the council. Twenty-four ordinary members of the council are elected by regional committees; twelve are nationally elected by all corporate members. Additional non-voting members are co-opted by the council (some chairmen of local sections and at least two noncorporate members).

In addition to the council there is a management board and four standing committees which report to the council: the learned society board; the education, training, and qualifications committee; the local sections committee; and the membership committee. The Institute has a full-time secretariat of 12 staff. In 1984 the Institute purchased a building for its headquarters containing committee rooms, a members' room, and administration and office facilities for the secretariat.

The Institute of Measurement and Control
87 Gower Street
London WC1E 6AA, U.K.
Tel: 0171 387 4949
Fax: 0171 388 8431
Appendix D
International Society of Automation, Formerly Instrument Society of America

ISA was founded in 1945 as the Instrument Society of America to advance the application of instrumentation, computers, and systems of measurement for the control of manufacturing and other continuous processes. The Society is a nonprofit educational organization serving more than 49,000 members. In 2008, the Society renamed itself the International Society of Automation, to better reflect its broadened purview of the entire practice of automation, not simply continuous processes.

ISA is recognized worldwide as the leading professional organization for instrumentation practitioners. Its members include engineers, scientists, technicians, educators, sales engineers, managers, and students who design, use, or sell instrumentation and control systems. Members are affiliated with local sections that are chartered by the Society. The sections are grouped into 12 geographic districts in the United States and Canada; non-North American members and their sections are affiliated with ISA through ISA International, a nonprofit subsidiary. ISA International was established in 1988 to meet the special needs of instrumentation and control practitioners outside the United States and Canada.

The Society provides a wide range of activities and offers members the opportunity for frequent interaction with other instrumentation specialists in their communities. By joining special-interest divisions, ISA members share ideas and expertise with their peers throughout the world. These divisions are classified under the Industries and Sciences Department and the Automation and Technology Department.

The members of each local section elect delegates to the district council and the council of society delegates. These delegates elect the ISA officers and determine major policies of the Society. ISA's governing body is the executive board, which is responsible for enacting policies, programs, and financial affairs. Executive board members are the president, past president, president-elect, secretary, treasurer, and the district and department vice presidents elected by the Councils of
District and Department Vice Presidents. A professional staff manages the daily business of ISA and implements the executive board's programs and policies. Administrative offices are located in Research Triangle Park and Raleigh, North Carolina.

The Society held its first major conference, Instrumentation and the University, in Philadelphia, Pennsylvania, in 1945, and has since become the leading organizer of conferences and exhibitions for measurement and control. The Society hosts a large annual instrument and control conference in North America, attracting more than 13,000 people. ISA has cosponsored events with other organizations and regular exhibitions in China and Europe, and it regularly embraces other conferences within its overall technical program.
Training

ISA is a leading training organization and producer of training products and services. Each year the Society reaches over 4,000 people through some 300 training courses and customized training programs offered internationally. In addition to this direct training, ISA produces electronic packages, videos, and online and offline interactive multimedia instruction.
Standards and practices

ISA actively leads in the standardization of instrumentation and control devices under the auspices of the American National Standards Institute (ANSI). The Society regularly issues a compendium of all its standards and practices for measurement and control. The multiple-volume set includes copies of more than 90 ISA standards. Nearly 3,500 people on 140 committees are currently involved in developing more than 80 new ISA standards.
Publications

ISA is a major publisher of books and papers, offering over 600 titles written by leading experts in the field. The Society's first publications included Basic Instrumentation Lecture Notes, in 1960, and the first edition of Standards and Practices for Instrumentation, in 1963. Today ISA publishes
INTECH magazine. Other major publications include ISA Transactions and the ISA Directory of Instrumentation.

The Instrument Society of America
U.S. Office: P.O. Box 12277, Research Triangle Park, NC 27709, USA
Tel: (919) 549-8411
Fax: (919) 549-8288
U.K. Office: P.O. Box 628, Croydon, CR9 2ZG, U.K.
Index
A
Abbé refractometer, 515 Abrasion measurement with nuclear techniques, 559 Absolute pressure, 145, 149, 162 Absorption, 534 in UV, visible and IR, 346–348 Absorption coefficient, 550 AC potential difference (AC/PD) in underwater testing, 589 Acceleration accelerometers, 115 calibration, 117 closed-loop, 119 mass motion in, 119 single-and three-axis, 119 measurement, 121–124, 129 Accessibility of instruments, 655–656 Accuracy/precision of measurements, 551 AC/DC conversion, 461–462 Acids, properties in solution, 363 Acoustic emission inspection systems, 580, 581 underwater, 589–590 Acoustic holography, 580 Acoustic radiation, use of, 87–90 Acoustic wave technology, 218 Adaptive control, 24 ADCs. See Analog-to-digital converters Adder unit, 545 Adhesives, 646 Adiabatic expansion, 35 Adsorption, 670 methods, 189 Advanced encryption standard (AES), 258 Advanced gas-cooled reactors (AGR), 777 Advanced process control (APC), 24, 619 characteristics, 628 constraint-pushing strategy, 623 development, 622 goal of, 626 hierarchical approach to, 624–625, 627 Advanced regulatory control (ARC) vs. MPC, 623–624 Air supplies to instruments, 656 Aircraft autopilot controls, 767 Aircraft control systems, 772–773
Alexander Keilland, 784 Alkalis, properties in solution, 363–364 Alpha-detector systems, 534–536 Ambient temperature, 279–280 American military aircraft, position of controls in, 763 Ampere (unit), 439 Ampere’s law, 569 Amplifiers, 543 for piezoelectric sensors, 123–124 Analog circuits coupling capacitance, 841 decoupling, 830 instability, 830 signal isolation, 841 transient and radio frequency immunity, 840–841 Analog coding, 749 Analog instruments, 249 Analog oscilloscopes, 706 Analog redundancy, 758–759 Analog signal, transmission of, 678, 689–690 Analog square root extractor, pneumatic, 729, 730 Analog-to-digital converters (ADCs), 544 dual-ramp, 457–458 precision pulse-width, 458, 460 pulse-width, 458–460 successive-approximation, 456–457 voltage references in, 460 Analysis equipment parts, sampling, 662 Analytical columns in HPLC system, 330 AND logic function, 781 Andersen cascade impactor, 188 Andreasen’s pipette, 183–184 Anemometers, 51 Aneroid barometer, 150, 151 Angled-propeller meters, 46 Anodic stripping voltammetry, 334–335 Antennas, 258 connection, 260–261 directional, 258–259 gains and losses, 259 mounting, 259–260 omnidirectional, 258 reciprocity, 258 selecting, 261
Anticoincidence circuit, 545–546 Antimony electrode, 387–388 Antithixotropy, 70 APC. See Advanced process control Apertures effects of shielding, 860–862 Array detectors, 505–506 Ashcroft Model XLdp low pressure transmitter, 152, 153 Assembly screening, 746 Asset management system, 21 Asset optimization system, 21 Atomic absorption spectroscopy, 351–352 Atomic emission spectroscopy, 349–351 applications, 350–351 Atomic fluorescence spectroscopy, 352–353 Attenuation characteristics of optical fiber, 682–683 Audio rectification, 840 Automatic control valves, 620–621 Automatic microscope size analyzers, 183 Automatic psychrometers, 433 Automation. See Industrial automation
B
Backing-fluorescence gauge, 562, 563 Backplane buses, 828 Backscattering, 534 Balancing units, 545 Bar graph indicator, 700, 702 Barringer remote sensing correlation spectrometer, 348 Base metal thermocouples, 299 Batch controls, 26 Batch processes, 629 Batch-switch system, 728 Bathtub curve for component failure rates, 742 Bayard-Alpert ionization gauge, 172–173 Beam chopping, 506 and phase-sensitive detection, 507 Bearings, instrument, 646 Beer’s law, 347 Bellows elements, 151–152, 160 Bench test, 261 Bench-mounting instruments, 647–649 Bending machines, 644 Bendix oxygen analyzer, 424 Bernoulli’s theorem, 33–34
Beta particles detection, 536–537 Beta-backscatter gauge, 562 Beta-transmission gauge, differential, 562 Bimetal strip thermometer, 285–286 Bit-error ratio (BER), 255, 256 Blackbody radiation, 306–307, 519 Blondel’s theorem, 468 Bluetooth, 263 Boeing aircraft control, 774 Bonded resistance strain gauges, 93–94 Bourdon tubes, 149–150, 158–160, 167, 278, 279 for resonant wire sensor, 161 spiral and helical, 150 Boxcar detector, 507–508 Boyle’s law, 35 Bragg cell, 194, 206, 207 Bragg gratings, holographic in-fiber, 212–215 optical configuration for, 213 in strain-monitoring applications, 213, 214 structure of, 212, 213 Bragg wavelength, 213, 214 Bragg’s law, 355 Bridge definition of, 474 rectifier, 542 Brinkmann probe colorimeter, 348 Brittle lacquer technique, 99 Brookfield viscometer, 73 BS 9000, 755, 756 BS 9001, 755 BS 9002, 755 BS 5345 classification on hazardous areas, 783 BS 5000/5501 safety of electrical apparatus, 782 Buffer solutions, 378 Built-in test equipment, 754 Bulk ultrasonic scanning, 589 Buoyancy measurements, pneumatic instrumentation for, 721 Buoyancy transducer, 136, 137 Buoyancy transmitters, 136 Business system integration, 26 Bypass meters, 46–47
C
Cables common-mode noise, 806–807 and connectors, 261, 841–842 ferrite-loaded, 847–848 fiber-optic, 681 radiation from, 806, 807, 827 requirements, 657 resonance, 810 return currents, 842 ribbon, 846–847 screening composite tape and braid, 844 connector types, 845 currents, separation, 844 grounding screens, 843, 845 laminated tape, 844 lapped wire, 844 at low frequency, 842–843 magnetic shielding, 842–843
Index pigtail connection, 845–846 at radio frequency, 843–844 single braid, 844 surface transfer impedance, 842–843 types, 844–845 segregation, 658 and returns, 842 testing, 659 thermocouple compensating, 304 twisted pair, 846 types, 658 unscreened, 846–848 Calibration accelerometer, 117 amplitude, 117 of electrical conductivity cells, 367 field, 609 field-system, 609 force, 117–118 formal, 609 of gas analyzer, 426–428 length measurement, 77 of level-measuring systems, 106 of microphones, 609 moisture measurement, 435–436 of neutron-moisture gauges, 557 of radiation thermometer, 312 of radiocarbon time scale, 565 shock, 117 of sound-level meters, 609 strain-gauge load cells, 132–133 system, 21, 609 Calomel electrode, 379, 382–384 Campbell mutual inductor, 443–444 Capacitance length-sensing structures, 87 Capacitance level sensor, 109 Capacitance manometers, 152–154, 167 Capacitance probes, 108–109 Capacitance sensing, 57 Capacitance type pressure sensors, 161, 162 Capacitive strain gauges, 99 Capacitors equivalent circuits of, 477–478, 480 for filtering, 849 chip, 852 feedthrough, 851–852 three-terminal, 851 Capillary tube, 279–280 Capillary viscometer, 71–72 Carbon dating. See Radiocarbon dating Carrier gas, 402, 416 Carrier gas effect, 422 Carrier wave modulation, 687–688 Cascade control, 24 Casting in instrument construction, 645–646 CAT. See Combined actuator transducer Catalytic detector, 411 Catastrophic failure, 743, 765 Cathode ray tubes (CRTs), 574, 575, 704 CCDs. See Charge-coupled devices CE mark, 868 Cell conductance measurement using Wheatstone bridge, 369 Cellular wireless, 263 Celsius temperature scale, 272
CENELEC Electronic Components Committee (CECC) standards, 756, 793, 794 Central fringe identification method, 201 centroid method, 201 two-wavelength beat method, 202 Centrifugal pump, 668 Ceramics in instrument construction, 643 Certified Automation Professional (CAP) classification system, ISA, 5–13 Charge-coupled devices (CCDs), 505 Charge-injection devices (CIDs), 505 Charles’s law, 35 Chemical milling, 645 Chemical plants, hazards in, 782 Chemiluminescence, 349 Cherenkov detectors, 524 Chernobyl nuclear reactor, explosion in, 764, 776, 779 Chip capacitors, 852 Cholesteric compounds, 324 Chromatography definition, 328 gas. See Gas chromatography high-performance liquid. See High-performance liquid chromatography ion. See Ion chromatography paper, 328–329 process. See Process chromatography thin-layer, 328–329 CIDs. See Charge-injection devices Circuit fault analysis, 767–768 Circular chart recorders, 711 Citizen’s band (CB) radio, 256 Clamp-on transit-time flow sensors, 66 Climet method, 189 Cloud chambers, 524 CMOS digital circuits, operating voltage effect on failure rates, 745 Coalescers, 666 Coanda effect meters, 57–58 Coating thickness, nuclear measurement for, 561–562 Coating-fluorescence gauge, 562, 563 Coaxial cables, 260, 261, 680 Coincidence circuit, 545–546 Cold junction compensation in thermocouples, 297–298 Collinear antennas, 258, 259 Color displays, 705–706 Color measurement, 512–514 Colorimetric measurements, 513 Color-temperature meters, 510 Combined actuator transducer (CAT), 652–653 Commissioning instruments. See Instrument installation and commissioning Commissioning system, 21 Common impedance coupling, 801–803 Common ion effect, 378 Common mode capacitors, 852 Common mode faults, 759–760, 779–781 Common mode radiation, 827 track length implications, 827–828 Common mode signals in bridge measurements, 462
Index Communication channels, 679–684 radio frequency (rf) transmission, 681 sources and detectors, 683 transmission lines, 679–681 Compact source lamps, 501 Component layout for filtering, 849–850 Components failure rates, 738, 742, 746 effect of operating voltage on, 745 effect of temperature on, 743–744 estimating, 747–748 for filtering, 850–852 sampling, 662 screening, 746 temperature, estimating, 744–745 tolerances, 751–752 Compressible fluids, critical flow of, 36 Computer automated measurement and control (CAMAC), 695–696 Computer integrated manufacturing (CIM) enterprises, modeling, 13–14 standard, 629 Concentric orifice plate, 37 Condensate analyzer, 372–373 Condenser microphone, 596–600 Conducted emissions, 807–808 Conductive coating techniques, 863, 865 Conductivity. See Electrical conductivity Conductivity probes, 111 Conductivity ratio monitors, 373 Cone-and-plate viscometer, 72–73 Confidence level, 746 Confidence limits, 746 Construction of instruments. See Design and construction of instruments Continental Shelf Act 1964, 784 Continuous advanced control, 26 Continuous analog methods, 470 Control methodology, 20 Control modules, 26 Control system faults, 767 Controller, 20 Conventional light source, characteristics, 499 Converters electropneumatic, 734–735 pneumatic-to-current, 730–732 Coolers, 666 Cooling, detector, 506–507 Cooling profile, liquid cast iron, 336 Core saturation, 854 Coriolis effect, 231 Coriolis mass flowmeters, 229–233 Cosmic rays, 563 Couette viscometer, 72 Coulometric instruments, 431–432 Coulter counter, 188 Coupling compliance in vibration sensor, 116 Couplings, 647 Crest factor, 462 Critical flow of compressible fluids, 36 Cross-axis coupling in transducers, 116 Cross-correlation system, 58 CRTs. See Cathode ray tubes Crystals, analyzing, 354 CTs. See Current transformers
Curie temperature, 313, 421 Curie-Weiss law, 421 Current measurement, 462 Current transformers (CTs). See also Voltage transformers (VTs) AC range extension current/ratio error, 453 equivalent circuit, 452 phase-angle error/phase displacement, 453 principle of, 452 ratio errors, 454 Cyber security, ICS, 28 Cyclosizer analyzer, 188
D
D series sensors, 231, 232 DAC. See Digital-to-analog converter Dall tube, 38–39 Damage potential, definition, 765 Damping adjustments, 224 Damping ratio, 115 Daniell cell, EMF of, 379 Data integrity, 245 Data interfacing standards, 693–697 Data loggers, 713 Data sensitivity, 255 Data transmission, 245, 693–697 DataGator flowmeter, 38, 63 DC common-mode rejection, 461 DC Wagner earthing arrangement, 478, 482 Deadband control, 23 Deadweight testers, 147–149 Decision-tree approach, 456 Deer rheometer, 73 Deflecting-vane flowmeter, 50–51 Degassing system in HPLC system, 329 Demodulation, radiofrequency, 840 Density measurement buoyancy methods, 136 gas, 141–143 hydrostatic-head method, 137 liquid, 140–141 radiation methods, 140 using resonant elements, 140–143 weight methods, 135–136 Density transmitters, 236–239 Depletion region, 533 Derating process, 744 Design and construction of instruments electronic instruments, 647–650 elements of construction ceramics, 643 electronic components and printed circuits, 640–642 epoxy resins, 644 interconnections, 642–643 mechanical manufacturing processes, 644–646 metals, 643 plastics and polymers, 644 surface-mounted assemblies, 642 functional components, 646–647 mechanical instruments, 650–653 viewpoint of designer’s, 639–640
Design automation, 753–754 Design software, 25–26 Desorption, 670 Detectors, 502 applications, 533–541 array, 505–506 characteristics of, 502 circuit time constants, 506 classification of, 524 Cherenkov detectors, 524 cloud chambers, 524 gas detectors, 526–528 plastic film detectors, 524 scintillation, 528–532 solid-state, 532–533, 538 thermoluminescent detectors, 525 cooling, 506–507 in HPLC system, 330–331 photomultipliers, 502–503 photovoltaic and photoconductive, 503–504 pyroelectric, 504–505 sources and, 683 techniques, 506–508 x-ray, 559 Deuterium lamps, 501 Dewpoint, 430 instruments, 431 Diaflex, 217 Dial thermometers, 320 Diamagnetics, 421 Diaphragm gauge, 167 Diaphragm meters, 48 Diaphragm motor actuators, pneumatic, 732–733 Diaphragm pressure elements, 150–151, 160, 161 Diaphragm pump, 668 Differential beta-transmission gauge, 562 Differential pressure (DP) devices, 36–37 installation requirements for, 39 pressure loss in, 39 measurement, 37 transmitter methods, 137–138 bubble tubes, 139–140 flanged/extended diaphragm, 139 overflow tank, 138 pressure repeater, 139 pressure seals, 139 wet leg, 138–139 Differential pulse polarography, 333–334 Differential refractometers, 331 Differential transmitters, 226–229 Differential-mode capacitors, 853 Differential-mode radiation, 826–827 Diffraction, x-ray, 355 Diffusion-tube calibrators, 427–428 Digital circuits components, 829–830 decoupling, 829 failure modes, 743 immunity, design for, 833–835 Digital communication, 223 Digital frequency counter, 491
Digital multimeters (DMMs) AC voltage and current measurement, 462 DC input stage and guarding, 461 elements of, 462 control, 463 specifications, 464–465 visual display of, 463 Digital multiplexing, 241–242 Digital pressure transducers, 162–163 Digital quartz crystal pressure sensors, 156–158 design and performance requirements, 157 mechanisms, 157, 158 Digital signal transmission, 678, 690 modems, 692–693 Digital transient recorders, 607–608 Digital voltmeters (DVMs) AC voltage and current measurement, 461 DC input stage and guarding, 461 elements of, 462 control, 463 visual display of, 463 Digitally coded systems, 750 Digital-to-analog converter (DAC) most significant bit (MSB) of, 456 N-bit R-2R ladder network, 457 Digitizing oscilloscopes, 708–709 Dilution gauging, 64, 67 Dipole antenna, 258, 259 Dipping refractometers, 515 Direct sequence spread spectrum (DSSS), 257 Direct-indicating analog wattmeters, 465–467 Directional antennas, 258–259 Discharge coefficient, 34 Discharge lamps, 500–501 Discharge-tube gauge, 170 Dispersive infrared analysis, 345–346 Dispersive x-ray fluorescence analysis, 553 Displacement measurement, 120 Display elements, 26 Display techniques, 700 cathode ray tubes, 704 color, 705–706 graphical, 709 light-emitting diodes, 700–702 liquid crystal displays, 702–703 plasma, 703, 704 Disposable glass microfiber element, 665 Dissolved oxygen analyzer, water-sampling system for, 676 Distillation point analyzer, gas oil sampling for, 675–676 DL series sensors, 231, 232 DMMs. See Digital multimeters Doppler anemometry, 202–203 fluid flow, 204–206 particle size, 203–204 vibration monitoring, 206–210 Doppler blood-flow velocity measurement, dual-fiber method for, 205 Doppler flowmeters, 54 Doppler shift, 203, 206, 207 Double 2/3 logic, 777, 778 Double ratio transformer bridge, 485 Double-beam techniques, 510 Double-ended tuning fork force sensor, 157
Index Drilling in mechanical manufacturing, 644 Dry gases, 35–36 DT series sensors, 231, 233 Dual gauges, 557 Dual-beam oscilloscopes, 706 Dual-ramp ADCs, 457–458 Dual-slope integrated-circuit chip set, 459 Dust explosions, 792 DVMs. See Digital voltmeters Dynamic matrix control (DMC), 623 Dynamic viscosity of fluid, 32–33 Dynamometer instruments, 454 operation of, 454 steadystate deflection, 454 torque, 454 Dynamometer wattmeter, 466 connection of, 467–468 correction factors, 466 errors in, 466 FSD, 467 Dynode resistor chains, 544
E
Earth leakage circuit breaker (ELCB), 791, 792 Earthing. See Grounding Eddy-current damping, 445 Eddy-current testing, 570–571, 589 Eductor or aspirator type pumps, 666 Effective radiated power (ERP), 258 Elastic elements, 129 Electret microphones, 600 Electric induction, 802–803 Electrical capacitance sensors, 86–87 Electrical conductivity of liquids, 364–365 measurement alternating current cells with contact electrodes, 365–367 applications of, 372–375 direct measurement method, 369, 370 electrodeless method, 371 multiple-electrode cells, 369–370 temperature compensation, 370–371 Wheatstone bridge method, 369 of pure water, 372 of solutions, 365 Electrical conductivity cells, 365 calibration of, 367 cell constant, 366 cleaning and maintenance of, 368 construction, 367–368 retractable, 368 stainless steel and monel, 369 Electrical energy measurement, 472–473 Electrical impedance, 434 Electrical magnetic inductive processes, 84–86 Electrical sensor instruments, 432 Electrical SI units, 439 Electrochemical analyzers, 393–399 Electrochemical milling, 645 Electrocution risk, 790–791 Electrode potentials Daniell cell, 379 general theory, 378–379 variation with ion activity, 380
Electrodeless method, 371 Electrodes annular graphitic, 369 antimony, 387–388 calomel, 379, 382–384 gas-sensing membrane, 381–382 glass. See Glass electrodes heterogeneous membrane, 381 hydrogen, 387 ion-selective. See Ion-selective electrodes liquid ion exchange, 381 pH, 383, 385 platinized, 368 redox, 382, 390 reference, 382–384 silver/silver chloride, 382 solid-state, 381 thalamid, 384 Electro-explosive devices (EEDs), 800–801 Electrolytes, 365 Electromagnetic compatibility (EMC) analog circuit design, 824–841 control measures, 815 customer requirements of instrumentation, 865 and data security, 801 digital circuit design, 824–841 directive, 865 CE mark and declaration of conformity, 868 compliance with, 868–869 legislation of, 866 sale and use of products, 866–867 scope and requirements, 866–867 standards relating to, 869–870 systems, installations, and components, 867–868 and EEDs, 800–801 emissions, 805–808 equipment in isolation, 798 filtering, 848–858 grounding and layout, 815–824 interference with radio reception, 799 line-voltage coupling, 803, 804 line-voltage signaling, 800 line-voltage supply disturbances, 800 malfunctions, 798 malfunctions vs. spectrum protection, 799 phenomena, 798 regulatory framework for, 865–870 shielding, 858–865 source emissions and victim, 801–805 subsystems within installation, 797–798 susceptibility, 808–814 system partitioning, 815–816 between systems, 797–798 Electromagnetic (EM) energy, 258 Electromagnetic (EM) wave, 253, 803 Electromagnetic flowmeters, 51–54, 63, 233–234 accuracy of, 54 application, 53 field-coil excitation, 52 installation, 54 nonsinusoidal excitation, 53
Index principle of operation of, 51 sinusoidal AC excitation, 52–53 Electromagnetic force (EMF), Weston standard cell, 442 Electromagnetic incompatibility, examples of, 797 Electromagnetic interference (EMI), 797, 824 Electromagnetic radiation, use of, 87–90 Electromagnetic spectrum, 581 Electromagnetic velocity probe, 65 Electron capture detector, 409 Electron paramagnetic resonance (EPR) spectroscopy, 356 Electron probe microanalysis, 354 Electronic assemblies, 541–542 Electronic components construction of, 640–642 selection, 755–756 Electronic conduction, 364 Electronic flow controller, operating principle of, 416, 417 Electronic instruments, 299 construction bench-mounting, 647–649 panel mounting, 647 portable mounting, 649–650 rack-mounting, 649 site mounting, 647 Electronic monitoring system, designing, 765 Electronic multimeters, 448 specification, 450–451 Electronic sources, 501 Electronic wattmeters, 469–470 specification, 472 Electropneumatic converters, 734–735 Electrostatic discharge (ESD) interference paths, 834 operator interface, 834 protection measures, 813, 834, 835 waveform, 812–813 Electrostatic hazards, 785–786 Electrostatic voltmeter principle of, 455 torque, 455 Electrostatic wattmeter, torque in, 467 ELITE (CMF) series sensors, 231 Elutriation, centrifugal, 187–188 EMC. See Electromagnetic compatibility Emergency core cooling system, 764 EMI. See Electromagnetic interference Emissivity corrections for radiation thermometer, 309–310 Encapsulation, electronic components, 650 Enclosure design, shielding, 863–864 Engine instruments, layout in Boeing 737–400 aircraft, 763 Enrico Fermi pile, 776 Enterprise transformation, 20-point checklist for, 15–18 Environmental data corporation stack-gas monitoring system, 346, 347 Environmental testing, 748–749 Epoxy resins in instrument construction, 644 Equilibrium relative humidity, 434 Equipment modules, 26
Equivalent circuit capacitors, 477–478 current transformers, 452 four-arm AC bridge, 483 inductors, 477–478 Kelvin double bridge, 478 for resistance, capacitance, and inductance, 479 resistors, 477–478 thermistor system, 470 of transformer ratio bridges, 483 voltage transformers, 453 Error detection, 688–689 Error rates, human operator, 761 ESD. See Electrostatic discharge Expansibility factor, 36 Exponential dilution technique, 428 Exponential failure law, 738–739 Extensional viscosity measurement, 74 Extent of error, 44 Extruding construction material, 645
F
Fabry–Perot sensor cavity, 197, 198, 210–212 Fade margin, 255 Fahrenheit temperature scale, 273–274 Fail-dangerous errors, 765, 767 Fail-safe error, 765, 767 Fail-safe features of railway signaling, 775 Fail-safe systems, designing, 765–766 Failure intensities, 769 modes, 743 probability, 741, 756, 758, 773 types of, 765, 779 Failure density function, 747 Failure effort and mode analysis (FEMA), 17 Failure rates for aircraft controls, 774 by bathtub curve, 742 chemical plant, 783 component, 738, 746 effect of operating voltage, 745 effect of temperature on, 743–744 variation with time, 742–743 dangerous, 777 probability of, 778 for transistors and capacitors, 757 Failure-tolerant systems, 771 Faradaic current, 333 Faraday-effect polarimeter, 517–518 Faraday’s law, 51 Fast Fourier transform analyzers, 607 Fatal accident frequency rate (FAFR), 783 Fatal accident rate (FAR), 783 Fault tree of oil platform, 784 Faults common mode, 759–760, 779–781 estimating number of, 769–770 vs. failures, 769 FC052 ultra-low-pressure transmitter, 152, 153 FDM. See Frequency-division multiplexing Federal Communications Commission (FCC), 256
Feedback amplifiers, temperature effects on, 752 Feedback control, 23, 621 Feed-forward control, 24, 622, 623 Feedthrough capacitors, 851–852 Ferguson spin-line rheometer, 74 Ferrites, 850–851 Ferromagnetics, 421 Fiber optic communication, 681–684 Fiber optic Doppler anemometer (FODA), 203–205 Fiber optic sensors. See Optical fiber sensors FID. See Flame ionization detector Field calibration methods, 23 Field control system (FCS), 249 Field controllers, 617, 618 Field effect transistor (FETs), 386 Fieldbus, 241 analog instruments, 249 concept of, 241 diagnostics for, 249 digital multiplexing, 241–242 field-mounted control, 248 FOUNDATION, 247–248 function and benefits, 247–251 HART protocol, 243–246 layer implementation of, 242 network, 221 sensor validation and, 249 Field-coil excitation, 52 Field-survey radiation instruments, 563 Filter, manual self-cleaning, 666 Filtered connectors, 855 Filtering, 848 circuit effects of, 855 components, 850–852 contact suppression, 857–858 filter configuration, 848–850 I/O, 855 line-voltage, 852–855 transient suppression, 856–857 Final control element, 20 Finishes in instrument construction, 644 Fisher torque tube, 109 Fixed frequency, 256 Flame ionization detector (FID), 406–407, 417 Flame photometric detector (FPD), 409–410 Flammable atmospheres, 791–795, 800 Flapper/nozzle system, 716, 717, 724 Flash point (FP), 782, 783 Float-driven instruments, 107–108 Flow calorimeters, 470 Flow cells, 532 Flow equations, 34–35 to apply to gases, 35–36 for volume rate, 62, 63 Flow measurement, 669–670 dilution method, 560 in open channel by head/area method, 60–63 by velocity/area methods, 63–64 plug method, 561 streamlined, 31 turbulent, 32 Flow nozzle, 38
Flowmeters calibration methods for gases, 67–68 for liquids, 66–67 Coriolis mass, 229–233 electromagnetic, 233–234 Foxboro 8000 series, 235 microprocessor-based transmitter and, 229–236 valve as, 635–636 vortex, 234–236 Fluid flow rate, 323 Fluid motion, energy of, 32 Fluids in containers measurements, 551–552 Fluorescence spectroscopy, atomic, 352–353 Fluoroscopic images advantages, 585 characteristics of, 585–586 Fluoroscopy, 585–586 Fluted-spiral-rotor flowmeter. See Rotatingimpeller flowmeter Flux-leakage detection, 569–570 Foil gauges, 94 Force calibration, 117–118 Force measurement methods acceleration measurement, 129 compound lever balance, 128 concepts of, 127 equal-lever balance, 127–128 force-balance, 128–129 hydraulic load cells, 129 piezoelectric transducers, 130 proving rings, 129–130 spring balances, 129 strain-gauge load cells, 130–133 unequal-lever balance, 128 Force-balance controllers, 725–729 Force-balance mechanism, 717 in buoyancy transmitter, 722 for force measurement, 128–129 forces for, 719 in level transmitter, 721 in pressure transmitter, 719, 720 in speed transmitter, 722 in target flow transmitter, 722 for temperature measurement, 718 Force-measuring pressure transmitters, 160–162 Form factor (FF), current waveform and errors, 447, 448 Forward-error correction (FEC) techniques, 256, 688 Fossil-fuel effect, 565 FOUNDATION fieldbus, 247–248 Four-arm AC bridge classification of, 479–480 equivalent circuit of, 483 impedance in, 478–482 for measurement of capacitance and inductance, 481–482 stray capacitances in, 483 Fourier spectrum logic family, 825–826 time domain/frequency domain transform, 824–825
Index Four-quadrant analog multiplier wattmeter, 471 Four-quadrant multiplier, 469 FPD. See Flame photometric detector Free air ionization chamber, 538 Free radiation frequencies, 799 French reactors, 778 Frequency analyzers fast Fourier transform, 607 narrow-band, 606–607 octave band, 604–605 output options with digital, 606 third-octave, 605–606 Frequency bands for industrial wireless applications, 254 Frequency counters, 489–491 Frequency hopping spread spectrum (FHSS), 256–257 Frequency measurements relative accuracy of, 492 relative resolution, 490 using Lissajous figures, 497 Frequency modulation (FM) recording systems, 713 Frequency modulation spectroscope, 577 Frequency ratio measurement, 493 Frequency shift keying (FSK), 228, 229, 692 Frequency telemetry, 678 Frequency transmission, 690 Frequency-division multiplexing (FDM), 684–685 Fritsch Analysette sieve shaker, 181 Frostpoint, 430 Fuel cell oxygen-measuring instruments, 397 Full-scale deflection (FSD), 446–447 of moving-iron instruments, 451 of permanent magnet-moving coil instruments, 447 of thermocouple instruments, 452 Full-wave bridge rectifier, 447, 448 Fullwave rectifier, 542 Fundamental interval, 272 Furnace gas probes, 664 Furnace temperature by radiation thermometer, 312 Fuzzy logic control, rules for, 24
G
Galvanometer, 298, 734 recorders, 711–712 Gamma ray, 581, 582 detection, 537–538 spectroscopy, 357 GAMP4 Guidelines, 26 Gas detectors, 401, 524, 526–528 applications of, 413 catalytic, 411 electron capture, 409 flame ionization detector, 406–407, 417 flame photometric detector, 409–410 helium ionization, 408, 417 photo-ionization detector (PID), 407–408 properties of, 413
semiconductor, 411–412 thermal conductivity, 404–406, 417 ultrasonic, 410–411, 418 explosions, 792 pumps, 666–668 sampling, 661, 672–674 Gas analyzer, 401 Bendix oxygen analyzer, 424 calibration of, 426 dynamic methods, 427–428 static methods, 427 magnetic wind oxygen analyzer, 422–423 magnetodynamic oxygen analyzer, 423–424 measurement principles of, 426 nitrogen oxides analyzer, 425–426 ozone analyzer, 424–425 paramagnetic oxygen analyzers, 421–424 Quincke oxygen analyzer, 423 Servomex oxygen analyzer, 424 Gas chromatography, 402–404 apparatus for, 402 backflush method, 403 heart-cut method, 403 process. See Process chromatography Gas density measurements, 141–143 transducer, 142 Gas proportional counters, 535 radiocarbon dating by, 563–565 Gas-cooled nuclear reactors, 429 Gaseous mixtures preparation of, 426, 427 separation methods of chemical reactions, 402 physical methods, 402 physico-chemical methods, 402 thermal conductivity of, 404 Gas-filled instruments, 281–282 Gaskets and contact strip, shielding, 862–863 Gas-sampling valve, 415 Gas-sensing membrane electrodes, 381–382 Gate meter, 39–42 Gauge factor, 172 Gauges absolute, 165, 166–168 backing-fluorescence, 562, 563 beta-backscatter, 562 coating-fluorescence, 562, 563 differential beta-transmission, 562 dual, 557 foil, 94 level, 559, 560 mechanical, 166–167 neutron-moisture, 557 nonabsolute, 165, 166, 169–173 preferential absorption, 562, 563 properties of, 166 self-balancing level, 110 semiconductor, 94–95 sight, 107 strain. See Strain gauges surface-neutron, 557 wire, 94 Gauging systems, automatic, 91–92 Gaussian distributions, 178–179
Index Gaussian fringe envelope, 201 Gear pumps, 668 Geiger counters, 535, 536 Geiger–Mueller detectors, 527–528 Generating a solvent gradient, 329 Geometry of radiation, 533–534 Gilflo primary sensor, 42 Glass electrodes, 381, 384–385 cleaning, 389–390 continuous-flow type of assembly, 388–389 electrical circuits for, 385–387 immersion type, 389–390 Glass thermometers, mercury filled, 274–278 Global support system (GSS), designing, 17 Golay pneumatic detector, 346 Gradient devices in HPLC system, 329 Graphical displays, 709 Graphical recorders, 709–710 Gravimetric calibration method, 67, 68 Gravitrol density meter, 136 Grinding in mechanical manufacturing, 645 Ground plane breaks in, 821, 822 on double-sided PCBs, 821 partial, 821 radiating loop area, 822 vs. track, impedance of, 821 Grounding, 658, 791, 816 configuring I/O, 823–824 current through ground impedance, 816 gridded structure, 820 ground style vs. circuit type, 820–821 hybrid, 817–818 at interfaces, 823 of large system, 818 layout rules, 824 map, 824 multi-point, 817 safety earth, 818 single-point, 816–817 wire impedance, 818, 819 Ground-up testing and training, 26 Gyroscopic/Coriolis mass flowmeters, 59
H
Hacking in wireless system, 257 Hagen-Poiseuille law, 71 Hall-effect technique, 470 Hall-effect wattmeter, 471 Handheld interfaces, 249–250 Handheld terminal (HHT), 233 Hardware, shielding, 862–865 Hardware reliability vs. software reliability, 768–769 Harmonic noise, 255 HART protocol, 243 connection and length limitations in, 246 operating conditions, 245 structure, 244 technical data, 245–246 Head of a weir, 60 Health and Safety at Work Act 1974, 784, 789 Heat, definition, 269 Helium ionization detector, 408–409, 417 Helix meters, 47
Henry’s law, 430 Hersch cell for oxygen measurement, 397, 398 Heterodyne converter counter, 496, 497 Heterodyne interferometry, 194 Heterogeneous membrane electrodes, 381 Hiac automatic particle sizer, 188–189 High-current ammeters, 446 High-frequency impedance measurement, 487–488 High-frequency power measurement, 470–472 High-gain antennas, 259 High-performance liquid chromatography (HPLC) advantages, 329 application, 331 system components, 329–330 High-pressure sample, 672 High-reliability software, 769 High-resistance measurement, 476–477 High-temperature ceramic sensor oxygen probes, 396–397 High-temperature thermometers, 277–278 Holography, 90 acoustic, 580 Honing in mechanical manufacturing, 645 Hooke’s law, 129 Hook-type level indicator, 108 Hot-cathode ionization gauge, 171–172 Hot-metal thermocouples, 303 Hotwire gauges, 169 HPLC. See High-performance liquid chromatography Human operator error rates, 761 features of, 760–762 speech communication, 761 Hydraulic flumes, 62–63 Hydraulic load cells, 129 Hydraulic pressure measurement, 129 Hydrocarbons, sulfur contents measurement in liquid, 557–558 Hydrocracking severity, 622 Hydrogen electrode, 387 Hydrogen ion activity. See pH Hydrolysis, 376, 378 Hydrostatic-head methods for density measurement, 137 Hygrometer, 435 Hysteresis control. See Deadband control Hysteresis effects, 171, 450
I
ICI mond system, principle of, 343 Ideal gas law, 35 departure from, 36 IEEE-488 interface, pin assignments for, 696 IEEE standards, 19 Ignition of explosive and flammable atmospheres, 786 risk reduction, 785 Image indicator (IQI), 583 step/hole type, 584 wire type, 584
Image-intensification systems, 586 Impaction, 188 Impedance for filtering, 848 vs. insertion loss, 853 Impulse lines, 656–657, 659 Incandescent lamps, 500 Indicator devices, 699–700 Induction wattmeter, torque in, 467 Inductive load suppression, 858 Inductively coupled bridges. See Transformer ratio bridges Inductors equivalent circuits of, 477–478, 480 filtering, 849 temperature effects on, 752 Industrial, scientific, and medical (ISM) bands, 254 Industrial automation analysis of security needs of, 28 APA for, 24 asset management, 21 asset optimization, 21 career and career paths in, 4–5 definition, 3 field calibration methods for, 23 life-cycle optimization, 21 measurement methods for, 23 plant optimization, 21 process control for, 23 Purdue model, 14 reliability engineering, 21 scope for, 760 security problem, 27–28 recommendations for, 28 simulation system in, 25–26 standards, 19 Industrial control systems (ICS), cyber security, 28 Industrial environment, 870 Industrial viscometer, 73 Influence errors in vibration sensor, 116 Infrared absorption meter, 343–345 Infrared analyzers dispersive, 345–346 nondispersive, 341–342 Infrared discrete-position level sensor, 111 Infrared instruments, 432, 433 Inlet systems, 359 In-line filters, 665 Inorganic scintillators, 528–531 Insertion loss vs. impedance, 853 Insertion turbine, 65–66 Insertion vortex, 66 Insertion-point velocity calibration, 66–67 Installation design, 20–21 Installing system, 21 Institute of Electrical and Electronics Engineers (IEEE), 262 Institute of Measurement and Control, 883–886 Instrument installation and commissioning cabling, 657–658 grounding, 658 loop testing, 659–660
mounting and accessibility, 655–656 piping and cable testing, 659 piping systems, 656–657 plant commissioning, 660 pre-installation testing, 658–659 requirements, 655 storage and protection, 655 Instrument management systems, 250–251 Instrument mounting, 655–656 Integrated circuits, accelerated life tests for, 746 Integration process, 31 Integrators, pneumatic, 729, 730 Intelligent transmitter, 222 definition, 221 developments of, 250 features, 223–224 integration of, 250–251 microprocessor-based and, 222–224 pressure and differential, 226–229, 238 span and zero adjustment, 223 temperature, 224–226 INTELSAT V, 772 Intensity measurement, 508–510 color-temperature meters, 510 photometers, 509 ultraviolet intensity measurements, 509–510 Intensity of magnetization, 421 Interferometric sensing techniques, 193–202 central fringe identification, 201–202 heterodyne interferometry, 194 pseudoheterodyne interferometry, 194–195 white-light interferometry, 195–201 Interferometry, 89–90 Internal energy of fluid in motion, 32 Internal reflux control (IRC), 624 International Bureau of Weights and Measures (BIPM), 440 International Practical Temperature Scale of 1968 (IPTS-68), 273 fixed points of, 274 International Society of Automation (ISA) standards, 19, 887–888 I/O filtering, 855 Ion chromatography, 373–375 Ion-exchange resins, scintillating, 532 Ionic conductivity, 365, 366 Ionization chambers, 526, 534, 536, 537–538 Ionization current, 406 Ionization gauges, 170–173 Ionization of water, 364 Ion-selective electrodes, 380–382 applications of, 392–393 conditioning and storage of, 392 determination of ions by, 390–391 flow cell for, 391 pH and pIon meters, 391 practical arrangements, 391–392 Ion-selective monitor, 392, 393 IPTS. See International Practical Temperature Scale of 1968 ISA S88 batch standard, 629 ISA100.11a standard, 263 Isotopes in gamma radiography, 582 IT security, 28
J
Jamming in wireless system, 257 Josephson effect, 443
K
Karl Fischer titration, 433 Katharometer, 405–406 Kelvin double bridge, 475, 478 Kelvin temperature scale, 272 Kinematic design advantages of, 651 of mechanical instruments, 650–651 Kinematic viscosity of fluid, 32–33 Kinetic energy correction, 71 Kinetic energy of fluid in motion, 32
L
Ladder logic, 618 Laddic, core and windings of, 781 Laminar flow. See Streamlined flow Lapping in mechanical manufacturing, 645 Laser Doppler anemometer, 64 Laser Doppler velocimetry (LDV), 206 Lasers, 501 interferometer, 80, 81, 89 safety precautions, 502 Latent heat, 270 Lateral effect cell, 88 Laurent polarimeter, 516–517 LCDs. See Liquid crystal displays Leak detection with nuclear techniques, 559 LEDs. See Light-emitting diodes Length measurement contacting and noncontacting, 78 derived, 79 electrical capacitance sensors for, 86–87 electrical magnetic inductive processes, 84–86 electrical resistance for, 83–84 electromagnetic and acoustic radiation for, 87–90 electronic, 82–83 mechanical, 81–82 nature of length, 78–79 ranges and methods of, 77 standards and calibration of length, 77 Level controllers, 620 Level measurement calibration of systems, 106 error sources, 104–105 full-range, methods of, 107–110 capacitance probes, 108–109 float-driven instruments, 107–108 force or position balance, 110 microwave and ultrasonic time-transit methods, 109–110 pressure sensing, 109 sight gauges, 107 upthrust buoyancy, 109 instrument installations, 103–104 neutrons for, 560 pneumatic instrumentation for, 721 sensor selection, guidelines, 106 short-range detection, methods of
electrical conductivity, 110–111 infrared, 111 magnetic, 110 radio frequency, 111–112 using x-rays/gamma rays, 559–560 Level recorders, 607 Lever-balance methods compound, 128 equal, 127–128 unequal, 128 Life-cycle optimization, 21 Light industrial environments, 870 Light sources, 499–502 discharge lamps, 500–501 electronic sources, 501 incandescent lamps, 500 lasers, 501–502 Light-emitting diodes (LEDs), 501, 700–702, 764 and p-i-n diode detector, 683 Light-intensity measurement, 508 Linear variable-differential transformer (LVDT), 85, 86 Line-voltage coupling, 803, 804 filters, 852–855 harmonics, 808 signaling, 800 supply disturbances, 800 supply fluctuations, 814 transients, 812 Liquid crystals, 324–325 and gases, 270 pumps, 668–669 sampling, 661, 674–676 Liquid crystal displays (LCDs), 702–703 Liquid density measurement, 140–141 Liquid ion exchange electrodes, 381 Liquid level measurement systems, 239 Liquid manometers, 167 Liquid metal thermocouples, 303 Liquid sealed drum flowmeters, 49–50 Liquid-filled dial thermometers, 278–281 comparison, 284 gas-filled, 281–282 liquid-in-metal, 281 vapor pressure, 282–284 Liquid-in-glass thermometers, 274 Liquid-in-metal thermometers, 281 Liquid-scintillation counting, radiocarbon dating by, 563–565 Lissajous figures for phase and frequency measurement, 497 Lithium fluoride (LiF), 525 Load cells, 651–652 hydraulic, 129 spring mechanism, 652 strain-gauge. See Strain-gauge load cells Local area networks (LANs), 253 Lockheed L-1011-500 airliner, 773 Logic noise immunity dynamic noise margin, 835–836 susceptibility prediction, 836 transients coupling, 836
Logic voting circuit, 757 Log-normal distributions, 179–180 Loop testing, 659–660 Low-resistance measurement, 475–476 Luft-type infrared gas analyzer, 342
M
Machine health monitoring, 116–117 Mach–Zehnder bulk optic interferometer, 206, 207 Mackereth oxygen sensor assemblies, 397–398 Magnetic field screening, 814 Magnetic flux surface-inspection methods, 569–570 Magnetic induction, 802 Magnetic level indicator, 108 Magnetic moment, 421 Magnetic particle inspection (MPI), underwater, 588 Magnetic recording, 712–713 Magnetic wind oxygen analyzer, 422–423 Magnetic-reluctance proximity sensor, 85 Magnetization methods, 569 Magnetodynamic oxygen analyzer, 423–424 Magnetoelectric sensors, 86 Magnox gas-cooled reactors, 777 fault sensors in, 777 protection logic, 781 Manipulated variables (MVs), 623 Manometer capacitance, 152–154, 167 with limbs of different diameters, 147 liquid, 167 U-tube, 145 with wet leg connection, 147 Manufacturing execution infrastructure (MEI) plan, 17 Manufacturing Execution Systems Association (MESA), 14 Manufacturing execution systems (MES), 15, 26 Manufacturing operations management (MOM), 15 Mass flow rate, 43 Mass measurement, 561 Mass spectrometer, 357–358 inlet systems, 359 ion sources, 359 principle, 358–359 quadrupole, 361–362 separation of ions, 359–361 time-of-flight, 361 Mass-spring seismic sensors, open-loop, 118–119 Master meter, 67 McLeod gauge, 167–168 Ishii effect, 168 pressure calculation, 168 Mean time between failures (MTBF), 21, 773 and reliability, 737–738 for system assemblies, 747 Measured disturbances, 621–622 Measurement systems, 240 liquid level, 239
Mechanical fail-safe devices, 766 Mechanical gas pump, 667–668 Mechanical manufacturing processes, 644–646 Mechatronics, 4 Melinex, 550 MEMS. See Microelectromechanical systems Mercury-in-glass thermometers, 274–278 Mercury-in-steel thermometer, 278–280 Mercury/mercurous chloride. See Calomel electrode Mesh technologies, 262 Message recovery, 256 Metal film resistors, temperature effects on, 752 Metallic conduction, 364 Meter prover. See Pipe prover Michelson interferometer, 195, 197, 199, 209 Microelectromechanical systems (MEMS), 217–218 pressure sensor, 218 viscosity sensor, 218 Microphones calibration of, 609 condenser, 596–600 electret, 600 at low frequencies, 600 positions for, 612 Microprocessor defensive programming, 838 data and memory protection, 839 digital inputs, 839 input data validation and averaging, 838–839 interrupts, 839 reinitialization, 839–840 unused program memory, 839 Microprocessor techniques, 53, 58 Microprocessor watchdog operation, 836 retrigger pulse, 837 testing, 838 timer hardware, 837 Microprocessor-based transmitter definition, 220 features, 222–223 and flowmeters, 229–236 and intelligent transmitter, 222–224 density transmitters, 236–239 pressure and differential transmitter, 226–229, 238 temperature transmitter, 224–226 user experience with, 246–247 measurement systems, 240 liquid level, 239 Microprocessors, 753 Microscope counting, basic methods, 181–182 Microwave instruments, 433–434 Microwave spectroscopy, 355–356 Microwave time-transit methods, 109–110 Microwave-frequency measurement, 495–497 Milling, 644 chemical and electrochemical, 645 Mineral-insulated thermocouples, 302–303 Miniature in-line filter, 666 Mixer unit, 545
Mobile phase in HPLC system, 330 Model 3051C transmitters, 227–229 Model predictive control (MPC), 24, 619 emergence of, 623 licensors of software, 625 problems with, 625–626 recommendations for using, 627 vs. ARC, 623–624 Model RTF9739 transmitter, 231, 233 Modems, 692–693 Modified Wheatstone bridges, 476 Moiré fringe position-sensing methods, 88, 89 Moiré-type fringe pattern, 200 Moisture in gases, 429–430 in liquids and solids, 430–431 Moisture measurement, 348 calibration, 435–436 definitions, 429–430 in gases, 399, 431–433 in liquids, 399, 433–434 by neutrons, 555–557 in solids, 434–435 Moisture meter, 556 Monochromator, 346 Moore’s law, 617 Motion-balance controllers, 717, 723–725 configuration of, 724 for temperature measurement, 718 Moving-iron instruments damping of, 448 errors, 451 friction in, 450 inductance effects in, 450, 451 steady-state deflection, 450 Moving-pointer indicators, 699, 702 Moving-scale indicators, 700, 702 MPC. See Model predictive control MTBF. See Mean time between failures MTL 418 transmitters, 224–225 Muller bridge, 477 Multichannel analyzer (MCA), 544 Multichannel spectrometer, 354 Multidecade ratio transformers, 484 Multimeters, 448 specification, 449 Multiple-electrode cells, 369–370 Mutual-inductance methods, 85
N
Nanotechnology definition of, 217 for pressure transmitters, 217 Narrow-band analyzers, 606–607 National Physical Laboratory (NPL), 440 NDT. See Non-destructive testing Negative temperature coefficient thermistors, 290–291 Nernst equation, 380 Neutron activation, 357, 552–553 Neutron moderation, 435 Neutron sources, 522 Neutron-moisture gauges, calibration, 557 Neutron-proton reactions, 540
Neutrons for level measurement, 560 for moisture measurement, 555–557 Neutrons detection, 538–541 Newspeak, 771 Newtonian behavior, 69–70 Nickel resistance thermometers, 289 Nitrogen oxides analyzer, 425–426 Noise labeling, 596 Noise measurement calibration of instruments, 609 for diagnostic purposes, 596 for engineering design/noise-control decisions, 595–596 for evaluating noise effect on human beings, 595 frequency analyzers, 604–607 humidity and rain effect on, 613 instrumentation for acoustic calibrators, 603–604 frequency weighting networks and filters, 600–601 microphones, 596–599 noise-exposure/dose meters, 603 sound-level meters, 601–603 other noises effect on, 613 recorders, 604–608 standards for, 597 temperature effect on, 613 wind effect on, 613 Noise-exposure/dose meters, 603 Noncontacting sensor, 78 Non-destructive testing (NDT) certification of personnel, 590 developments in, 590 purpose of, 567 radiography, 580–586 standards in ultrasonic testing, 590–591 surface inspection, 567–571 ultrasonic. See Ultrasonic non-destructive testing underwater. See Underwater non-destructive testing visual inspection, 568 Nondispersive infrared analyzers, 341–342 Non-dispersive X-ray fluorescence analysis, 554–555 Nonintrusive simulation interfaces, 25 Non-Newtonian behavior, 69–70 Nonsinusoidal excitation, 53 Normal distributions. See Gaussian distributions North Sea oilfields, 783 Nuclear gauging instrumentation, 521 electronics for, 541–546 Nuclear gauging system, 105 Nuclear magnetic resonance spectroscopy, 357 Nuclear measurement techniques materials analysis, 552–559 mechanical measurements, 559–563 optimum time of measurement, 551 Nucleonic belt weigher, 561 Numerical aperture (NA) of optical fiber, 682 Nutating-disc flowmeter, 45
O
Octave band analyzers, 604–605 Ohm, absolute determination, 443–444 Omnidirectional antennas, 258 One-shot time-interval measurement, resolution, 495 On/off control, 23 Open circuit failure, 766 Open loop control. See Feed forward control Open System Interconnection (OSI) reference model, 244 Operational amplifiers, voltage effect on failure rates, 745 Optical fiber, 681–683 in anemometry applications, 202–203 multimode, 192, 197 numerical aperture of, 203 single mode, 192, 195, 197 Optical fiber sensors. See also Doppler anemometry; Interferometric sensing techniques classification, 192 distributed sensor, 192 extrinsic sensor, 192 in-fiber sensing structures, 210–215 intrinsic sensor, 192 low-coherence high temperature, 197 modulation parameters, 192–193 performance criteria, 193 performance-and market related factors, 191 point sensor, 192 principles of, 192–193 two-wavelength intensity modulated, 193 Optical instruments, 499 Optical interferometer, 79, 81 Optical position-sensitive detectors, 87, 88 Optical properties measurement, 514–518 Optical radiation thermometers, 314 Opto-couplers, 841 OR logic function, 781 Organic scintillators, 530, 531 Orifice plate, 37 advantages and disadvantages, 37 Oscillating disc sensing, 57 Oscillatory fluidic flowmeters, types, 56–58 Oscilloscope analog, 706 digitizing, 708–709 dual-beam, 706 frequency and phase measurement, 497 sampling, 708 storage, 707–708 Ostwald viscometer, 71 Oval-gear flowmeter, 46 Oven-stabilized oscillators, 490 Oxygen analyzers, paramagnetic, 421–424. See also specific oxygen analyzers Oxygen probes, high-temperature ceramic sensor, 396–397 Oxygen sensor galvanic Mackereth electrode, 397–398 microfuel cell, 397 polarographic, 395–396 Ozone analyzer, 424–425
P
Paints in instrument construction, 644 Panel mounting instruments, 647, 648 Paper chromatography, 328–329 Paper dielectric capacitors, voltage effect on failure rates, 745 Parallel systems, reliability, 748 Parallel-plate rheometer, 73 Parallel-plate viscometer, 73 Paramagnetic oxygen analyzers, 421–424 Parasitic reactances, 848 Parity-bit coding, 689 Particle sizing Feret diameter, 176, 177, 182 Gaussian distributions, 178–179 image shear diameter, 176 log-normal distributions, 179–180 Martin’s diameter, 176 projected area diameter, 176 relative frequency plot, 178 Rosin–Rammler distributions, 180 sampling, 175 statistical mean diameters, 176 Stokes diameters, 176 Particles characterization of, 175–176 measuring size of, 180–183 measuring terminal velocity of, 183–188 methods for characterizing groups of, 178–180 optical effects caused by, 177 shape, 177–178 Passenger lifts, 766 PCB. See Printed circuit board PCM. See Pulse code modulation Pellistor, 411 Peltier effect, 294–295 Penning ionization gauge, 170–171 Period measurement, relative accuracy of, 492 Peristaltic pumps, 669 Permanent magnet-moving coil instruments, 445–448 AC voltage and current measurement, 447 characteristics of, 447 construction, 445 FSD, 446–447 torque in, 445, 447 Permeability of gases, 671 Permeation-tube calibrators, 427–428 pH buffer solutions, 378 of common acids, bases, and salts, 377 common ion effect, 378 electrode, 383, 385 general theory, 375–376 hydrolysis, 376, 378 measurements, 384–390 measuring circuit using field effect transistor, 386 and Na ion error, relationship of, 387 neutralization, 376 scale, 376 standards, 376 pH meters, 381, 387
Phase measurement using Lissajous figures, 497 Phase shift keying (PSK), 692 Phaselock loop (PLL), 207, 208 Phase-sensitive detection system, 86 Phasor diagram for three-voltmeter method, 465 Phosphors characteristics, 705 Photo-acoustic spectroscopy, 355 Photoconductive detectors, 503–504 Photodiode array, 505 Photoelastic visualization methods, 579 Photoelasticity of strain gauges, 100–101 Photoelectric radiation thermometers, 316 Photography, 587 Photo-ionization detector (PID), 407–408 Photometers, 509 Photomultipliers, 502–503, 532 Photon counting detector, 508 Photopotentiometer, 88 Photosedimentation, 184 Photovoltaic detectors, 503–504 pH-to-current converter, 386 Physical vapor deposition (PVD), 217 Piezoelectric ceramic transducers, 55 Piezoelectric effect, 154, 155 Piezoelectric force sensor oscillator mode for, 157 resonant, 157 Piezoelectric humidity instrument, 433 Piezoelectric (PZT) sensors, 122–123 amplifiers for, 123–124 Piezoelectric transducer, 130 Piezometer ring, 38 Piezoresistive pressure sensors, 155–156 Pin assignments for IEEE-488 interface, 696 for RS-232 standard, 694 for RS-449 standard, 695 PIon meters, 391 Pipe flow, hydraulic conditions for, 33 Pipe joining methods, 671 Pipe prover, 67 Piper Alpha explosion, 784 Piping systems, 656–657, 659 Pirani gauge, 169–170 Pistonphone, 604, 605 Pitot tube for pressure measurement, 64–65 Plain-wire thermocouples, 301 Planck’s radiation law, 518 Plant commissioning, 660 Plant optimization, 21 Plasma displays, 703, 704 Plastic film detectors, 524 Plastic scintillators, 531–532 Plastics in instrument construction, 644 Plating, 335 Platinum resistance thermometers, 287 Pneumatic force-balance pressure transmitters, 159–160 Pneumatic instrumentation analog square root extractor, 729, 730 automatic manual transfer switch, 727, 728 batch-switch system, 727–729 buoyancy measurements, 721
characteristics of, 715 controllers, 723–729 diaphragm motor actuators, 732–733 electropneumatic positioners, 735 flapper/nozzle system, 716, 717, 724 force-balance controllers, 725–729 integrators, 729, 730 level measurements, 721 measurement and control systems, 716–717 motion-balance controllers, 717, 723–725 pneumatic-to-current converters, 730–732 pressure measurement, 718–721 speed, 722 summing unit and dynamic compensator, 729–732 temperature measurement, 717–718 transmission, 722–723 valve positioner, 733, 734 Pneumatic motion-balance pressure transmitters, 159 Pneumatic signals, 656 Pneumatic transmission systems, 677 Point velocity measurement electromagnetic velocity probe, 65 hotwire anemometer, 64 insertion turbine, 65–66 insertion vortex, 66 laser Doppler anemometer, 64 pitot tube, 64–65 propeller current meter, 66 ultrasonic Doppler velocity probe, 66 Poisson distribution, 521 Poisson’s ratio, 93 Polarimeters, 516–518 Polarizing beam splitter (PBS), 206, 207 Polarographic process oxygen analyzer, 395–396 Polarography applications of, 334 differential pulse, 333–334 direct current (DC), 331–332 pulse, 333 sampled DC, 332 single-sweep cathode ray, 332 Polyester capacitor, voltage effect on failure rates, 745 Polymers in instrument construction, 644 Portable instruments, 649–650 Portable thermocouple instruments, 304 Position-sensitive photocells, 87–89 Positive displacement meters applications, 43 fluted-spiral-rotor, 45 liquid sealed drum, 49–50 nutating-disc, 45 oval-gear, 46 principle of measurement, 44 reciprocating piston, 44–45 rotary-piston, 44 rotating-impeller, 50 rotating-vane, 46 sliding-vane, 45–46 Positive temperature coefficient (PTC) thermistors, 291
Positive-displacement pumps, 668–669 Post-deflection acceleration (PDA), 704 Potential drop surface-inspection methods, 570 Potential energy of fluid motion, 32 Potentiometric instruments, 298–299 Power dissipation, 446, 465 in three-phase measurement system, 468–469 Power factor correction (PFC), 808 Power failure, probability of, 768 Power loss, 467 Power management, 265 Power measurement high-frequency, 470–472 three-phase, 468–469 three-voltmeter method of, 465 two-wattmeter method of, 469 Power Reactor Inherently Safe Module (PRISM), 776 Power supplies, 542 high voltage, 543 Power surges, 260 Power-assisted steering, 767, 768 Power-factor measurement, 473–474 Preamplifiers, 116, 543 Precious metal thermocouples, 301 Predictive maintenance, 21 Preferential absorption gauge, 562, 563 Pressure absolute, 145, 149, 162, 165 definition, 145, 165 differential, 145, 162 gauge, 145, 162 units, 165 Pressure energy of fluid motion, 32 Pressure measurement, pneumatic instrumentation for, 718–721 Pressure measurements bellows elements, 151–152, 160 bourdon tubes, 149–150, 158–160 capacitance manometers, 152–154 deadweight testers, 147–149 diaphragm pressure elements, 150–151, 160, 161 digital quartz crystal pressure sensors, 156–158 low pressure range elements, 152 piezoresistive pressure sensors, 155–156 quartz electrostatic pressure sensors, 154–155 Schaffer gauge, 150 strain-gauge pressure sensors, 156 types, 145 units for, 145, 146 Pressure reduction, 662, 670 Pressure sensor feature of, 720 MEMS, 218 white-light interferometry, 198 Pressure tappings, 38, 41 Pressure transducers, 133 Pressure transmitter, 226–229, 238 range limits, 229
Pressure transmitters, 158–159 force-measuring, 160–162 nanotechnology for, 217 pneumatic force-balance, 159–160 pneumatic motion-balance, 159 Prevost’s theory, 307 Printed circuit board (PCB) construction of, 640–642 design, 806 ground plane on double-sided, 821 layout, 819–821 radiated emissions, 805–806 track impedance, 819–820 Probes, 664–665 Process chromatography carrier gas, 416 chromatographic data display, 418–419 columns in, 417 components of, 412, 414 controlled temperature enclosures, 417 data-processing systems, 418–419 detectors, 417–418 gas-chromatographic integrators, 419 operation of, 419–421 programmers, 418 sampling system, 414–415 switching and logic steps, 420, 421 Process control, 23–24 Process disturbance measured, 621–622 unmeasured, 620 Process variable (PV), 621 Profibus-PA, 247–248 Programmable automation controller (PAC), 618 Programmable logic controllers (PLCs), 617 Prompt gamma-ray analysis, 553 Propeller current meter, 66 Proportional-integral-derivative (PID) control, 23–24 algorithm, 622, 626 Proton-recoil counters, 540 Proving rings, 129–130 Proximity transducer, 651 Pseudoheterodyne interferometry, 194–195 Pseudo-plasticity, 70 Psychrometers, automatic, 433 Pulse amplitude modulation (PAM), 685 Pulse code modulation (PCM), 685–686 bit error probability of, 688 Pulse code modulation (PCM) techniques, 713 Pulse output, 223 Pulse polarography, 333 Pulse telemetry, 678 Pulsed frequency-modulation spectroscope, 578 Pulsed laser grating method, 213 Pulse-duration modulation (PDM), 686 Pulse-echo spectroscopy, 578 Pulse-height analyzers, 543–544 Pulse-position modulation (PPM), 686 Pulse-width ADCs, 458–460 effect of time-varying input on, 461 Pumps gas, 666–668 in HPLC system, 329–330 liquid, 668–669
Purdue model, 14 Pyroelectric detectors, 313, 504–505 Pyrometric cones, 324
Q
Q meter, 488, 489 Quadrupole mass spectrometer, 361–362 Quality factor, 478 Quartz crystal oscillator, 489 characteristics, 492 frequency stability of, 490 instrument, 432–433 Quartz crystals, 157 Quartz electrostatic pressure sensors, 154–155 Quartz spiral gauge, 167 Quincke oxygen analyzer, 423
R
Race hazard phenomenon, 754 Rack-mounting instruments, 649 Radiant emission effect, 316 Radiated coupling, 809 cable loading, 810 cable resonance, 810 cavity resonance, 811 current injection, 810 field generation, 803–804 modes, 805 wave impedance, 804–805 Radiation, 271 acoustic, use of, 87–90 from cables, 806, 807 clock and broadband, 828 for density measurement, 140 electromagnetic, use of, 87–90 errors, 322 from logic circuits, 826–830 from PCB, 805–806 reflected, 348 Radiation pyrometers. See Radiation thermometers Radiation shield, 534 Radiation sources, 522 Radiation surveys, undersea, 563 Radiation thermometers, 306 applications, 318–319 calibration, 312 optical, 314 photoelectric, 316 pyroelectric technique, 313 signal conditioning for, 318 surface, 310–312 total, 309–312 types, 307 Radio bands, 254 Radio frequency (RF) bridges, 487 signals, 254 transmission, 681 Radio interference, 255 Radio noise, 255 Radio spectrum, 254 Radio test, 261 Radio transmitters, AM, 771–772 Radioactive decay, 523–524
Radioactive measurement relations, 550 Radioactive source, health and safety, 525–526 Radiocarbon dating, 563–565 calculation of, 564 statistics of, 564 Radiocarbon time scale, calibration of, 565 Radiofrequency demodulation, 840 Radiography application, 580 fluoroscopic method, 585–586 gamma rays, 581, 582 image-intensification method, 585–586 in non-destructive testing, 581 sensitivity and image indicator, 582–585 xerography, 585 X-rays, 582 Radioisotope calcium monitor, 558–559 Radioisotopes, applications of, 549 Radiometrical surveys, land-based, 563 Railway signaling and control, 774–775 Rankine temperature scale, 273–274 Ratio control, 24 Reactor control principles of, 776–779 requirements for, 776 Reactor protection logic, 781–782 Real-time rigorous optimization (RETRO), 627–628 Reciprocal frequency counters, 490, 491 vs. time graph, 738 Reciprocating piston flowmeter, 44–45 Reciprocating piston pump, 668 Recorders, 701 circular chart, 711 data loggers, 713 digital transient, 607–608 galvanometer, 711–712 graphical, 709–710 level, 607 magnetic, 712–713 strip chart, 710–711 tape, 608 transient/waveform, 713 XY plotters, 607 Rectangular notch, 60–61 Redox electrodes, 382, 390 Redundancy analog, 758–759 level of, 758 with majority voting circuit, 757–758 parallel and series, 786 of railway signalling, 775 three-channel, 775 use of, 756–757, 786 Redundant control system, 767 Redundant power supply switching, 760 Reference electrodes, 382–384 Reference voltage sources, 461 Reflected radiation, 348 Refractive index, 515 Refractometers, 514–516 Abbé refractometer, 515 of solid samples, 515–516 Relay contacts, 766 Relay power supply, failure of, 766
Relay tripping circuits, 766 Relay-driving circuit, 767 Reliability budgets, 755 conditions for, 737 cost of, 776 definition of, 737 hardware and software, 768–769 and MTBF, 737–738 of oil supply, 784–785 optimum, 739–740 parallel systems, 748 program, 770 of signaling system, 774 of system components, 740–741 and total life cost, 740 Reliability engineering, 21 Remote reading thermometers, 319 Residual chlorine analyzer, 393–395 Resistance measurement, 462 Resistance thermometers, 286–290 connections, 289–290 construction of, 289 nickel, 289 platinum, 287 temperature/resistance relationship of, 287–289 Resistivities, 287 Resistors equivalent circuits of, 477–478 equivalent series/parallel, 480 Resonant elements, density measurement using, 140–143 Resonant wire pressure transmitter, 161 Response time, 224 Reynolds number, 32, 35, 57 RF. See Radio frequency Robotic systems, 775–776 Rod thermostat, 285 Root mean square (RMS) measurement, 462 Rosette of gauges, 95 Rosin–Rammler distributions, 180 Rotameter, 39 Rotary gas meter assemblies, 51 Rotary pumps, 667 Rotary vane pumps, 669 Rotary-piston flowmeter, 44 Rotating mechanical meters for gases, 48–51 for liquids, 43–48 Rotating-impeller flowmeter, 45, 50 Rotating-vane flowmeter, 46, 51 Routing algorithms, 262 Royal Air Force Institute of Aviation Medicine (IAM), 763 RS-232 standard, pin assignments for, 694 RS-449 standard, pin assignments for, 695 RS-232/RS-485 interfaces, 163 Rubidium oscillator, 489
S
Safe failure, 779 Safety earthing and bonding, 791
electrocution risk, 790–791 flameproof, 794 flammable atmospheres, 791–795 in hazardous area, 792, 793, 795 intrinsic, 794 in non-hazardous area, 792, 793, 795 standards for, 790, 792 Safety features, robotic systems, 775–776 Safety procedures, 764–765 Salt-in-crude-oil monitor, 375 Sample disposal, 662 Sample probe, 662, 664 Sample systems components, 662 coalescers, 666 coolers, 666 filters, 665–666 gas pumps, 666–668 liquid pumps, 668–669 pressure-reduction stage in, 670 probes, 664–665 construction, 663 flow measurement, 669–670 Sample-conditioning system, 662 Sampled DC polarography, 332 Sample-transport system, 662 Sampling, 661 analysis equipment parts, 662 gas oil, 675–676 mixed-phase, 662 representative, 661–662 steam, 673, 674 time lags, 662–663 Sampling oscilloscopes, 708 Sampling wattmeter, 469–471 Satellite links, 772 Scattering, 534 Scattering coefficient, 177 Schaffer pressure gauge, 150 Schlieren visualization methods, 579 Scintillation counters, 524, 535 Scintillation detectors, 528–532 Sealers, 543 Sedimentation, 183 cumulative methods centrifugal methods, 187 decanting, 186 sedimentation balance, 184–185 sedimentation columns, 185–186 two-layer methods, 186 incremental methods Andreasen’s pipette, 183–184 density-measuring methods, 184 photosedimentation, 184 Seebeck effect, 294 Segmental orifice plate, 37 Seismic sensors, mass-spring, 118 open-loop, 118–119 Self-absorption, 534 Self-balancing level gauge, 110 Self-tuning controllers, 24 Semiautomatic microscope size analyzers, 182–183 Semiconductor, temperature measurement, 291–293
Semiconductor detector. See Solid-state detectors Semiconductor gauges, 94–95 Sensor, 220 basics of, 20–21 validation, 249 Sensor instrumentation, fiber optics in. See Optical fiber sensors Sensor/transmitter combination, 20 Sequence control, 26 Series mode rejection (SMR), 458 Servo accelerometers, 119 Servomex oxygen analyzer, 424 Shear strain, 93 Shear thickening, 70 Shear viscosity measurement, 71–73 Shear-thinning behavior, 69 Sheathed thermocouples, 302 Shielding, 858 apertures effects, 860–862 coating properties, 864–865 with conductive coating, 863, 865 enclosure design, 863–864 hardware, 862–865 mesh and honeycomb, 861 performance, 863 seam and aperture orientation, 862 seams effects on, 861 theory, 859–860 use of subenclosure, 861 windows and ventilation slots, 860 Shielding effectiveness (SE), 859–860 Shock calibration, 117 measurement of, 124 Shop-floor viscometers, 73–74 Short circuit failure, 766 Shunt leakage resistance, 476 Shunt meter. See Bypass meters Shuttle ball sensing, 57 SI units base units, 439–440 electrical, 439 Sieving, 180–181 Sight gauges, 107 Sight-glass level indicator, 107 Signal coding, 749–750 Signal multiplexing, 684–685 Signal processing techniques, 841 Signaling system, reliability of, 774 Signal-to-noise ratio (SNR), 255 Silicon junction diode, 291–292, 504 Silver/silver chloride electrode, 382 Simulation system in industrial automation, 25–26 selection, 26 Single side-band modulation (SSB), 687 Single-beam technique, 512 Single-shot time interval measurements, 492, 493 Single-sweep cathode ray polarography, 332 Sinusoidal AC excitation, 52–53 Site mounting instruments, 647 Size analysis methods, 180–182 Sliding-contact sensors, 83
Sliding-vane flowmeter, 45–46 Small-volume sample probe, 664 Smart grid technology, 28 Smart transmitter, 221 Smart valves, 635 Smith bridge, 477 Sneak circuits, 754–755 Snell’s law, 350 Soap-film burette, 68 Soft X-ray effect, 172–173 Software reliability vs. hardware reliability, 768–769 Solar radiation, 519 Solid, sampling, 661 Solid expansion, 270, 271, 285–286 Solid-state amplifiers, 772 Solid-state detectors, 411–412, 524, 532–533, 538 Solid-state electrodes, 381 Sound nature of, 593–594 power determination, 611–612 by sound intensity, 612–613 Sound energy level (SEL), 602, 603 Sound pressure level, 593, 594. See also Noise measurement calibrator, operating principle of, 604 measurement of, 609–610 instrumentation for, 596–604 sound power determination, 611–613 space averaging, 611 statistical distribution and percentiles, 611 time averaging, 610–611 Sound source/field, quantities characterizing, 594 Sound waves, velocity of propagation, 594–595 Sound-intensity analyzers, 608–609 Sound-level meters calibration, 609 integrating, 601–603, 611 statistical, 603 time weighting in, 610 types of, 601 Spatial fringe pattern, 198, 199 SPD. See Spectral power distribution Specific heat capacity, 269 Specific heats of gas, 35 Spectral power distribution (SPD), 512 Spectra-Tek Sentinel Flow Computer, 240 Spectrometer, 563 multichannel, 354 Spectrophotometer, 346, 510–512 Spectroradiometers, 512 Spectroscopy absorption and reflection techniques chemiluminescence, 349 infrared, 341–346 radiation, reflected, 348–349 visible and ultraviolet, 346–348 atomic absorption, 351–352 atomic emission, 349–351 electron paramagnetic resonance, 356 gamma ray, 357 microwave, 355–356
nuclear magnetic resonance, 357 photo-acoustic, 355 x-ray fluorescence, 353–355 Speech communication, human operator, 761 Sphere fast-neutron detector, 541 Spread spectrum, 256–257 Spring balances method, 129 Spring torque motor, 108 Springs, 647 Square-wave excitation, 53 Stabilizing collector current using emitter resistor, 752 Stable capacitors, temperature effects on, 752 Standard resistors, 443 Standards for instrumentation, 869 and specifications for wireless, 263 Static calorimetric techniques, 470 Static electricity, elimination of, 565 Stationary phases in HPLC system, 330 Statistics of counting, 521–524 nonrandom errors, 523 Status information, 224 Statutory instruments (SIs), 866 Steam ejection probe, 665 Steam sampling for conductivity, 673, 674 Steam-injection probe, 672 Steel thermometer, mercury filled, 278–280 Stefan–Boltzmann law, 307 Storage oscilloscopes, 707–708 Strain, 93 Strain gauge transducer, 751 Strain gauges bonded resistance, 93–95 capacitive, 99 characteristics of, 95–96 circuits for, 98 cross-sensitivity, 96 foil, 94 installation of, 96–97 for measuring residual stress, 95 photoelasticity of, 100–101 rosette, 95 semiconductor, 94–95 surveys of surfaces, 99–100 temperature sensitivity, 96 vibrating wire, 98–99 wire, 94 Strain sensing, 57 Strain-gauge load cells applications, 132 calibration, 132–133 design, 130–131 selection and installation, 131–132 Strain-gauge pressure sensors, 156 Stray capacitances in four-arm AC bridge, 483 Stray impedances in AC bridges, 480–482 effect on transformer ratio bridges, 486 Streamlined flow, 32 velocity profile, 32 Strip chart recorders methods, 710 specifications for, 711
Structured programming, 770–771 Ada, 771 flowcharts for, 770 Newspeak, 771 Successive-approximation ADCs, 456–457 Sulfur contents measurement in liquid hydrocarbons, 557–558 Superconducting quantum interference device (SQUID), 443 Surface contact thermocouples, 303 Surface mount technology (SMT), 822–823 Surface radiation thermometer, 310–312 Surface temperature measurement, 323 Surface transfer impedance (STI) of cables screening, 844–845 Surface-inspection methods eddy-current, 570–571 magnetic flux, 569–570 potential drop, 570 visual, 568–569 Surface-neutron gauges, 557 Swirlmeter, 57 Switching power supply, 830 capacitive coupling, 831–832 capacitive screening, 832 differential mode interference, 832 magnetic component construction, 831 output noise, 832–833 radiation from di/dt loop, 831 Synchros, 86 System components failure rates of, 745 of reliability, 740–741 System design basics of, 20–21 performance margins in, 750 System interfaces, 262–263 System management, 262 System parameter tolerance, coping with, 751 Systematic error (SE), 492 Systems Analysis and Design Methodology (SSADM), 770
T
Tape recorders, 608 Target flow transmitter, 721 Target flowmeter, 42–43 TCD. See Thermal conductivity detector TDM. See Time-division multiplexing Technical construction file (TCF), 868 Telemetry, 677 in instrumentation, 678 radio frequency, 681 Telemetry transmitting system, reliability diagram of, 748 Temperature compensated oscillator, 490 Temperature controllers, 320 Temperature measurement, 269 computer-compatible, 320 considerations, 319 definitions, 269 instruments, 269 pneumatic instrumentation for, 717–718 realization of, 274 semiconductor, 291–293
sensor location considerations, 320–324 surface, 323 techniques, 276 electrical, 286–293 gas-filled instruments, 281–282 liquid crystals, 324–325 liquid-filled dial thermometers, 278–281 liquid-in-glass thermometers, 274–278 pyrometric cones, 324 radiation thermometers, 306–319 solid expansion, 285–286 temperature-sensitive pigments, 324 thermal imaging, 325 thermocouples, 293–306 turbine blade temperatures, 325–326 vapor pressure thermometers, 282–284 in vessels, 320 Temperature scales, 272 Celsius, 272 comparison, 275 Fahrenheit and Rankine, 273–274 IPTS-68, 273 thermodynamic, 272–273 Temperature sensitivity, strain gauges, 96 Temperature sensors, 60 Temperature transmitter, 224–226 Temperature-sensing integrated circuits, 292–293 Temperature-sensitive pigments, 324 Temporal fringe method, 196 Terminal velocity, 176–177 measurement in particles, 183–188 Thalamid electrodes, 384 Thermal analysis, 335–339 Thermal conductivity, 270 Thermal conductivity detector (TCD), 404–406, 417 multistream chromatograph with, 416 Thermal conductivity gauges, 169–170 Thermal EMF noise, 255 Thermal expansion, 270 Thermal imaging, 325 techniques, 518–519 Thermal mass flowmeters, 60 Thermal neutrons, 521 Thermal sensing, 57 Thermal stability of organic materials, 337 Thermistor, 406 gauge, 170 Thermistor power meter, 470 RF power, 470, 471 Thermistor system, equivalent circuit of, 470 Thermistors, 290–291 Thermocouple gauge, 169 Thermocouple instruments, 454–455 Thermocouple wattmeter, 467 Thermocouples, 293–306 accuracy, 305 availability, 301 British Standards, 300 circuit, 298–299 cold junction compensation in, 297–298 construction, 301–306 materials, 299–301 Thermodynamic temperature scale, 272–273
Thermoelectric diagram, 295–296 Thermoelectric effects, 293–299 Thermoelectric EMFs, addition of, 296–297 Thermoelectric inversion, 296 Thermoluminescence (TL), 525 Thermoluminescent detectors, 525 Thermometer bulb, 279 Thermometer pockets, 322–323 Thermometers bimetal strip, 285–286 gas-filled, 281–282 high-temperature, 277–278 liquid-filled, 278–281 liquid-in-glass, 274 liquid-in-metal, 281 mercury-in-glass, 274–278 radiation, 306–319 remote reading, 319 resistance, 286–290 vapor pressure, 282–284 Thermo-oxidative stability of organic materials, 337 Thermopiles, 303, 309 Thermostats, 320 Thermowells, 322–323 Thin-layer chromatography, 328–329 apparatus for, 329 Third-octave analyzers, 605–606 Thixotropy, 70, 72 Thompson-Lampard capacitor, 444 Thomson effect, 295 Three Mile Island (TMI-2) reactor, 776 accident, common mode failure, 780 Three-phase power measurement, 468–469 Three-terminal capacitors, 851 Three-voltmeter method of power measurement, 465 Three-winding voltage transformer, 485 Time constant, 224 Time lags, sampling, 662–663 Time-base error (TBE), 492 Time-division multiplexing (TDM), 684–685 Time-division multiplication wattmeter, 471 Time-interval averaging (TIA), 492–495 Time-lapse pulse holography method, 90 Time-of-flight mass spectrometer, 361 Time-of-flight methods, 89 Time-transit methods, microwave and ultrasonic, 109–110 Total energy of fluid motion, 32 Total radiation thermometer, 309–312 Tracer measurement method, 67 Transducers, 129 buoyancy, 136, 137 combined actuator, 652–653 cross-axis coupling, 116 gas density, 142 liquid density, 141 piezoelectric, 130 pressure, 133 proximity, 651 Transfer oscillator counter, 496, 497 Transformer ratio bridges, 482 advantage, 482 autobalancing, 487
balance condition of, 486 configurations, 484–486 effect of stray impedances on, 486 equivalent circuit of, 483 loading effect, 484 sensitivity of, 486–487 in unbalanced condition, 486–487 windings, 483 Transient suppression device, 856–857 Transients, 811–812 coupling mode, 812 interference paths, 833–834 protection, 834 on signal lines, 812 Transient/waveform recorders, 713 Transistor accelerated life tests for, 745 dissipation, 745 junction temperature, 744–745 stabilizing current in, 753 Transistor-transistor logic (TTL) circuit, 750 Transit-time methods, 89–90 Transmission analog signal, 678, 689–690 data, 693–697 digital signal, 678, 690–697 frequency, 690 radio frequency, 681 Transmission lines, 679–681 distributed primary constants of, 679 ringing on, 828–829 Transmissive flowmeters principle of operation, 54–55 velocity measurement, 56 Transmitters, 219, 220 temperature, 319 Traveling-wave tube (TWT), 772 Triangular notch, 61–62 Triboelectric charge, 812 Trigger error (TE), 491 Tristimulus colorimeters, 513 Troubleshooting tips, 261 True mass-flow measurement methods, 58–60 fluid-momentum, 58–59 pressure differential, 59–60 Tube joining methods, 672 Tungsten lamp, 500 Turbidity-nephelometer, 434 Turbine blade temperatures, 325–326 Turbine current meter, 63 Turbine meters, 47–48, 51 Turbulent flow, 32 velocity profile, 32 Two-dimensional statistical mean diameters, 176 Two-terminal passive network, 465
U
U.K. primary standards DC and low-frequency, 441 of resistance, 443 RF and microwave, 441–442 of voltage, 442 monitoring absolute value, 443 Ultrasonic detector, 410–411, 418 Ultrasonic Doppler velocity probe, 66
Ultrasonic flaw detectors, 574 Ultrasonic flow measurement method, 64 Ultrasonic flowmeters, types of, 54–56 Ultrasonic non-destructive testing, 588 acoustic emission, 580, 581 automated testing, 580 equipment controls and visual presentation, 574–575 principles of, 571–574 probe construction, 576 underwater, 588 Ultrasonic scanning, bulk, 589 Ultrasonic sensing, 57 Ultrasonic spectroscopy, 577–578 applications of, 578–579 Ultrasonic time-transit methods, 109–110 Ultraviolet intensity measurements, 509–510 Undersea radiation surveys, 563 Underwater non-destructive testing, 586–587 AC potential difference (AC/PD), 589 acoustic emission, 589–590 bulk ultrasonic scanning, 589 corrosion protection, 588 diver operations and communication, 587 eddy-current, 589 magnetic particle inspection, 588 photography, 587 ultrasonics, 588 visual examination, 587 Uninterruptible power supply (UPS), 759 Units, electronic, 544–546 Universal asynchronous receiver transmitters (UARTs), 690, 691 Universal timer/counters, 489 specifications, 494 Universal transformer bridge, 485 Unmeasured disturbance, 620 U.S. Nuclear Regulatory Commission (USNRC), 541 User-friendly design, 762–764 U-tube manometer, 145
V
Vacuum measurements absolute gauges, 165, 166–168 ionization gauges, 170–173 liquid manometers, 167 McLeod gauge, 167–168 mechanical gauges, 166–167 methods, 165 nonabsolute gauges, 165, 166, 169–173 thermal conductivity gauges, 169–170 Vacuum spectrographs, 350 Valves, control characteristics, 633–634 as flowmeters, 635–636 loop tuning, 634–635 positioning, 635 rangeability of, 634 smart, 635 types of, 631–633
Vapor pressure methods, 433 Vapor pressure thermometers, 282–284 liquids used in, 283 Vaporization, 670 Variable-orifice meter, 39–42, 669 Variable-reluctance methods, 85 VDUs. See Visual display units Velocity of approach factor, 34 measurement of, 120–121 sensor, 121 Velocity profile, 32 Velometers. See Deflecting-vane flowmeter Venturi flume, 62 Venturi nozzle, 38 Venturi tube advantages and disadvantages, 38 application, 38 components, 37 pressure tappings in, 38 Verifiable integrated processor for enhanced reliability (VIPER), 753 Vestigial side-band modulation (VSM), 687 Vibration parameters, frequency spectrum and magnitude of, 114 physical considerations, 113–116 wire strain gauge, 94, 98–99 Vibration measurement, areas of application, 116–117 Vibration monitoring frequency modulated laser diode, 209–210 heterodyne modulation, 206–208 pseudoheterodyne modulation, 208–209 using Bragg cell frequency shifter, 206 using phaselock loop demodulator, 206, 207 Vibration sensor practical problems of installation, 116 reference grade, 208 Vibrometer, 120 Viscometer, 69 Brookfield, 73 capillary, 71–72 cone-and-plate, 72–73 Couette, 72 industrial, 73 Ostwald, 71 parallel-plate, 73 shop-floor, 73–74 Viscosity measurement accuracy and range, 74–75 extensional, 74 kinetic-energy correction, 71 online, 74 shear, 71–73 under temperature and pressure, 74 Viscosity sensor, MEMS, 218 Visual display units (VDUs), 704, 709 Visual surface-inspection methods, 568–569 Voltage dependent resistor (VDR), 854
Voltage derating, 745 Voltage regulator, 542 Voltage surges. See Power surges Voltage transformers (VTs) AC range extension equivalent circuit, 453 phase-angle error/phase displacement, 453, 454 phasor diagram, 453 ratio errors, 454 voltage/ratio error, 453, 454 Voltage-controlled oscillator (VCO), 55, 198 Voltage-doubling circuit, 542 Volume flow rate, 43 Volume rate, flow equation for, 62 Volumetric calibration method, 67 Vortex flowmeter, 234–236 calibration factor for, 57 installation parameters for, 57 principle of operation, 56 Vortices, sensing methods for, 57 Voting operation, 773
W
Wagner earthing arrangement, 478, 482, 483 Wallmark, 88 Water purity and conductivity, 372 Water-displacement method, 68 Water-sampling system, 676 Water-wash probe, 664–665 Watson image-shearing eyepiece, 182 Watt-hour meter phasor diagram of eddy currents in, 473 fluxes, 473 torque in, 472–473 Wave impedance, 804–805 Wavelengths and color, 510–514 of thermal radiation, 272 transmission, 305 vs. radiant emission effect, 316 Wear measurement with nuclear techniques, 559 Wear-out phase, failure rate, 742, 747 Weight-loss curves, 336 Weirs, 60–62 Westinghouse instrument, 586 Weston mercury cadmium cells, 442–443 Wet gases density, 36 humidity, 36 Wheatstone bridge cell conductance measurement by, 369 modified, 476 resistance measurement, 474 self-heating in, 475 sensitivity, 474 three-lead measurements using, 476 with three-terminal high resistances, 478 unbalanced mode, 475
Whip antennas, 259 White-light interferometry, 195–201 electronically scanned method, 198–201 for force measurement, 200 pressure sensor, 198 temporally scanned method, 196–198 Wien’s laws, 307 WiFi 802.11, 262 WiFi Wired Equivalent Privacy (WEP) encryption, 257 WiMax, 263 Wire strain gauge, 94, 98–99 Wireless cellular, 263 communication, 254 functionality and applications, 264 history of, 253–254 information, 254 LANs, 253
mesh technologies, 262 planning for, 263–265 reliability, 256 standards and specifications for, 263 system security, 257–258 WirelessHART, 263 Wire-wound resistors, temperature effects, 752 Wollaston prism, 200 World Batch Forum (WBF), 630
X
Xerography, 585 X-ray fluorescence analysis coating and backing measurement by, 562 dispersive, 553 non-dispersive, 554–555 sources for, 554
X-ray fluorescence spectroscopy, 353–355 X-ray sedimentation, 184 X-rays, 582 detector, 559 diffraction, 355 fluorescence gauge, 562, 563 XY plotters, 607 x–y recorders, 712
Y
“Y” strainers, 665 Yagi antenna, 258–259 Yield stress, 70 Yokogawa vortex flowmeter, 234, 236
Z
Zeiss–Endter analyzer, 183 Zener diode, 460, 542 ZigBee, 263