Handbook of Surface and Nanometrology
Second Edition

David J. Whitehouse
University of Warwick
Coventry, UK
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
A TAYLOR & FRANCIS BOOK
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2011 by Taylor and Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number: 978-1-4200-8201-2 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Whitehouse, D. J. (David J.)
Handbook of surface and nanometrology / by David J. Whitehouse. -- 2nd ed.
p. cm.
Rev. ed. of: Handbook of surface metrology. c1994.
Includes bibliographical references and index.
ISBN 978-1-4200-8201-2
1. Surfaces (Technology)--Measurement. I. Whitehouse, D. J. (David J.). Handbook of surface metrology. II. Title.

TA418.7.W47 2011
620'.440287--dc22    2010014776

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com
and the CRC Press Web site at http://www.crcpress.com
This book is dedicated to my wife Ruth who has steered me through many computer problems, mainly of my own making, and cajoled, encouraged, and threatened me in equal measure to carry on when my willpower flagged.
Contents

Preface ........ xxi
Acknowledgments ........ xxiii

Chapter 1. Introduction—Surface and Nanometrology ........ 1
    1.1 General ........ 1
    1.2 Surface Metrology ........ 1
    1.3 Background to Surface Metrology ........ 1
    1.4 Nanometrology ........ 2
    1.5 Book Structure ........ 2
Chapter 2. Characterization ........ 5
    2.1 The Nature of Surfaces ........ 5
    2.2 Surface Geometry Assessment and Parameters ........ 7
        2.2.1 General—Roughness Review ........ 7
            2.2.1.1 Profile Parameters (ISO 25178 Part 2 and 4287) ........ 9
            2.2.1.2 Reference Lines ........ 16
            2.2.1.3 Filtering Methods for Surface Metrology ........ 21
            2.2.1.4 Morphological Filtering ........ 34
            2.2.1.5 Intrinsic Filtering ........ 41
            2.2.1.6 Summary ........ 41
        2.2.2 Statistical Parameters and Random Process Analysis of Surface Roughness ........ 41
            2.2.2.1 General ........ 41
            2.2.2.2 Amplitude Probability Density Function ........ 42
            2.2.2.3 Random Process Analysis Applied to Surfaces ........ 44
            2.2.2.4 Areal Texture Parameters, Isotropy and Lay (Continuous) ........ 59
            2.2.2.5 Discrete Characterization ........ 65
            2.2.2.6 Assessment of Isotropy and Lay ........ 68
        2.2.3 Methods of Characterization Using Amplitude Information ........ 71
            2.2.3.1 Amplitude and Hybrid Parameters ........ 71
            2.2.3.2 Skew and Kurtosis ........ 71
            2.2.3.3 Beta Function ........ 71
            2.2.3.4 Fourier Characteristic Function ........ 72
            2.2.3.5 Chebychev Function and Log Normal Function ........ 73
            2.2.3.6 Variations on Material Ratio Curve + Evaluation Procedures ........ 74
        2.2.4 Characterization Using Lateral Spatial Information ........ 79
            2.2.4.1 Time Series Analysis Methods of Characterization ........ 79
            2.2.4.2 Transform Methods Based on Fourier ........ 81
            2.2.4.3 Space–Frequency Transforms ........ 82
            2.2.4.4 Fractals ........ 93
        2.2.5 Surface Texture and Non-Linear Dynamics ........ 98
            2.2.5.1 Poincaré Model and Chaos ........ 98
            2.2.5.2 Stochastic Resonance ........ 99
    2.3 Waviness ........ 100
    2.4 Errors of Form ........ 106
        2.4.1 Introduction ........ 106
        2.4.2 Straightness and Related Topics ........ 107
            2.4.2.1 ISO/TS 12780-1, 2003, Vocabulary and Parameters of Straightness and 12780-2, 2003, Specification Operators, Give the International Standard Position Regarding Straightness and Its Measurement ........ 107
            2.4.2.2 Generalized Probe Configurations ........ 108
            2.4.2.3 Assessments and Classification ........ 109
        2.4.3 Flatness ........ 110
            2.4.3.1 ISO/TS 12781-1, 2003, Vocabulary and Parameters of Flatness and ISO/TS 12781-2, 2003, Specification Operators Give the International Standards Position on Flatness ........ 110
            2.4.3.2 General ........ 110
            2.4.3.3 Assessment ........ 111
        2.4.4 Roundness ........ 114
            2.4.4.1 ISO/TS 12181-1, 2003, Vocabulary and Parameters and ISO/TS 12181-2, 2003, Specification Operators Give the International Standards Current Position on Roundness ........ 114
            2.4.4.2 General ........ 114
            2.4.4.3 Measurement and Characterization ........ 115
            2.4.4.4 General Comments on Radial, Diametral, and Angular Variations ........ 124
            2.4.4.5 Roundness Assessment ........ 126
            2.4.4.6 Roundness Filtering and Other Topics ........ 133
            2.4.4.7 Roundness Assessment Using Intrinsic Datum ........ 135
            2.4.4.8 Eccentricity and Concentricity ........ 138
            2.4.4.9 Squareness ........ 139
            2.4.4.10 Curvature Assessment Measurement from Roundness Data ........ 139
            2.4.4.11 Radial Slope Estimation ........ 144
            2.4.4.12 Assessment of Ovality and Other Shapes ........ 145
        2.4.5 Three-Dimensional Shape Assessment ........ 146
            2.4.5.1 Sphericity ........ 146
        2.4.6 Cylindricity and Conicity ........ 150
            2.4.6.1 Standards ISO/TS 12180-1, 2003, Vocabulary and Parameters of Cylindrical Form, ISO/TS 12180-2, 2003, Specification Operators ........ 150
            2.4.6.2 General ........ 150
            2.4.6.3 Methods of Specifying Cylindricity ........ 151
            2.4.6.4 Reference Figures for Cylinder Measurement ........ 156
            2.4.6.5 Conicity ........ 160
        2.4.7 Complex Surfaces ........ 161
            2.4.7.1 Aspherics ........ 161
            2.4.7.2 Free-Form Geometry ........ 161
    2.5 Characterization of Defects on the Surface ........ 164
        2.5.1 General ISO 8785 Surface Defects ........ 164
        2.5.2 Dimensional Characteristics of Defects ........ 165
        2.5.3 Type Shapes of Defect ........ 165
    2.6 Discussion ........ 165
    References ........ 167

Chapter 3. Processing, Operations, and Simulations ........ 171
    Comment ........ 171
    3.1 Digital Methods ........ 171
        3.1.1 Sampling ........ 171
        3.1.2 Quantization ........ 173
        3.1.3 Effect of Computer Word Length ........ 174
        3.1.4 Numerical Analysis—The Digital Model ........ 175
            3.1.4.1 Differentiation ........ 175
            3.1.4.2 Integration ........ 176
            3.1.4.3 Interpolation and Extrapolation ........ 176
    3.2 Discrete (Digital) Properties of Random Surfaces ........ 177
        3.2.1 Some Numerical Problems Encountered in Surface Metrology ........ 177
        3.2.2 Definitions of a Peak and Density of Peaks ........ 177
        3.2.3 Effect of Quantization on Peak Parameters ........ 178
        3.2.4 Effect of Numerical Analysis on Peak Parameters ........ 178
        3.2.5 Effect of Sample Interval on the Peak Density Value ........ 180
        3.2.6 Digital Evaluation of Other Profile Peak Parameters ........ 182
            3.2.6.1 Peak Height Measurement ........ 182
            3.2.6.2 Peak Curvature ........ 183
            3.2.6.3 Profile Slopes ........ 184
        3.2.7 Summary of Profile Digital Analysis Problems ........ 186
        3.2.8 Areal (3D) Filtering and Parameters ........ 186
        3.2.9 Digital Areal (3D) Measurement of Surface Roughness Parameters ........ 188
            3.2.9.1 General ........ 188
            3.2.9.2 The Expected Summit Density and the Distributions of Summit Height and Curvature ........ 189
            3.2.9.3 The Effect of the Sample Interval and Limiting Results ........ 190
        3.2.10 Patterns of Sampling and Their Effect on Discrete Properties (Comparison of Three-, Four-, Five- and Seven-Point Analysis of Surfaces) ........ 192
            3.2.10.1 Four-Point Sampling Scheme in a Plane ........ 192
            3.2.10.2 The Hexagonal Grid in the Trigonal Symmetry Case ........ 193
            3.2.10.3 The Effect of Sampling Interval and Limiting Results on Sample Patterns ........ 195
        3.2.11 Discussion ........ 198
    3.3 Digital Form of Statistical Analysis Parameters ........ 199
        3.3.1 Amplitude Probability Density Function ........ 199
        3.3.2 Moments of the Amplitude Probability Density Function ........ 201
        3.3.3 Autocorrelation Function ........ 201
        3.3.4 Autocorrelation Measurement Using the Fast Fourier Transform ........ 202
        3.3.5 Power Spectral Density ........ 202
    3.4 Digital Estimation of Reference Lines for Surface Metrology ........ 203
        3.4.1 General ........ 203
        3.4.2 Convolution Filtering ........ 204
            3.4.2.1 Repeated Convolutions ........ 205
        3.4.3 Box Functions ........ 206
        3.4.4 Effect of Truncation ........ 207
        3.4.5 Alternative Methods of Computation ........ 208
            3.4.5.1 Overlap Methods ........ 208
            3.4.5.2 Equal-Weight Methods ........ 208
        3.4.6 Recursive Filters ........ 209
            3.4.6.1 The Discrete Transfer Function ........ 209
            3.4.6.2 An Example ........ 211
        3.4.7 Use of the Fast Fourier Transform in Surface Metrology Filtering ........ 212
            3.4.7.1 Areal Case ........ 212
    3.5 Examples of Numerical Problems in Straightness and Flatness ........ 213
    3.6 Algorithms ........ 214
        3.6.1 Differences between Surface and Dimensional Metrology and Related Subjects ........ 214
            3.6.1.1 Least-Squares Evaluation of Geometric Elements ........ 214
            3.6.1.2 Optimization ........ 215
            3.6.1.3 Linear Least Squares ........ 215
            3.6.1.4 Eigenvectors and Singular Value Decomposition ........ 215
        3.6.2 Best-Fit Shapes ........ 215
            3.6.2.1 Planes ........ 215
            3.6.2.2 Circles and Spheres ........ 216
            3.6.2.3 Cylinders and Cones ........ 217
        3.6.3 Other Methods ........ 221
            3.6.3.1 Minimum Zone Method ........ 221
            3.6.3.2 Minimax Methods—Constrained Optimization ........ 221
            3.6.3.3 Simplex Methods ........ 222
    3.7 Basic Concepts in Linear Programming ........ 223
        3.7.1 General ........ 223
        3.7.2 Dual Linear Programs in Surface Metrology ........ 223
            3.7.2.1 Minimum Radius Circumscribing Limaçon ........ 224
        3.7.3 Minimum Zone, Straight Lines, and Planes ........ 225
        3.7.4 Minimax Problems ........ 227
            3.7.4.1 General Algorithmic Approach ........ 227
            3.7.4.2 Definitions ........ 227
    3.8 Fourier Transforms and the Fast Fourier Transform ........ 228
        3.8.1 General Properties ........ 228
        3.8.2 Fast Fourier Transform ........ 229
            3.8.2.1 Analytic Form ........ 229
            3.8.2.2 Practical Realization ........ 231
        3.8.3 General Considerations of Properties ........ 232
            3.8.3.1 Fourier Series of Real Data ........ 233
        3.8.4 Applications of Fourier Transforms in Surface Metrology ........ 233
            3.8.4.1 Fourier Transform for Non-Recursive Filtering ........ 233
            3.8.4.2 Power Spectral Analysis ........ 234
            3.8.4.3 Correlation ........ 234
            3.8.4.4 Other Convolutions ........ 234
            3.8.4.5 Interpolation ........ 234
            3.8.4.6 Other Analysis in Roughness ........ 234
            3.8.4.7 Roundness Analysis ........ 234
    3.9 Transformations in Surface Metrology ........ 235
        3.9.1 General ........ 235
        3.9.2 Hartley Transform ........ 235
        3.9.3 Walsh Functions–Square Wave Functions–Hadamard ........ 236
    3.10 Space–Frequency Functions ........ 237
        3.10.1 General ........ 237
        3.10.2 Ambiguity Function ........ 237
        3.10.3 Discrete Ambiguity Function (DAF) ........ 238
            3.10.3.1 Discrete Ambiguity Function Computation ........ 238
        3.10.4 Wigner Distribution Function W(x, ω) ........ 239
            3.10.4.1 Properties ........ 239
            3.10.4.2 Analytic Signals ........ 239
            3.10.4.3 Moments ........ 239
            3.10.4.4 Digital Wigner Distribution Applied to Surfaces ........ 240
            3.10.4.5 Examples of Wigner Distribution: Application to Signals—Waviness ........ 241
        3.10.5 Comparison of the Fourier Transform, the Ambiguity Function, and the Wigner Distribution ........ 242
        3.10.6 Gabor Transform ........ 243
        3.10.7 Wavelets in Surface Metrology ........ 244
    3.11 Surface Generation ........ 244
        3.11.1 Profile Generation ........ 244
        3.11.2 Areal Surface Generation ........ 246
    3.12 Atomistic Considerations and Simulations ........ 248
        3.12.1 General ........ 248
            3.12.1.1 Microscopic Mechanics ........ 248
            3.12.1.2 Macroscopic Propagating Surfaces ........ 248
        3.12.2 Mobile Cellular Automata (MCA) ........ 249
        3.12.3 Simulation Considerations ........ 251
        3.12.4 Molecular Dynamics ........ 251
    3.13 Summary ........ 252
    References ........ 253
Chapter 4. Measurement Techniques ........ 255
    4.1 Background ........ 255
        4.1.1 Some Early Dates of Importance in the Metrology and Production of Surfaces ........ 255
        4.1.2 Specification ........ 257
    4.2 Measurement Systems Stylus—Micro ........ 257
        4.2.1 The System ........ 257
            4.2.1.1 Stylus Characteristics ........ 258
        4.2.2 Tactile Considerations ........ 258
            4.2.2.1 General ........ 258
            4.2.2.2 Tip Dimension ........ 258
            4.2.2.3 Stylus Angle ........ 260
            4.2.2.4 Stylus Pressure-Static Case, Compliance, Stiffness, and Maximum Pressure ........ 260
            4.2.2.5 Elastic/Plastic Behavior ........ 261
            4.2.2.6 Stylus and Skid Damage Prevention Index ........ 262
            4.2.2.7 Pick-Up Dynamics and "Trackability" ........ 263
            4.2.2.8 Unwanted Resonances in Metrology Instruments ........ 265
            4.2.2.9 Conclusions about Mechanical Pick-Ups of Instruments Using the Conventional Approach ........ 267
        4.2.3 Relationship between Static and Dynamic Forces for Different Types of Surface ........ 268
            4.2.3.1 Reaction due to Periodic Surface ........ 268
            4.2.3.2 Reaction due to Random Surfaces ........ 268
            4.2.3.3 Statistical Properties of the Reaction and Their Significance: Autocorrelation Function and Power Spectrum of R(t) ........ 269
            4.2.3.4 System Properties and Their Relationship to the Surface: Damping and Energy Loss ........ 270
            4.2.3.5 Integrated Damping: System Optimization for Random Surface ........ 270
            4.2.3.6 Alternative Stylus Systems and Effect on Reaction/Random Surface ........ 271
            4.2.3.7 Criteria for Scanning Surface Instruments ........ 272
            4.2.3.8 Forms of the Pick-Up Equation ........ 272
        4.2.4 Mode of Measurement ........ 274
            4.2.4.1 Topography Measurement ........ 274
            4.2.4.2 Force Measurement ........ 275
            4.2.4.3 Open- and Closed-Loop Considerations ........ 276
            4.2.4.4 Spatial Domain Instruments ........ 277
        4.2.5 Other Stylus Configurations ........ 278
            4.2.5.1 High-Speed Area Tracking Stylus (a Micro Equivalent of the Atomic Scanning Probe Family) ........ 278
            4.2.5.2 Multi-Function Stylus Systems ........ 279
            4.2.5.3 Pick-Up and Transducer System ........ 280
        4.2.6 Metrology and Various Mechanical Issues ........ 281
            4.2.6.1 Generation of Reference Surface ........ 282
            4.2.6.2 Intrinsic Datum—Generation and Properties of the Skid Datum ........ 284
            4.2.6.3 Stylus Instruments Where the Stylus Is Used as an Integrating Filter ........ 289
            4.2.6.4 Space Limitations of "References" Used in Roundness Measurement ........ 290
        4.2.7 Areal (3D) Mapping of Surfaces Using Stylus Methods ........ 294
            4.2.7.1 General Problem ........ 294
            4.2.7.2 Mapping ........ 295
            4.2.7.3 Criteria for Areal Mapping ........ 295
            4.2.7.4 Contour and Other Maps of Surfaces ........ 301
    4.3 Measuring Instruments Stylus—Nano/Atomic Scale ........ 301
        4.3.1 Scanning Probe Microscopes (SPM) [or (SXM) for Wider Variants] ........ 301
            4.3.1.1 History ........ 301
            4.3.1.2 Background ........ 303
        4.3.2 General Characteristics ........ 304
            4.3.2.1 The Tip ........ 305
            4.3.2.2 The Cantilever ........ 309
            4.3.2.3 Simple Scanning Systems for SPM ........ 310
            4.3.2.4 Traceability ........ 311
            4.3.2.5 General Comments ........ 311
            4.3.2.6 Other Scanning Microscopes ........ 312
        4.3.3 Operation and Theory of the Scanning Probe Microscope (SPM) ........ 313
            4.3.3.1 Scanning Tunneling Microscope (STM) ........ 313
            4.3.3.2 The Atomic Force Microscope ........ 316
        4.3.4 Interactions ........ 318
            4.3.4.1 Tip–Sample ........ 318
            4.3.4.2 Cantilever—Sample (e.g., Tapping Mode) ........ 319
    4.4 Optical Techniques ........ 322
        4.4.1 General ........ 322
            4.4.1.1 Comparison between Stylus and Optical Methods ........ 322
            4.4.1.2 Properties of the Focused Spot ........ 325
        4.4.2 Optical Followers ........ 326
        4.4.3 Hybrid Microscopes ........ 330
            4.4.3.1 Confocal Microscopes ........ 330
            4.4.3.2 Near-Field Scanning Optical Microscopy (NSOM) ........ 333
            4.4.3.3 Heterodyne Confocal Systems ........ 333
        4.4.4 Oblique Angle Methods ........ 335
        4.4.5 Interference Methods ........ 336
            4.4.5.1 Phase Detection Systems ........ 336
            4.4.5.2 Spatial and Temporal Coherence ........ 336
            4.4.5.3 Interferometry and Surface Metrology ........ 336
            4.4.5.4 Heterodyne Methods ........ 340
            4.4.5.5 Other Methods in Interferometry ........ 342
            4.4.5.6 White Light Interferometry—Short Coherence Interferometry ........ 343
            4.4.5.7 White Light Interferometer—Thin Transparent Film Measurement ........ 346
            4.4.5.8 Absolute Distance Methods with Multi-Wavelength Interferometry ........ 349
        4.4.6 Moiré Methods ........ 351
            4.4.6.1 General ........ 351
            4.4.6.2 Strain Measurement ........ 352
            4.4.6.3 Moiré Contouring ........ 352
            4.4.6.4 Shadow Moiré ........ 352
            4.4.6.5 Projection Moiré ........ 352
            4.4.6.6 Summary ........ 353
        4.4.7 Holographic Techniques ........ 353
            4.4.7.1 Introduction ........ 353
            4.4.7.2 Computer Generated Holograms ........ 355
            4.4.7.3 Conoscopic Holography ........ 356
            4.4.7.4 Holographic Interferometry ........ 357
        4.4.8 Speckle Methods ........ 357
        4.4.9 Diffraction Methods ........ 365
            4.4.9.1 General ........ 365
            4.4.9.2 Powder and Chip Geometry ........ 372
            4.4.9.3 Vector Theory ........ 373
        4.4.10 Scatterometers (Glossmeters) ........ 373
        4.4.11 Scanning and Miniaturization ........ 377
            4.4.11.1 Scanning System ........ 377
            4.4.11.2 Remote and Miniature System ........ 378
        4.4.12 Flaw Detection by Optical Means ........ 380
            4.4.12.1 General Note ........ 380
            4.4.12.2 Transform Plane Methods ........ 380
            4.4.12.3 Scanning for Flaws ........ 382
            4.4.12.4 "Whole-Field" Measurement ........ 383
        4.4.13 Comparison of Optical and Stylus Trends ........ 384
            4.4.13.1 General Engineering Surface Metrology ........ 384
            4.4.13.2 General Optical Comparison ........ 385
    4.5 Capacitance and Other Techniques ........ 385
        4.5.1 Areal Assessment ........ 385
        4.5.2 Scanning Capacitative Microscopes ........ 386
        4.5.3 Capacitance Proximity Gauge ........ 387
        4.5.4 Inductance ........ 387
        4.5.5 Impedance Technique—Skin Effect ........ 388
        4.5.6 Other Methods ........ 388
            4.5.6.1 General ........ 388
            4.5.6.2 Friction Devices ........ 388
            4.5.6.3 Liquid Methods ........ 388
            4.5.6.4 Pneumatic Methods ........ 388
            4.5.6.5 Thermal Method ........ 389
            4.5.6.6 Ultrasonics ........ 389
        4.5.7 Summary ........ 392
    4.6 Electron Microscopy, Photon Microscopy, Raman Spectrometry ........ 392
        4.6.1 General ........ 392
        4.6.2 Scanning Electron Microscope (SEM) ........ 394
        4.6.3 Energy Dispersive X-Ray Spectrometer ........ 397
        4.6.4 Transmission Electron Microscope (TEM) ........ 397
        4.6.5 Photon Tunneling Microscopy (PTM) ........ 399
        4.6.6 Raman Spectroscopy ........ 400
    4.7 Comparison of Techniques—General Summary ........ 401
    4.8 Some Design Considerations ........ 403
        4.8.1 Design Criteria for Instrumentation ........ 403
        4.8.2 Kinematics ........ 404
        4.8.3 Pseudo-Kinematic Design ........ 406
        4.8.4 Mobility ........ 407
        4.8.5 Linear Hinge Mechanisms ........ 407
        4.8.6 Angular Motion Flexures ........ 409
        4.8.7 Force and Measurement Loops ........ 410
            4.8.7.1 Metrology Loop ........ 410
        4.8.8 Instrument Capability Improvement ........ 412
        4.8.9 Alignment Errors ........ 413
        4.8.10 Abbé Errors ........ 414
        4.8.11 Other Mechanical Considerations ........ 415
        4.8.12 Systematic Errors and Non-Linearities ........ 415
        4.8.13 Material Selection ........ 416
        4.8.14 Noise ........ 417
            4.8.14.1 Noise Position ........ 420
            4.8.14.2 Probe System Possibilities ........ 420
            4.8.14.3 Instrument Electrical and Electronic Noise ........ 420
        4.8.15 Replication ........ 421
    References ........ 422

Chapter 5. Standardization–Traceability–Uncertainty ........ 429
    5.1 Introduction ........ 429
    5.2 Nature of Errors ........ 429
        5.2.1 Systematic Errors ........ 429
        5.2.2 Random Errors ........ 430
    5.3 Deterministic or Systematic Error Model ........ 430
        5.3.1 Sensitivity ........ 430
5.3.2 Readability............ 430
5.3.3 Calibration............ 430
5.4 Basic Components of Accuracy Evaluation............ 430
5.4.1 Factors Affecting the Calibration Standard............ 430
5.4.2 Factors Affecting the Workpiece............ 430
5.4.3 Factors Affecting the Person............ 430
5.4.4 Factors Affecting the Environment............ 430
5.4.5 Classification of Calibration Standards............ 431
5.4.6 Calibration Chain............ 431
5.4.7 Time Checks to Carry Out Procedure............ 431
5.5 Basic Error Theory for a System............ 431
5.6 Propagation of Errors............ 432
5.6.1 Deterministic Errors............ 432
5.6.2 Random Errors............ 432
5.7 Some Useful Statistical Tests for Surface Metrology............ 433
5.7.1 Confidence Intervals for Any Parameter............ 433
5.7.2 Tests for the Mean Value of a Surface—The Student t Test............ 434
5.7.3 Tests for the Standard Deviation—The χ² Test............ 435
5.7.4 Goodness of Fit............ 435
5.7.5 Tests for Variance—The F Test............ 435
5.7.6 Tests of Measurements against Limits—16% Rule............ 435
5.7.7 Measurement of Relevance—Factorial Design............ 436
5.7.7.1 The Design............ 436
5.7.7.2 The Interactions............ 437
5.7.8 Lines of Regression............ 438
5.7.9 Methods of Discrimination............ 438
5.8 Uncertainty in Instruments—Calibration in General............ 439
5.9 Calibration of Stylus Instruments............ 440
5.9.1 Stylus Calibration............ 441
5.9.1.1 Cleaning............ 441
5.9.1.2 Use of Microscopes............ 441
5.9.1.3 Use of Artifacts............ 442
5.9.1.4 Stylus Force Measurement............ 443
5.9.2 Calibration of Vertical Amplification for Standard Instruments............ 444
5.9.2.1 Gauge Block Method............ 444
5.9.2.2 Reason's Lever Arm............ 444
5.9.2.3 Sine Bar Method............ 444
5.9.2.4 Accuracy Considerations............ 445
5.9.2.5 Comparison between Stylus and Optical Step Height Measurement for Standard Instruments............ 445
5.9.3 Some Practical Standards (Artifacts) and ISO Equivalents............ 447
5.9.3.1 Workshop Standards............ 447
5.9.3.2 ISO Standards for Instrument Calibration............ 447
5.9.4 Calibration of Transmission Characteristics (Temporal Standards)............ 450
5.9.5 Filter Calibration Standards............ 452
5.9.6 Step Height for Ultra Precision and Scanning Probe Microscopes............ 454
5.9.7 X-Ray Methods—Step Height............ 454
5.9.8 X-Rays—Angle Measurement............ 457
5.9.9 Traceability and Uncertainties of Nanoscale Surface Instruments—"Metrological" Instruments............ 458
5.9.9.1 M³ NIST Instrument............ 459
5.9.9.2 Uncertainties General and Nanosurf IV NPL............ 459
5.10 Calibration of Form Instruments............ 462
5.10.1 Magnitude............ 462
5.10.1.1 Magnitude of Diametral Change for Tilted Cylinder............ 462
5.10.1.2 Shafts and Holes Engaged by a Sharp Stylus............ 463
5.10.1.3 Shaft Engaged by a Hatchet............ 463
5.10.1.4 Hole Engaged by a Hatchet............ 463
5.10.2 Separation of Errors—Calibration of Roundness and Form............ 463
5.10.3 General Errors due to Motion............ 467
5.10.3.1 Radial Motion............ 468
5.10.3.2 Face Motion............ 468
5.10.3.3 Error Motion—General Case............ 468
5.10.3.4 Fundamental and Residual Error Motion............ 469
5.10.3.5 Error Motion versus Run-Out (or TIR)............ 470
5.10.3.6 Fixed Sensitive Direction Measurements............ 470
5.10.3.7 Considerations on the Use of the Two-Gauge-Head System for a Fixed Sensitive Direction............ 470
5.10.3.8 Other Radial Error Methods............ 471
5.11 Variability of Surface Parameters............ 472
5.12 GPS System—International and National Standards............ 474
5.12.1 General............ 474
5.12.2 Geometrical Product Specification (GPS)............ 475
5.12.3 Chain of Standards within the GPS............ 475
5.12.3.1 Explanation............ 475
5.12.3.2 Specifics within Typical Box............ 476
5.12.3.3 Position in Matrix............ 477
5.12.3.4 Proposed Extension to Matrix............ 478
5.12.3.5 Duality Principle............ 480
5.12.4 Surface Standardization—Background............ 480
5.12.5 Role of Technical Specification Documents............ 482
5.12.6 Selected List of International Standards Applicable to Surface Roughness Measurement: Methods; Parameters; Instruments; Comparison Specimens............ 483
5.12.7 International Standards (Equivalents, Identicals, and Similars)............ 483
5.12.8 Category Theory in the Use of Standards and Other Specifications in Manufacture, in General, and in Surface Texture, in Particular............ 485
5.13 Specification on Drawings............ 486
5.13.1 Surface Roughness............ 486
5.13.2 Indications Generally—Multiple Symbols............ 486
5.13.3 Reading the Symbols............ 487
5.13.4 General and Other Points............ 487
5.14 Summary............ 487
References............ 489

Chapter 6. Surfaces and Manufacture............ 493

6.1 Introduction............ 493
6.2 Manufacturing Processes............ 493
6.2.1 General............ 493
6.3 Cutting............ 493
6.3.1 Turning............ 493
6.3.1.1 General............ 493
6.3.1.2 Finish Machining............ 494
6.3.1.3 Effect of Tool Geometry—Theoretical Surface Finish............ 495
6.3.1.4 Other Surface Roughness Effects in Finish Machining............ 499
6.3.1.5 Tool Wear............ 500
6.3.1.6 Chip Formation............ 501
6.3.2 Diamond Turning............ 501
6.3.3 Milling and Broaching............ 502
6.3.3.1 General............ 502
6.3.3.2 Surface Roughness............ 503
6.3.4 Dry Cutting............ 505
6.3.4.1 General............ 505
6.3.4.2 Cutting Mechanisms............ 505
6.4 Abrasive Processes............ 508
6.4.1 General............ 508
6.4.2 Types of Grinding............ 511
6.4.3 Comments on Grinding............ 512
6.4.3.1 Nature of the Grinding Process............ 512
6.4.4 Centerless Grinding............ 513
6.4.4.1 General............ 513
6.4.4.2 Important Parameters for Roughness and Roundness............ 513
6.4.4.3 Roundness Considerations............ 514
6.4.5 Cylindrical Grinding............ 515
6.4.5.1 Spark-Out............ 515
6.4.5.2 Elastic Effects............ 516
6.4.6 Texture Generated in Grinding............ 516
6.4.6.1 Chatter............ 517
6.4.7 Other Types of Grinding............ 518
6.4.7.1 Comments on Grinding............ 519
6.4.7.2 Slow Grinding............ 519
6.4.8 Theoretical Comments on Roughness and Grinding............ 520
6.4.9 Honing............ 523
6.4.10 Polishing and Lapping............ 524
6.5 Unconventional Processes............ 525
6.5.1 General............ 525
6.5.2 Ultrasonic Machining............ 526
6.5.3 Magnetic Float Polishing............ 527
6.5.4 Physical and Chemical Machining............ 527
6.5.4.1 Electrochemical Machining (ECM)............ 527
6.5.4.2 Electrolytic Grinding............ 528
6.5.4.3 Electrodischarge Machining (EDM)............ 528
6.6 Forming Processes............ 528
6.6.1 General............ 528
6.6.2 Surface Texture and the Plastic Deformation Processes............ 529
6.6.3 Friction and Surface Texture in Material Movement............ 531
6.6.4 Ballizing............ 532
6.7 Effect of Scale of Size in Manufacture: Macro to Nano to Atomic Processes............ 532
6.7.1 General............ 532
6.7.2 Nanocutting............ 533
6.7.2.1 Mechanism of Nanocutting............ 533
6.7.3 Nanomilling............ 533
6.7.3.1 Scaling Experiments............ 534
6.7.3.2 General Ball Nose Milling............ 536
6.7.3.3 Surface Finish............ 537
6.7.4 Nanofinishing by Grinding............ 538
6.7.4.1 Comments on Nanogrinding............ 538
6.7.4.2 Brittle Materials and Ductile Grinding............ 540
6.7.5 Micropolishing............ 542
6.7.5.1 Elastic Emission Machining............ 542
6.7.5.2 Mechanical–Chemical Machining............ 543
6.7.6 Microforming............ 544
6.7.7 Three Dimensional Micromachining............ 545
6.7.8 Atomic-Scale Machining............ 546
6.7.8.1 General............ 546
6.7.8.2 Electron Beam Methods............ 546
6.7.8.3 Ion Beam Machining............ 547
6.7.8.4 Focused Ion Beam—String and Level Set Approach............ 550
6.7.8.5 Collimated (Shower) Methods—Ion and Electron Beam Techniques............ 551
6.7.8.6 General Comment on Atomic-Type Processes............ 552
6.7.8.7 Molecular Beam Epitaxy............ 552
6.8 Structured Surface Manufacture............ 553
6.8.1 Distinction between Conventional and Structured Surfaces............ 553
6.8.2 Structured Surfaces Definitions............ 554
6.8.3 Macro Examples............ 555
6.8.4 Micromachining of Structured Surfaces............ 558
6.8.4.1 General............ 558
6.8.4.2 Some Specific Methods of Micro-Machining Structured Surfaces............ 558
6.8.4.3 Hard Milling/Diamond Machining............ 559
6.8.4.4 Machining Restriction Classification............ 560
6.8.4.5 Diamond Micro Chiseling............ 561
6.8.4.6 Other Factors Including Roughness Scaling............ 563
6.8.4.7 Polishing of Micromolds............ 564
6.8.5 Energy Assisted Micromachining............ 565
6.8.5.1 Radiation-Texturing Lasers............ 565
6.8.5.2 Ion Beam Microtexturing............ 567
6.8.6 Some Other Methods of Micro and Nanostructuring............ 568
6.8.6.1 Hierarchical Structuring............ 568
6.8.6.2 Plasma Structuring............ 569
6.8.7 Structuring of Micro-Lens Arrays............ 569
6.8.7.1 General Theory............ 569
6.8.7.2 Mother Lens Issues............ 571
6.8.7.3 Typical Fabrication of Microarrays............ 572
6.8.7.4 Fabrication of Micro Arrays—All Liquid Method............ 572
6.8.8 Pattern Transfer—Use of Stamps............ 573
6.8.9 Self Assembly of Structured Components, Bio Assembly............ 575
6.8.10 Chemical Production of Shapes and Forms—Fractals............ 577
6.8.11 Anisotropic Chemical Etching of Material for Structure............ 579
6.8.12 Use of Microscopes for Structuring Surfaces............ 580
6.8.13 General Nano-Scale Patterning............ 583
6.9 Manufacture of Free-Form Surfaces............ 583
6.9.1 Optical Complex Surfaces............ 583
6.9.2 Ball End Milling............ 583
6.9.3 Micro End-Milling............ 586
6.9.4 Free Form Polishing............ 587
6.9.5 Hybrid Example............ 589
6.10 Mathematical Processing of Manufacture—Finite Element Analysis (FE), MD, NURBS............ 590
6.10.1 General............ 590
6.10.2 Finite Element Analysis (FE)............ 590
6.10.3 Molecular Dynamics............ 590
6.10.4 Multi-Scale Dynamics............ 591
6.10.5 NURBS............ 593
6.11 The Subsurface and the Interface............ 594
6.11.1 General............ 594
6.11.2 Brittle/Ductile Transition in Nano-Metric Machining............ 595
6.11.3 Kinematic Considerations............ 596
6.12 Surface Integrity............ 596
6.12.1 Surface Effects Resulting from the Machining Process............ 596
6.12.2 Surface Alterations............ 597
6.12.3 Residual Stress............ 597
6.12.3.1 General............ 597
6.12.3.2 Grinding............ 599
6.12.3.3 Turning............ 601
6.12.3.4 Milling............ 603
6.12.3.5 Shaping............ 603
6.12.3.6 General Comment............ 603
6.12.4 Measurement of Stresses............ 605
6.12.4.1 General............ 605
6.12.4.2 Indirect Methods............ 605
6.12.4.3 Direct Methods............ 606
6.12.5 Subsurface Properties Influencing Function............ 607
6.12.5.1 General............ 607
6.12.5.2 Influences of Residual Stress............ 607
6.13 Surface Geometry—A Fingerprint of Manufacture............ 609
6.13.1 General............ 609
6.13.2 Use of Random Process Analysis............ 610
6.13.2.1 Single-Point-Cutting Turned Parts............ 610
6.13.2.2 Abrasive Machining............ 612
6.13.3 Space-Frequency Functions (the Wigner Distribution)............ 613
6.13.4 Application of Wavelet Function............ 614
6.13.5 Non-Linear Dynamics—Chaos Theory............ 616
6.13.6 Application of Non-Linear Dynamics—Stochastic Resonance............ 619
6.14 Surface Finish Effects in Manufacture of Microchip Electronic Components............ 620
6.15 Discussion and Conclusions............ 622
References............ 622

Chapter 7. Surface Geometry and Its Importance in Function............ 629

7.1 Introduction............ 629
7.1.1 The Function Map............ 629
7.1.1.1 Summary of the "Function Map" Concept............ 630
7.1.1.2 Chapter Objective and Function of the Concept Map............ 631
7.1.2 Nature of Interaction............ 632
7.2 Two-Body Interaction—The Static Situation............ 633
7.2.1 Contact............ 633
7.2.1.1 Point Contact............ 635
7.2.2 Macroscopic Behavior............ 635
7.2.2.1 Two Spheres in Contact............ 636
7.2.2.2 Two Cylinders in Contact............ 637
7.2.2.3 Crossed Cylinders at Any Angle............ 638
7.2.2.4 Sphere on a Cylinder............ 638
7.2.2.5 Sphere inside a Cylinder............ 638
7.2.3 Microscopic Behavior............ 638
7.2.3.1 General............ 638
7.2.3.2 Elastic Contact............ 642
7.2.3.3 Elastic/Plastic Balance—Plasticity Index............ 663
7.2.3.4 Size and Surface Effects on Mechanical and Material Properties............ 675
7.2.4 Functional Properties of Normal Contact............ 678
7.2.4.1 General............ 678
7.2.4.2 Stiffness............ 679
7.2.4.3 Normal Contact Other Phenomena—Creep and Seals............ 682
7.2.4.4 Adhesion............ 684
7.2.4.5 Thermal Conductivity............ 689
7.3 Two-Body Interactions—Dynamic Behavior............ 694
7.3.1 General............ 694
7.3.2 Friction............ 695
7.3.2.1 Friction Mechanisms—General............ 695
7.3.2.2 Friction Modeling in Simulation............ 696
7.3.2.3 Dry Friction............ 702
7.3.2.4 Wet Friction—Clutch Application............ 705
7.3.2.5 Atomistic Considerations—Simulations............ 706
7.3.2.6 Rubber Friction and Surface Effects............ 709
7.3.2.7 Other Applications and Considerations............ 710
7.3.2.8 Thermo-Mechanical Effects of Friction Caused by Surface Interaction............ 712
7.3.2.9 Friction and Wear Comments—Dry Conditions............ 715
7.3.3 Wear—General............ 715
7.3.3.1 Wear Classification............ 716
7.3.3.2 Wear Measurement from Surface Profilometry............ 717
7.3.3.3 Wear Prediction Models............ 718
7.3.3.4 Abrasive Wear and Surface Roughness............ 720
7.3.4 Lubrication—Description of the Various Types with Emphasis on the Effect of Surfaces............ 722
7.3.4.1 General............ 722
7.3.4.2 Hydrodynamic Lubrication and Surface Geometry............ 722
7.3.4.3 Two Body Lubrication under Pressure: Elasto Hydrodynamic Lubrication and the Influence of Roughness............ 733
7.3.4.4 Mixed Lubrication and Roughness............ 743
7.3.4.5 Boundary Lubrication............ 743
7.3.4.6 Nanolubrication—A Summary............ 748
7.3.5 Surface Geometry Modification for Function............ 749
7.3.5.1 Texturing and Lubrication............ 749
7.3.5.2 Micro Flow and Shape of Channels............ 754
7.3.5.3 Shakedown, Surface Texture, and Running In............ 755
7.3.6 Surface Failure Modes............ 760
7.3.6.1 The Function–Lifetime, Weibull Distribution............ 760
7.3.6.2 Scuffing Problem............ 761
7.3.6.3 Rolling Fatigue (Pitting and Spalling) Problem............ 762
7.3.6.4 3D Body Motion and Geometric Implications............ 764
7.3.7 Vibration Effects............ 769
7.3.7.1 Dynamic Effects—Change of Radius............ 769
7.3.7.2 Normal Impact of Rough Surfaces............ 770
7.3.7.3 Squeeze Films and Roughness............ 772
7.3.7.4 Fretting and Fretting Fatigue, Failure Mode............ 776
7.4 One-Body Interactions............ 778
7.4.1 General: Mechanical, Electrical, Chemical............ 778
7.4.1.1 Fatigue............ 778
7.4.1.2 Corrosion............ 783
7.4.2 One Body with Radiation (Optical): The Effect of Roughness on the Scattering of Electromagnetic and Other Radiation............ 786
7.4.2.1 Optical Scatter—General............ 786
7.4.2.2 General Optical Approach............ 787
7.4.2.3 Scatter from Deterministic Surfaces............ 791
7.4.2.4 Summary of Results, Scalar and Geometrical Treatments............ 792
7.4.2.5 Mixtures of Two Random Surfaces............ 793
7.4.2.6 Other Considerations on Light Scatter............ 794
7.4.2.7 Scattering from Non-Gaussian Surfaces and Other Effects............ 800
7.4.2.8 Aspherics and Free Form Surfaces............ 804
7.4.3 Scattering by Different Sorts of Waves............ 805
7.4.3.1 General............ 805
7.4.3.2 Scattering from Particles and the Influence of Roughness............ 807
7.4.3.3 Thin Films—Influence of Roughness............ 808
7.5 System Function Assembly............ 809
7.5.1 Surface Geometry, Tolerances, and Fits............ 809
7.5.1.1 Tolerances............ 809
7.6 Discussion............ 811
7.6.1 Profile Parameters............ 812
7.6.2 Areal (3D) Parameters............ 815
7.6.2.1 Comments on Areal Parameters............ 815
7.6.3 Amplitude and Spacing Parameters............ 815
7.6.4 Comments on Textured Surface Properties............ 816
7.6.5 Function Maps and Surfaces............ 817
7.6.6 Systems Approach............ 819
7.6.7 Scale of Size and Miniaturization Effects of Roughness............ 822
7.7 Conclusions............ 825
References............ 826

Chapter 8. Surface Geometry, Scale of Size Effects, Nanometrology............ 837

8.1 Introduction............ 837
8.1.1 Scope of Nanotechnology............ 837
8.1.2 Nanotechnology and Engineering............ 837
8.2 Effect of Scale of Size on Surface Geometry............ 838
8.2.1 Metrology at the Nanoscale............ 838
8.2.1.1 Nanometrology............ 838
8.2.1.2 Nanometer Implications of Geometric Size............ 839
8.2.1.3 Geometric Features and the Scale of Size............ 839
8.2.1.4 How Roughness Changes with Scale............ 839
8.2.1.5 Shape and Scale of Size............ 841
8.2.1.6 General Comments on Scale of Size Effects............ 843
8.3 Scale of Size, Surface Geometry, and Function............ 845
8.3.1 Effects on Mechanical and Material Properties............ 845
8.3.1.1 Dynamic Considerations—Balance of Forces............ 845
8.3.1.2 Overall Functional Dependence on Scale of Size............ 846
8.3.1.3 Some Structural Effects of Scale of Size in Metrology............ 847
8.3.1.4 Nano-Physical Effects............ 848
8.3.1.5 Hierarchical Considerations............ 850
8.3.2 Multiscale Effects—Nanoscale Affecting Macroscale............ 851
8.3.2.1 Archard Legacy—Contact and Friction............ 851
8.3.2.2 Multiscale Surfaces............ 852
8.3.2.3 Langevin Properties............ 854
8.3.3 Molecular and Atomic Behavior............ 857
8.3.3.1 Micromechanics of Friction—Effect of Nanoscale Roughness—General Comments............ 857
8.3.3.2 Micromechanics of Friction—Nanoscale Roughness............ 857
8.3.3.3 Atomistic Considerations and Simulations............ 859
8.3.3.4 Movable Cellular Automata (MCA)............ 859
8.3.4 Nano/Microshape and Function............ 862
8.3.4.1 Microflow............ 862
8.3.4.2 Boundary Lubrication............ 862
8.3.4.3 Coatings............ 864
8.3.4.4 Mechanical Properties of Thin Boundary Layer Films............ 865
8.3.5 Nano/Micro Structured Surfaces and Elasto-Hydrodynamic Lubrication (EHD)............ 867
8.3.5.1 Highly Loaded Non-Conformal Surfaces............ 867
8.3.5.2 Summary of Micro/Nano Texturing and Extreme Pressure Lubrication............ 868
8.3.6 Nanosurfaces—Fractals............ 869
8.3.6.1 Wave Scatter Characteristics............ 869
8.3.6.2 Fractal Slopes, Diffraction, Subfractal Model............ 871
8.4 Scale of Size and Surfaces in Manufacture............ 872
8.4.1 Nano Manufacture—General............ 872
8.4.1.1 Requirements............ 872
8.4.1.2 Issues............ 872
8.4.1.3 Solution............ 872
8.4.2 Nanomachinability............ 872
8.4.2.1 Abrasive Methods............ 872
8.4.2.2 Alternative Abrasion with Chemical Action............ 874
8.4.2.3 Nanocutting............ 875
8.4.2.4 Micro/Nano-Ductile Methods, Microforming and Scale of Size Dependence............ 876
8.4.3 Atomic-Scale Machining............ 878
8.4.3.1 General............ 878
8.4.3.2 Atomic-Scale Processing............ 878
8.4.3.3 Electron Beam (EB) Methods............ 879
8.4.3.4 Ion Beam Machining............ 879
8.4.3.5 General Comment on Atomic-Type Processes............ 882
8.4.4 Chemical Production of Shapes and Forms............ 883
8.4.5 Use of Microscopes for Structuring Surfaces............ 885
8.4.6 General Nanoscale Patterning............ 886
8.5 Nano Instrumentation............ 887
8.5.1 Signal from Metrology Instruments as Function of Scale............ 887
8.5.2 Instrument Trends—Resolution and Bandwidth............ 887
8.5.3 Scanning Probe Microscopes (SPM), Principles, Design, and Problems............ 888
8.5.3.1 History............ 888
8.5.3.2 The Probe............ 891
8.5.3.3 The Cantilever............ 896
8.5.4 Interactions............ 898
8.5.4.1 Tip–Sample Interface............ 898
8.5.4.2 Cantilever–Sample Interaction (e.g., Tapping Mode)............ 900
8.5.5 Variants on AFM............ 902
8.5.5.1 General Problem............ 902
8.5.5.2 Noise............ 903
8.5.5.3 Cantilever Constants............ 905
8.5.5.4 SPM Development............ 905
8.5.6 Electron Microscopy............ 908
8.5.6.1 General............ 908
8.5.6.2 Reaction of Electrons with Solids............ 909
8.5.6.3 Scanning Electron Microscope (SEM)............ 909
8.5.6.4 Transmission Electron Microscope (TEM)............ 911
8.5.7 Photon Interaction............ 912
8.5.7.1 Scanning Near Field Optical Microscope (SNOM)............ 912
8.5.7.2 Photon Tunneling Microscopy (PTM)............ 912
8.6 Operation and Design Considerations............ 913
8.6.1 General Comment on Nanometrology Measurement............ 913
8.6.2 The Metrological Situation............ 913
8.6.3 Some Prerequisites for Nanometrology Instruments............ 913
8.7 Standards and Traceability............ 915
8.7.1 Traceability............ 915
8.7.2 Calibration............ 915
8.7.2.1 General............ 915
8.7.2.2 Working Standards and Special Instruments............ 916
8.7.2.3 Nano-Artifact Calibration—Some Probe and Detector Problems............ 918
8.7.2.4 Dynamics of Surface Calibration at Nanometer Level............ 921
8.7.3 Comparison of Instruments: Atomic, Beam and Optical............ 922
8.8 Measurement of Typical Nanofeatures............ 922
8.8.1 General............ 922
8.8.1.1 Thin Films............ 922
8.8.2 Roughness............ 923
8.8.3 Some Interesting Cases............ 924
8.8.3.1 STM Moiré Fringes............ 924
8.8.3.2 Kelvin Probe Force Microscope (KPFM)............ 925
8.8.3.3 Electrostatic Force Microscope (EFM)............ 925
8.8.3.4 Atomic Force Microscope (AFM)—Nanoscale Metrology with Organic Specimens............ 926
8.8.4 Extending the Range and Mode of Nano Measurement............ 928
8.8.4.1 Using Higher Frequencies............ 928
8.8.4.2 Increasing the Range............ 929
8.9 Measuring Length to Nanoscale with Interferometers and Other Devices............ 930
8.9.1 Optical Methods............ 930
8.9.1.1 Heterodyne............ 930
8.9.2 Capacitative Methods............ 930
8.10 Nanogeometry in Macro Situations............ 932
8.10.1 Freeform Macro Geometry, Nanogeometry and Surface Structure............ 932
8.11 Discussion and Conclusions............ 933
8.11.1 SPM Development............ 933
8.11.2 Conclusion about Nanometrology............ 933
References............ 934

Chapter 9. General Comments............ 941

9.1 Introduction............ 941
9.2 Characterization............ 941
9.3 Processing, Operations, and Simulations............ 942
9.4 Measurement Techniques............ 942
9.5 Traceability, Standardization, Uncertainty............ 943
9.6 Surfaces and Manufacture............ 943
9.7 Surface Geometry and Performance............ 944
9.8 Nanometrology............ 944
9.9 Overview............ 945
Glossary ............................................................. 947
Index ................................................................ 957
Preface

There has been a shift in emphasis in surface and nanometrology since the first edition of this book was published. Previously it was considered acceptable to deal with miniaturization and nanotechnology separately from traditional surface geometry and its metrology, but this is no longer the case: the extent to which they are inextricably linked is now very clear. Nanoscale investigations on the molecular and sometimes atomic level are often fundamental to understanding and even controlling micro and macro behavior. Also, many macrosized objects have nanoscale detail and are made to nanoscale tolerances as in free-form applications. A principal objective of this book has been to determine how the reduction in scale of size from macro to nano has affected all aspects of surface use and manufacture as well as their measurement. This shift has extended through characterization, standardization, manufacture, and performance. So, instead of nanotechnology being reserved for one chapter as in the first edition, it now permeates Chapters 2 through 7. Chapter 8 collates the results. Although many of the original topics are preserved, they are approached with this change of emphasis in mind. This interdependence of scale has had profound practical implications because it is difficult and expensive to
carry out experiments encompassing more than one scale, with the result that the number of truly significant practical experiments is falling and the number of simulations is increasing—dramatically, sometimes with detrimental effects. There is the danger that the physics of an experiment can get blurred. Also, recourse to common sense is sometimes not an option because of the shortfall in practical experience of the investigator, however enthusiastic he or she may be. Simulations do not provide independent feedback. A new word coined here is “simpirical,” which means the use of computer simulations to verify theory rather than using practical, empirical evidence. An aim of this edition, therefore, has been to point out, wherever appropriate, and clarify, if possible, extreme cases of this trend. There are many new challenges confronting the surface metrologist of today. New types of structured and free-form surface, changes in behavior with scale, the use of soft materials and new tools of characterization, as well as a new generation of atomic instrumentation, are spawning more problems than ever before. It is an exciting era in surface and nanometrology!
Acknowledgments

I am grateful for the discussions I have had with Professors Xiang Jiang and Paul Scott of Huddersfield University’s Centre for Precision Technologies, UK, who have given me continual encouragement throughout the project. I list below, in no particular order, a number of people who have let me have comprehensive use of their figures, which has enhanced the publication considerably. There are authors of figures used in the manuscript who are not mentioned below. These are fully acknowledged in the text and referenced where appropriate in the figure captions. Professors Jiang and Scott from Huddersfield University, Professor R. Leach and Dr. Yacoot from the National Physical Laboratory, D. Mansfield of Taylor Hobson
(Ametek) Ltd., and Professor G. Smith formerly of the Southampton Institute, all from the UK. Drs. C. Evans and P. de Groot from Zygo Corporation and Jim Bryan from the USA. Dr. H. Haitjema of Mitutoyo Research Centre, Netherlands. Professor G. Goch from Bremen University, Germany, Professor G. Zhang of Tianjin University, PR China, and especially Professor E. Brinksmeier from Bremen University and Dr. B. Persson from IFF, FZ-Jülich, Germany. All opinions, suggestions, and conclusions in this book are entirely my own. While I have made every effort to be factually correct, if any errors have crept in, the fault is mine and I apologize.
1 Introduction—Surface and Nanometrology

1.1 General

As the title suggests this book is about surfaces and their measurement. This, in itself, is not a complete objective. There is little to be gained by just measurement unless it is for a purpose. So, the book also addresses the problems of why the surface is being measured, what needs to be identified and quantified and what is the effect of its manufacture. Not only surfaces involved in general engineering are examined but, increasingly, surfaces at the nanometer level, in all covering a wide range of object sizes, from telescope mirrors down to molecular and atomic structures. Enough mathematics and physics are used to ensure that the book is comprehensive but hopefully not too much to muddy the issues. What the book does not do is to attempt to provide full details of applications or of manufacturing processes but only those related to the role and relevance of the surface.
1.2 SURFACE METROLOGY

Surface metrology from an engineering standpoint is the measurement of the deviations of a workpiece from its intended shape, that is, from the shape specified on the drawing. It is taken to include features such as deviations from roundness, straightness, flatness, cylindricity, and so on. It also includes the measurement of the marks left on the workpiece in trying to get to the shape, i.e., the surface texture left by the machining process and also surface deviations on naturally formed objects. Surface metrology is essentially a mechanical engineering discipline but it will soon be obvious that as the subject has developed in recent years it has broadened out to include aspects of many other disciplines, and in particular that of nanotechnology in which it is now deeply interwoven. Perhaps the best way to place the role of engineering surface metrology is to consider just what needs to be measured in order to enable a workpiece to perform according to the designer’s aim. Assuming that the material has been specified correctly and that the workpiece has been made from it, the first thing to be done is to measure the dimensions. These will have been specified on the drawing to a tolerance. Under this heading is included the measurement of length, position, radius, and so on. So, dimensional metrology is the first aim because it ensures that the size of the workpiece conforms to the designer’s wish. This in turn ensures that the workpiece will assemble into an engine, gearbox, gyroscope or whatever; the static characteristics have therefore been satisfied.
This by itself is not sufficient to ensure that the workpiece will satisfy its function; it may not be able to turn or move, for example. This is where surface metrology becomes important. Surface metrology ensures that all aspects of the surface geometry are known and preferably controlled. If the shape and texture of the workpiece are correct then it will be able to move at the speeds, loads, and temperatures specified in the design; the dynamic characteristics have therefore been satisfied. The final group of measurements concerns the physical and chemical condition of the workpiece. This will be called physical metrology here. It includes the hardness of the materials, both in the bulk and in the surface layers, and the residual stress of the surface, whether in compression or in tension, left in the material by the machining process or the heat treatment. It also includes measurement of the metallurgical structure of the material, and its chemical construction. All these and more contribute to the durability of the component, for example, its resistance to corrosion or fatigue. Physical metrology therefore is the third major sector of engineering metrology: the long-term characteristics. As a general rule, all three types of measurement must take place in order to ensure that the workpiece will do its assigned job for the time specified; to guarantee its quality. This book, as mentioned earlier, is concerned specifically with surfaces but this does not necessarily exclude the other two. In fact it is impossible to divorce any of these disciplines completely from the others. After all, there is only one component and these measurements are all taken on it. Physical and chemical features of importance will be covered in some detail as and when they are necessary in the text.
1.3 BACKGROUND TO SURFACE METROLOGY

Figure 1.1 shows where the block representing engineering surface metrology can be placed relative to blocks representing manufacture and function. In the block marked “manufacture” is included all aspects of the manufacturing process, such as machine performance, tool wear, and chatter, whereas in the block marked “function” is included all functional properties of the surfaces of components, such as the tribological regimes of friction, wear, and lubrication and other uses. The measurement block includes the measurement of roughness, roundness, and all other aspects of surface metrology. The figure shows that the texture and geometry of a workpiece can be important in two quite different applications:
one is concerned with controlling the manufacture (this is examined in detail in Chapter 6) and the other is concerned with the way in which the surface can influence how well a workpiece will function.

FIGURE 1.1 Relationship of surface metrology.

Many of these uses fall under the title of tribology—the science of rubbing parts—but others include the effect of light on surfaces and also static contact. In fact the two blocks “manufacture” and “function” are not completely independent of each other, as illustrated by the line in the figure joining the two. Historically the correct function of the workpiece was guaranteed by controlling the manufacture. In practice, what happened was that a workpiece was made and tried out. If it functioned satisfactorily the same manufacturing conditions were used to make the next workpiece and so on for all subsequent workpieces. It soon became apparent that the control of the surface was being used as an effective go-gauge for the process and hence the function. Obviously what is required is a much more flexible and less remote way of guaranteeing functional performance; it should be possible, by measuring parameters of the surface geometry itself, to help to predict the function. The conventional way of ensuring performance is very much a balancing act and unfortunately the delicate equilibrium can be broken by changing the production process or even the measurement parameters. This may seem an obvious statement but within it lies one of the main causes for everyday problems in engineering. It will soon be made clear that deviations in surface geometry cannot simply be regarded as an irritant to be added to the general size measurement of a component. The smallness of geometric deviations does not imply smallness of importance. It will be shown that the surface geometry is absolutely critical in many applications and that it contains information that can be invaluable to a manufacturer if extracted correctly from the mass of data making up the surface. In almost every example of its use the criticism can be raised that the nature of the surface is not well understood and, even where it is, it is not properly specified especially on drawings. The importance of surface geometry should be recognized not just by the quality control engineer or inspector but also, more importantly, by the designer. It is he or she who should understand the influence that the surface has on behavior and
specify it accordingly. It is astonishing just how ill-informed most designers are where surface metrology is concerned. Many of them in fact consider that a “good” surface is necessarily a smooth one, the smoother the better. This is not only untrue; in many cases it can be disastrous.
1.4 NANOMETROLOGY

Since the first edition, the subject has developed in a number of ways. One is due to miniaturization which has brought out new situations in production techniques, inspection, and ways in which surfaces are employed as well as bringing in many new materials with their different properties. Miniaturization is much more than making things small. It is not just the size that has changed; the properties of materials change as well as the rules concerning static and dynamic behavior. At the nano and atomic scale the new generation of scanning microscopes have emerged and methods of investigating behavior on the atomic scale, such as molecular dynamics and mobile cellular automata are now being used. In fact the “scale of size” has a profound effect and has upset many preconceived beliefs in engineering and physics to such an extent that it has altered the content of every chapter in the book from that of the first edition. Another profound development has been due to the increasing use of computing in trying to understand and predict performance. Theoretical problems can be more readily solved and computer simulations to test theory as well as to examine methods of production are being used as a matter of course. These developments have meant that optimum shapes for performance can be explicitly worked out. This has resulted in the emergence of “free form” geometries and “structured surfaces,” which despite bringing tremendous gains in performance, in weight, amount of material, and size have also brought extra pressure on measurement and standardization, not to mention production processes. These new areas in technology do not mean that traditional metrology has stood still: there have been improvements over the whole spectrum of metrological activity, as will be seen.
1.5 BOOK STRUCTURE

This new edition will report on progress in the traditional but vital subject areas as well as the new developments. A few comments will be made on each of the chapters to indicate some of the additions and other modifications that have been made. The format of the chapters has been revised to reflect a change in emphasis on the various subjects. In addition, there has been a limited amount of repetition in order to make the individual chapters to some extent self-contained. Chapter 2 starts by considering some of the simpler parameters and reference lines including splines and phase-corrected filters from the original concept to the now universal Gaussian version. The convolution filters are followed by morphological filters stemming from the old “E” system. Statistical parameters and functions follow together with their uses in manufacturing and performance (function).
A key approach looks at discrete random processes because this is how parameters are actually evaluated by instruments. Some aspects of areal (sometimes but wrongly termed 3D) parameter assessment are covered as well as some other functions including Wigner and wavelet to pull in space frequency effects. Fractal analysis and Weierstrass series are described to explain some aspects of scale of size problems. Chaotic properties of some surface signals are explored using stochastic resonance. Geometric form characterization is represented by roundness, cylindricity, etc. Finally, free form and aspherics are discussed. Chapter 3 discusses digital implementation of some of the concepts covered in Chapter 2 with emphasis on sampling patterns in areal random process analysis including areal Fourier analysis. The theoretical background to the digital evaluation of functional surface parameters is given. Computational geometry issues are discussed with particular reference to cylindricity and conicity. Some aspects of surface behavior at the nanometer level are considered using molecular dynamics and mobile cellular automata. Chapter 4 examines first the basic properties of stylus instruments with emphasis on dynamic measurement of different forms of surface revealing different criteria than for the traditional but rare sine-wave. The generation and philosophy of references are considered and then it is shown how the new scanning probe microscopes (SPM) resemble and differ from conventional stylus instruments. The modes of operation are explained and some time is spent on looking at the variants. The vexed problem of stylus–surface and cantilever–surface interaction is discussed from a number of viewpoints. One of the biggest sections in this chapter is devoted to optical methods ranging from optical probes mimicking the mechanical stylus to whole field assessment via confocal microscopy, moiré methods, interferometry, holography, speckle, and diffraction, ending the optical section with flaw detection. Thin film measurement is dealt with at some length in the subsection on white light interferometry. This is followed by some other general methods such as capacitance and ultrasonic techniques. Electron microscopy, photon microscopy, and Raman spectroscopy and their relationship to the various other techniques are discussed, but some fundamental problems associated with presenting the comparative performance of instruments are revealed. To complete the chapter, some fundamental design issues are presented, more as a checklist for aspiring instrument designers. Because of their fundamental relevance, the nature of errors, error propagation, and traceability are considered early on in Chapter 5. These are followed by statistical testing and the design of test experiments. Uncertainty in instruments is explored with reference to some instruments. A lot of emphasis is placed in this chapter on the GPS (Geometric product specification) system which is being advocated throughout the world. The chain of standards, the role of the technical specification documents, and a selected list of standards are contained in the chapter together with some comments on the new concept of categorization. As in Chapter 4 the problem of surface–tip interaction and its meaning has to be included
in this chapter because of its central position in the traceability of SPM instruments and in the validity of the term “Fully traceable metrology instrument.” In Chapter 6 the basic processes are discussed with respect to the surfaces produced. Some aspects of the effect of the tool and workpiece materials, tool wear, chip formation, and lubrication on the surfaces are considered. Dry cutting in milling is singled out for discussion. Normal abrasive methods such as grinding, honing, and polishing are discussed as well as some issues in unconventional machining like ultrasonic machining, magnetic float polishing, ECM, EDM, and forming. A large section is devoted to the effects of the scale of size on the machining process, covering such subjects as nano-cutting, nano-milling, and nano-finishing by grinding. Physical effects and transitions from brittle to ductile machining at the nanoscale are considered in detail. This continues with micro-forming and atomic-scale machining with electron beams and ion beams, focused and collimated, and finishes up with some detail on molecular beam epitaxy, etc. Another major section is devoted to the generation of structured surfaces starting with some old examples and then progressing through to the most modern techniques such as diamond micro chiseling, and the properties and geometry of surfaces generated in this way as well as the problems in measuring them. Micro fabrication of arrays is covered along with mother lens problems and there is a large section on the use of laser and other energy beam ways of structuring surfaces, mentioning various surface problems that can result from using these techniques. Self-assembly is discussed as well as bio assembly. In the section on the manufacture of free-form surfaces, optical free-form complex surfaces are considered first followed by methods of achievement using ball-end milling. Polishing methods are also given with examples. Mathematical processing is explained with some emphasis on multi-scale dynamics and molecular dynamics touching on finite element analysis. Surface integrity is not forgotten and some discussion on the residual stress implications of the processes and how to measure them is included. The section concludes with a breakdown of how the surface geometry can be used as a fingerprint of manufacture, bringing in some aspects of chaos theory and stochastic resonance. Chapter 7 is the biggest chapter covering recent developments in the functional importance of surfaces starting with static two-body interactions. There have been considerable developments by Persson in the deformation and contact mechanics of very elastic bodies using the unlikely mathematics of diffusion which are extensively covered, together with a comparison with existing theories mainly due to Greenwood. Alternative ways of presenting the results are considered. Scale of size issues are prominent in these discussions bringing in fractal considerations and the use of Weierstrass characterization linking back to Archard. How these new issues relate to the functional contact problems of stiffness, creep, adhesion, electrical, and thermal conductivity are discussed
at length. Plastic and dynamic effects, including wet and dry friction, are discussed, bringing in the atomistic work of Popov using mobile cellular automata and how they differ from the macro scale. Wear and wear prediction models are investigated. The influence of roughness and waviness on traditional lubrication regimes is compared with that of structured surfaces. Also considered is the influence of roughness on micro flow. Some examples are given. Other dynamic effects in the text are vibration, squeeze film generation, fretting, fretting fatigue, and various aspects of corrosion. Single-body effects, particularly the influence of surfaces and their structure on the scatter of light, are reviewed, including the various elastic and inelastic scattering modes, e.g., Rayleigh and Raman. Aspects of assembly, tolerancing, and the movement of very small micro parts are considered, revealing some interesting scale of size effects including a Langevin engineering equivalent of Brownian movement in molecules. Nanotechnology and nanometrology form the main subjects in Chapter 8. This starts with an evaluation of the effect of the scale of size on surface metrology showing how roughness values change disproportionately with size and how shapes and even definitions have to be reconsidered as sizes shrink. How the balance of forces changes with scale and how this results in completely different physical and material properties of nanotubes, nanowires, and nanoparticles when compared with their micro and macro equivalents. Considerations of scale on behavior are integrated by bringing together the results of previous chapters. Issues with the
development of SPMs and how they relate to other instruments are investigated in depth, showing some concern with aspects of surface interaction and traceability. Often there are situations where there is a mixture of scales of size in which nanoscale features are superimposed on macroscopic objects. These situations pose some special problems in instrumentation. Cases involving nanostructure and roughness on large free-form bodies are particularly demanding. Some examples are given of telescopes, prosthetic knee joints, and energy conversion. Where appropriate, throughout the chapter some discussion of biological implications is given. Quantum effects are included in the atomistic sections. The book’s conclusions are summarized in Chapter 9 together with questions and answers to some of the issues raised throughout the book. Finally there is a glossary which gives the meaning of many key words and expressions used throughout the book as understood by the author. Copious references are provided throughout the book to enable the reader to pursue the subject further. Throughout the book it has been the intention to preserve the thread from the historical background of the subjects through to recent developments. This approach is considered to be absolutely essential to bring some sort of balance between the use of computing and, in particular, the use of simulations and packages and the fundamental “feel” for the subject. Simulation loses the independent input given by practical experiment: the historical thread helps to put some of it back as well as adding to the interest.
2 Characterization

2.1 THE NATURE OF SURFACES

Surface characterization, the nature of surfaces and the measurement of surfaces cannot be separated from each other. A deeper understanding of the surface geometry produced by better instrumentation often produces a new approach to characterization. Surface characterization is taken to mean the breakdown of the surface geometry into basic components based usually on some functional requirement. These components can have various shapes, scales of size, distribution in space, and can be constrained by a multiplicity of boundaries in height and position. Issues like the establishment of reference lines can be viewed from their ability to separate geometrical features or merely as a statement of the limit of instrument capability. Often one consideration determines the other! Ease of measurement can influence the perceived importance of a parameter or feature. It is difficult to ascribe meaningful significance to something which has not been or cannot be measured. One dilemma is always whether a feature of the surface is fundamental or simply a number which has been ascribed to the surface by an instrument. This is an uncertainty which runs right through surface metrology and is becoming even more obvious now that atomic scales of size are being explored. Surface, interface, and nanometrology are merging. For this reason what follows necessarily reflects the somewhat disjointed jumps in understanding brought on by improvements in measurement techniques. There are no correct answers. There is only a progressively better understanding of surfaces brought about usually by an improvement in measurement technique. This extra understanding enables more knowledge to be built up about how surfaces are produced and how they perform. This chapter therefore is concerned with the nature of the geometric features, the signal which results from the measuring instrument, the characterization of this signal, and its assessment. The nature of the signal obtained from the surface by an instrument is also considered in this chapter. How the measured signal differs from the properties of the surface itself will be investigated. Details of the methods used to assess the signal will also be considered but not the actual data processing. This is examined in Chapter 3. There is, however, a certain amount of overlap which is inevitable. Also, there is some attention paid to the theory behind the instrument used to measure the surface. This provides the link with Chapter 4. What is set down here follows what actually happened in practice. This approach has merit because it underlies the problems which first came to the attention of engineers in the early 1940s and 1950s. That it was subsequently modified to reflect more sophisticated requirements does not make it wrong; it simply allows a more complete picture to be drawn up. It also shows how characterization and measurement are inextricably
entwined, as are surface metrology and nanometrology, as seen in all chapters but especially in Chapter 8. It is tempting to start a description of practical surfaces by expressing all of the surface geometry in terms of departures from the desired three-dimensional shape, for example departures from an ideal cylinder or sphere. However, this presents problems, so rather than do this it is more convenient and simpler to start off by describing some of the types of geometric deviation which do occur. It is then appropriate to show how these deviations are assessed relative to some of the elemental shapes found in engineering such as the line or circle. Then, from these basic units, a complete picture can be subsequently built up. This approach has two advantages. First, some of the analytical tools required will be explained in a simple context and, second, this train of events actually happened historically in engineering. Three widely recognized causes of deviation can be identified:
1. The irregularities known as roughness that often result from the manufacturing process. Examples are (a) the tool mark left on the surface as a result of turning and (b) the impression left by grinding or polishing. Machining at the nanoscale still has process marks.
2. Irregularities, called waviness, of a longer wavelength caused by improper manufacture. An example of this might be the effects caused by a vibration between the workpiece and a grinding wheel. It should be noted here that by far the largest influence of waviness is in the roundness of the workpiece rather than in the deviations along the axis; consequently, what is measured as waviness along the axis is in fact only a component of the radial waviness which is revealed by the roundness profile.
3. Very long waves referred to as errors of form caused by errors in slideways, in rotating members of the machine, or in thermal distortion.
Often the first two are lumped together under the general expression of surface texture, and some definitions incorporate all three! Some surfaces have one, two or all of these irregularities [1]. Figure 2.1a shows roughness and waviness superimposed on the nominal shape of a surface. A question often asked is whether these three geometrical features should be assessed together or separately. This is a complicated question with a complicated answer. One thing is clear; it is not just a question of geometry. The manufacturing factors which result in waviness, for instance, are different from those that produce roughness or form error.
FIGURE 2.1 (a) Geometric deviations from intended shape, (b) energy wavelength spectrum, and (c) deformation mode wavelength spectrum.
The effect of these factors is not restricted to producing an identifiable geometrical feature; it is much more subtle: it affects the subsurface layers of the material. Furthermore, the physical properties induced by chatter, for example, are different from those which produce roughness. The temperatures and stresses introduced by general machining are different from those generated by chatter. The geometrical size of the deviation is obviously not proportional to its effect underneath the surface but it is at least some measure of it. On top of this is the effect of the feature of geometry on function in its own right. It will be clear from Chapter 7 on function how it is possible that a long-wavelength component on the surface can affect performance differently from that of a shorter wavelength of the same amplitude. There are, of course, many examples where the total geometry is important in the function of the
workpiece and under these circumstances it is nonsense to separate out all the geometrical constituents. The same is true from the manufacturing signature point of view. The breakdown of the geometry into these three components represents probably the first attempt at surface characterization. From what has been said it might be thought that the concept of “sampling length” is confined to roughness measurement in the presence of waviness. Historically this is so. Recent thoughts have suggested that in order to rationalize the measurement procedure the same “sampling length” procedure can be adapted to measure “waviness” in the presence of form error, and so on to include the whole of the primary profile. Hence lr, the sampling length for roughness, is joined by lw. For simplicity roughness is usually the surface feature considered in the text. It is most convenient to describe the
nature and assessment of surface roughness with the assumption that no other type of deviation is present. Then waviness will be brought into the picture and finally errors of form. From a formal point of view it would be advantageous to include them all at the same time but this implies that they are all able to be measured at the same time, which is only possible in some isolated cases.
2.2 SURFACE GEOMETRY ASSESSMENT AND PARAMETERS

2.2.1 General—Roughness Review

Surface roughness is that part of the irregularities on a surface left after manufacture which are held to be inherent in the material removal process itself, as opposed to waviness which may be due to the poor performance of an individual machine. BS 1134 (1973) mentions this in passing. In general, the roughness includes the tool traverse feed marks such as are found in turning and grinding and the irregularities within them produced by microfracture, built-up edge on the tool, etc. The word “lay” is used to describe the direction of the predominant surface pattern. In practice it is considered to be most economical in effort to measure across the lay rather than along it, although there are exceptions to this rule, particularly in frictional problems or sealing (see Chapter 7). Surface roughness is generally examined in plan (i.e., areal) view with the aid of optical and electron microscopes, in cross-sections normal to the surface with stylus instruments and, in oblique cross-sections, by optical interference methods. These will be discussed separately in Sections 4.2.7, 4.3, and 4.4.5. First it is useful to discuss the scales of size involved and to dispel some common misconceptions. Surface roughness covers a wide dimensional range, extending from that produced in the largest planing machines having a traverse step of 20 mm or so, down to the finest lapping where the scratch marks may be spaced by a few tenths of a micrometer. These scales of size refer to conventional processes. They have to be extended even lower with non-conventional and energy beam machining where the machining element can be as small as an ion or electron, in which case the scale goes down to the atomic in height and spacing. The peak-to-valley height of surface roughness is usually found to be small compared with the spacing of the crests; it runs from about 50 µm down to less than a few thousandths of a micrometer for molecular removal processes. The relative proportions of height and length lead to the use of compressed profile graphs, the nature of which must be understood from the outset. As an example, Figure 2.2 shows a very short length of the profile of a cross-section of a ground surface, magnified 5000×.

FIGURE 2.2 Visual distortion caused by making usable chart length.

The representation of the surface by means of a profile graph will be used extensively in this book because it is a very convenient way to portray many of the geometrical features of the surface. Also it is practical in size and reflects the conventional way of representing surfaces in the past. That it does not show the “areal” characteristics of the surface is understood.
The mapping methods described later will go into this other aspect of surface characterization. However, it is vital to understand what is in effect a shorthand way of showing up the surface features. Even this method has proved to be misleading in some ways, as will be seen. The length of the section in Figure 2.2 from A to D embraces only 0.1 mm of the surface, and this is not enough to be representative. To cover a sufficient length of surface profile without unduly increasing the length of the chart, it is customary to use a much lower horizontal than vertical magnification. The result may then look like Figure 2.2b. All the information contained in the length AD is now compressed into the portion A′D′, with the advantage that much more information can be contained in the length of the chart, but with the attendant disadvantage that the slopes of the flanks are enormously exaggerated, in the ratio of the vertical to horizontal magnifications. Thus it is essential, when looking at a profile graph, to note both magnifications and to remember that what may appear to be fragile peaks and narrow valleys may represent quite gentle undulations on the actual surface. Compression ratios up to 100:1 are often used. Many models of surfaces used in tribology have been misused simply because of this elementary misunderstanding of the true dimensions of the surface. Examination of an uncompressed cross-section immediately highlights the error in the philosophy of “knocking off of the peaks during wear”!
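The degree of slope exaggeration is easy to quantify: the tangent of the apparent slope is the true tangent multiplied by the ratio of vertical to horizontal magnification. The following is a minimal illustrative sketch, not part of the original text, with hypothetical values:

```python
import math

def apparent_slope_deg(true_slope_deg, v_mag, h_mag):
    """Flank slope as it appears on a chart drawn with unequal
    vertical (v_mag) and horizontal (h_mag) magnifications."""
    t = math.tan(math.radians(true_slope_deg))
    return math.degrees(math.atan(t * v_mag / h_mag))

# A gentle 1 degree flank plotted at 15,000x vertical and 150x horizontal
# (a 100:1 compression ratio) appears as a slope of about 60 degrees.
print(f"{apparent_slope_deg(1.0, 15_000, 150):.1f} deg")
```

A 1° undulation drawn with 100:1 compression thus appears as a 60° precipice, which is exactly the misreading warned against above.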
The photomicrographs and cross-sections of some typical surfaces can be examined in Figure 2.3. The photomicrographs (plan or so-called areal views) give an excellent idea of the lay and often of the distance (or spacing) between successive crests, but they give no idea of the dimensions of the irregularities measured normal to the surface. The profile graph shown beneath each of the photomicrographs is an end view of approximately the same part of the surface, equally magnified horizontally, but more highly magnified vertically. The amount of distortion is indicated by the ratio of the two values given for the magnification, for example 15,000 × 150, of which the first is the vertical and the second the horizontal magnification.

FIGURE 2.3 Photomicrographs showing plan view, and graphs showing cross-section (with exaggerated scale of height), of typical machined surfaces—the classic Reason picture. (Panel labels: shaped ×3, 800 µin; ground ×150, 20 µin; diamond turned ×150, 15 µin; lapped ×150, 2 µin.)
This figure is a classic, having been compiled by Reason [2] in the 1940s, and is still instructive today! In principle, at least two cross-sections at right angles are needed to establish the topography of the surface, and it has been shown that five sections in arbitrary directions should be used in practice; when the irregularities to be portrayed are seen to have a marked sense of direction and are sufficiently uniform, a single cross-section approximately at right angles to their length will often suffice. Each cross-section must be long enough to provide a representative sample of the roughness to be measured; the degree of uniformity, if in doubt, should be checked by taking a sufficient number of cross-sections distributed over the surface. When the directions of the constituent patterns are inclined to each other, the presence of each is generally obvious from the appearance of the surface, but when a profile graph alone is available, it may be necessary to know something of the process used before
being able to decide whether or not the profile shows waviness. This dilemma will be examined in the next section. Examination of the typical waveforms in Figure 2.3 shows that there is a very wide range of amplitudes and crest spacings found in machining processes, up to about five orders of magnitude in height and three in spacing. Furthermore, the geometric nature of the surfaces is different. This means, for example, that in the cross-sections shown many different shapes of profile are encountered, some engrailed in nature and some invected and yet others more or less random. Basically the nature of the signals that have to be dealt with in surface roughness is more complex than those obtained from practically any sort of physical phenomena. This is not only due to the complex nature of some of the processes and their effect on the surface skin, but also due to the fact that the final geometrical nature of the surface is often a culmination of more than one process, and that the characteristics of any
process are not necessarily eradicated completely by the following one. This is certainly true from the point of view of the thermal history of the surface skin, as will be discussed later. It is because of these and other complexities that many of the methods of surface measurement are to some extent complementary. Some methods of assessment are more suitable for describing surface behavior than others. Ideally, to get a complete picture of the surface, many techniques need to be used; no single method can be expected to give the whole story. This will be seen in Chapter 4 on instrumentation. The problem instrumentally is therefore the extent of the compromise between the specific fidelity of the technique on the one hand and its usefulness in as many applications as possible on the other. The same is true of surface characterization: for many purposes it is not necessary to specify the whole surface but just a part of it. In what follows the general problem will be stated. This will be followed by a breakdown of the constituent assessment issues. However, it must never be forgotten that the surface is three dimensional and, in most functional applications, it is the properties of the three-dimensional gap between two surfaces which are of importance. Any rigorous method of assessing surface geometry should be capable of being extended to cover this complex situation. An attempt has been made in Chapter 7. The three-dimensional surface z = f(x, y) has properties of height and length in two dimensions. To avoid confusion between what is a three-dimensional or a two-dimensional surface, the term “areal” is used to indicate the whole surface. This is because the terms 2D and 3D have both been used in the literature to mean the complete surface. Some explanation is needed here of the use of the term “areal.” This is used because the area being examined is an independent factor: it is up to the investigator what the area to be looked at actually is; the values of the dimensions are by choice. On the other hand, the values of the feature to be investigated over the area are not known prior to the measurement. The term “areal” therefore should be applied to all measurement situations where this relationship holds irrespective of the feature to be measured, whether it is height or conductivity or whatever! The term 3D should be used when all three axes are independently allocated as in a conventional coordinate measuring machine. There are a number of ways of tackling the problem of characterization; which is used is dependent on the type of surface, whether or not form error is present and so on. The overall picture of characterization will be built up historically as it occurred. As usual, assessment was dominated by the available means of measuring the surface in the first place. So because the stylus method of measurement has proved to be the most useful owing to its convenient output, ease of use and robustness, and because the stylus instrument usually measures one sample of the whole surface, the evaluation of a single cross-section (or profile) will be considered first. In many cases this single-profile evaluation is sufficient to give an adequate idea of the surface; in some cases it is
not. Whether or not the profile is a sufficient representation is irrelevant; it is the cornerstone upon which surface metrology has been built. In subsequent sections of this chapter the examination of the surface will be extended to cover the whole geometry. In the next section it will be assumed that the cross-section does not suffer from any distortions which may be introduced by the instrument, such as the finite stylus tip or limited resolution of the optical device. Such problems will be examined in detail in Chapter 4, Sections 4.2.2 and 4.4.1.1. Problems of the length of profile and the reliability of the parameters will be deferred until Chapter 5.

2.2.1.1 Profile Parameters (ISO 25178 Part 2 and ISO 4287)

The profile graph shown in Figure 2.4 and represented by z = f(x) could have been obtained by a number of different methods but it is basically a waveform which could appear on any chart expressing voltage, temperature, flow, or whatever. It could therefore be argued that it should be representable in the same sort of way as for these physical quantities. To some extent this is true but there is a limit to how far the analogy can be taken. The simplest way of looking at this is to regard the waveform as being made up of amplitude (height) features and wavelength (spacing) features, both independent of each other. The next step is to specify a minimum set of numbers to characterize both types of dimension adequately. There is a definite need to constrain the number of parameters to be specified even if it means dropping a certain amount of information, because in practice these numbers will have to be communicated from the designer to the production engineer using the technical drawing or its equivalent. More than two numbers often cause problems of comprehension. Too many numbers in a specification can result in all being left off, which leaves a worse situation than if only one had been used. Of the height information and the spacing information it has been conventional to regard the height information as the more important simply because it seems to relate more readily to functional importance. For this reason most early surface finish parameters relate only to the height information in the profile and not to the spacing. The historical evolution of the parameters will be described in the introduction to Chapter 4. The definitive books prior to 1970 are contained in References [1–7]. However, there has been a general tendency to approach the problem of amplitude characterization in two ways, one attempting in a crude way to characterize functions by measuring peaks, and the other to control the process by measuring average values.
FIGURE 2.4 Typical profile graph.
The argument for using peaks seems sensible and useful because it is quite easy to relate peak-to-valley measurements of a profile to variations in the straightness of interferometer fringes, so there was, in essence, the possibility of a traceable link between contact methods and optical ones. This is the approach used by the USSR and Germany in the 1940s and until recently. The UK and USA, on the other hand, realized at the outset that the measurement of peak parameters is more difficult than measuring averages, and concentrated therefore on the latter which, because of their statistical stability, became more suitable for quality control of the manufacturing process. Peak measurements are essentially divergent rather than convergent in stability; the bigger the length of profile or length of assessment the larger the value becomes. This is not true for averages; the sampled average tends to converge on the true value the larger the number of values taken. The formal approach to statistical reliability will be left to Chapter 5. However, in order to bring out the nature of the characterization used in the past it is necessary to point out some of the standard terms governing the actual length of profile used. The basic unit is the sampling length. It is not called the “sample length” because this is a general term whereas sampling length has a specific meaning which is “the length of assessment over which the surface roughness can be considered to be representative.” Obviously the application of such a definition is fraught with difficulty because it depends on the parameter and the degree of confidence required. For the purpose of this subsection the length will be assumed to be adequate—whatever that means. The value of the sampling length is a compromise. On the one hand, it should be long enough to get a statistically good representation of the surface roughness. On the other, if it is made too big, longer components of the geometry, such as waviness, will be drawn in if present and included as roughness. The concept of sampling length therefore has two jobs, not one. For this reason its use has often been misunderstood. It has consequently been drawn inextricably into many arguments on reference lines, filtering, and reliability. It is brought in here to reflect its use in defining parameters. Sometimes the instrument takes more than one sampling length in its assessment and sometimes some of the total length traversed by the instrument is not assessed for mechanical or filtering reasons, as will be described in Chapter 4. However, the usual sampling length of value 0.03 in (0.8 mm) was chosen empirically in the early 1940s in the UK by Rank Taylor Hobson from the examination of hundreds of typical surfaces [2]. Also, an evaluation length of nominally five sampling lengths was chosen by the same empirical method. In those days the term “sampling length” did not exist; it was referred to as the meter cut-off length. The reason for this was that it also referred to the cut-off of the filter used to smooth the meter reading. Some typical sampling lengths for different types of surface are given in Tables 2.1 through 2.4. In Tables 2.2 through 2.4 some notation will be used which is explained fully later. See Glossary for details. The usual spatial situation is shown in Figure 2.5.
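The divergence of peak measures and the convergence of averages can be illustrated numerically. The sketch below is an illustrative addition, not from the original text: it generates a correlated random profile with NumPy and evaluates an average parameter (the mean absolute deviation, an Ra-like value) and a peak parameter (the peak-to-valley height) over increasingly long assessment lengths.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_profile(n, kernel=25):
    """Correlated random profile: white noise smoothed by a moving average."""
    w = rng.standard_normal(n + kernel - 1)
    z = np.convolve(w, np.ones(kernel) / kernel, mode="valid")  # length n
    return z - z.mean()

for n in (1_000, 10_000, 100_000):         # increasing assessment length
    z = random_profile(n)
    Ra = np.mean(np.abs(z))                # average parameter: converges
    Rt = z.max() - z.min()                 # peak parameter: keeps growing
    print(f"n = {n:>6}:  Ra = {Ra:.3f}   Rt = {Rt:.3f}")
```

Typical runs show Ra settling to an almost constant value while Rt creeps upward as more profile is included, exactly the behavior described above.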
TABLE 2.1
Sampling Lengths for Ra, Rz, and Ry of Periodic Profiles

Sm over (mm)   Sm up to (inclusive) (mm)   Sampling length (mm)   Evaluation length (mm)
(0.013)        0.04                        0.08                   0.4
0.04           0.13                        0.25                   1.25
0.13           0.4                         0.8                    4.0
0.4            1.3                         2.5                    12.5
1.3            4.0                         8.0                    40.0
TABLE 2.2
Roughness Sampling Lengths for the Measurement of Ra, Rq, Rsk, Rku, RΔq and Curves and Related Parameters for Non-Periodic Profiles (For Example Ground Profiles)

Ra (µm)                 Roughness sampling length lr (mm)
(0.006) < Ra ≤ 0.02     0.08
0.02 < Ra ≤ 0.1         0.25
0.1 < Ra ≤ 2            0.8
2 < Ra ≤ 10             2.5
10 < Ra ≤ 80            8.0
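Applied programmatically, the selection reads as below. This is a hypothetical helper, with the band limits transcribed from Tables 2.1 and 2.2 (the lower rows of Table 2.2 were reconstructed from the standard values) and the evaluation length taken as five sampling lengths, as noted earlier in the text.

```python
def sampling_length_periodic(Sm_mm):
    """Sampling and evaluation lengths (mm) from the mean spacing Sm (Table 2.1)."""
    for upper, lr in [(0.04, 0.08), (0.13, 0.25), (0.4, 0.8), (1.3, 2.5), (4.0, 8.0)]:
        if Sm_mm <= upper:
            return lr, 5 * lr
    raise ValueError("Sm outside tabulated range")

def sampling_length_nonperiodic(Ra_um):
    """Sampling and evaluation lengths (mm) from an Ra estimate (Table 2.2)."""
    for upper, lr in [(0.02, 0.08), (0.1, 0.25), (2.0, 0.8), (10.0, 2.5), (80.0, 8.0)]:
        if Ra_um <= upper:
            return lr, 5 * lr
    raise ValueError("Ra outside tabulated range")

print(sampling_length_nonperiodic(0.4))  # ground surface, Ra 0.4 µm -> (0.8, 4.0)
print(sampling_length_periodic(0.3))     # turned surface, Sm 0.3 mm -> (0.8, 4.0)
```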
Then the graph no longer takes on the shape of either a limaçon or a circle. However, this situation is very rare and only occurs when the magnification is small and the radius of the part is small, which automatically makes L small and the apparent shape very different. Instead of the bulge at the center at right angles to the origin there is a reduction! Angular considerations are similar. In normal circumstances for eccentric parts the angular relationships of the component are only valid when measured through the center of the chart; it is only in special cases, where there is only a small amount of zero suppression, that consideration should be given to measurement through a point in the region of the center of the part. This is shown in Figure 2.159. It is possible to refine the limaçon obtained when the workpiece is eccentric simply by adding further terms to Equation 2.386. The displaced graph can look more and more circular despite being eccentric. However, this does not mean that angular relationships (in the eccentric “circular” trace) are corrected. All angle measurements still have to go through the center of the chart. Centering the corrected graph by removing the eccentricity term is the only way that the center for roundness is at the circle center—it is also at the center of rotation so that there is no problem. Radial variation can be measured from the center of the profile itself rather than the center of the chart, but the measurements are subject to the proviso that the decentering is small.
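For context, the limaçon referred to here is the first-order approximation ρ(θ) ≈ R + e·cos θ to the radius measured from the center of rotation when a perfect circle of radius R is displaced by an eccentricity e (Equation 2.386 itself is not reproduced in this excerpt). The following minimal sketch, with illustrative chart-scale values not taken from the original, shows how small the departure of the true trace from the limaçon remains while e is a small fraction of R:

```python
import numpy as np

def exact_radius(theta, R, e):
    """Distance from the rotation center to a circle of radius R whose
    center is displaced by e (phase taken as zero for simplicity)."""
    return e * np.cos(theta) + np.sqrt(R**2 - (e * np.sin(theta))**2)

theta = np.linspace(0.0, 2.0 * np.pi, 3601)
R, e = 40.0, 5.0                              # chart-scale values, mm
limacon = R + e * np.cos(theta)               # first-order approximation
err = np.abs(exact_radius(theta, R, e) - limacon)
print(f"max departure = {err.max():.3f} mm, predicted e^2/2R = {e**2 / (2 * R):.3f} mm")
```

The residual grows roughly as e²/2R, which is why the permissible eccentricities quoted below tighten as higher accuracy is demanded.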
R. E. Reason has given maximum permissible eccentricities to allow reasonably accurate measurement of the diametral, radial, and angular relationships, subject to the criterion that the differences are just measurable on the graph. These are listed in the following tables. The errors in Table 2.18 refer to the eccentric errors of a graph of mean radius 40 mm. Table 2.18 is important because it shows the permissible tolerance on centering for different purposes as shown on the chart. For example, with 1.25 mm eccentricity the error is too small to be detected, while with 2.5 mm eccentricity it will only just be measurable. Above this the difference will only matter if it is a large enough proportion of the height of the irregularities to affect the accuracy of their assessment. Eccentricity up to 5 mm can generally be accepted for normal workshop testing, with 7.5 mm as an upper limit for a graph around 75 mm diameter. Table 2.19 shows that the more perfect the workpiece the better it needs to be centered. This requirement is generally satisfied in the normal use of the instrument, for good parts tend naturally to be well centered, while in the case of poorer parts, for which the criterion of good centering is less evident, the slight increase in ovality error can reasonably be deemed of no consequence. Some diametral comparisons with and without eccentricity are also shown in Table 2.19. A practical criterion for when diameters can no longer be compared through the center of the chart can be based on the smallest difference between two diameters of the graph that could usefully be detected. Taking 0.125 mm as the smallest significant change, Table 2.19 shows the lowest permissible magnification for a range of workpiece diameters and eccentricities when the graph has a mean diameter of 3 in (75 mm). Notice in Figure 2.159 that, even when the workpiece has been decentered, the valley still appears to point toward the center of the chart and not to the center of the graph of the component. This illustrates the common angular behavior of all roundness instruments. A criterion for when angles should no longer be measured from the center of rotation can be based on the circumferential resolving power of the graph. Allowing for an error in the centering of the chart itself, this might be in the region of 0.5 mm circumferentially. If lower values of magnification should be required then the definitive formulae should be consulted. This applies to both the diametral measurement and the angular relationships. These tables merely give the nominally accepted bounds.
FIGURE 2.159 Angular relationship through chart center.

TABLE 2.19
Magnifications Allowed

                              Eccentricity of graph
Diameter of workpiece (mm)    2.5 mm    5 mm    7.5 mm
                              (magnification must exceed)
0.25                          400       1600    3600
0.5                           200       800     1800
1.25                          80        320     720
2.5                           40        160     360
5                             –         80      180
12.5                          –         40      72
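Expressed as a hypothetical helper, with the values transcribed directly from Table 2.19 (a dash is treated as no listed restriction), the centering check becomes:

```python
# Minimum permissible magnification, indexed by workpiece diameter (mm)
# and graph eccentricity (mm); None means no restriction is tabulated.
MIN_MAGNIFICATION = {
    0.25: {2.5: 400, 5.0: 1600, 7.5: 3600},
    0.5:  {2.5: 200, 5.0: 800,  7.5: 1800},
    1.25: {2.5: 80,  5.0: 320,  7.5: 720},
    2.5:  {2.5: 40,  5.0: 160,  7.5: 360},
    5.0:  {2.5: None, 5.0: 80,  7.5: 180},
    12.5: {2.5: None, 5.0: 40,  7.5: 72},
}

def diameters_comparable(diameter_mm, eccentricity_mm, magnification):
    """True if diameters may still be compared through the chart center."""
    limit = MIN_MAGNIFICATION[diameter_mm][eccentricity_mm]
    return limit is None or magnification >= limit

print(diameters_comparable(1.25, 5.0, 500))   # True: 500 exceeds the 320 limit
```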
Summarizing the foregoing account of the properties of polar graphs, it will be seen that the following rules can generally be applied:
Plotting:
1. To avoid excessive polar distortion, the trace should generally be kept within a zone of which the radial width is not more than about one-third of its mean radius.
2. The eccentricity should be kept within about 15% of the mean radius for general testing, and within 7% for high precision.
Reading:
1. Points 180° apart on the workpiece are represented by points 180° apart through the center of rotation of the chart.
2. Angular relationships are read from the center of rotation of the chart.
3. Diametral variations are assessed through the center of rotation of the chart.
4. Radial variations are assessed from the center of the profile graph, but are subject to a small error that limits permissible decentring.
5. What appear as valleys on the chart often represent portions of the actual surface that are convex with respect to its center.
Modern measuring instruments are making it less necessary to read or plot graphs directly, but the foregoing comments are intended to provide a suitable background against which all advances can be judged.

2.4.4.5 Roundness Assessment
Clearly, there are many parameters of roundness that might be measured, for example diametral variations, radial variations, frequency per revolution (or undulations per revolution), and rates of change (velocity and acceleration). Most, if not all, could be evaluated with respect both to the whole periphery and to selected frequency bands. Radial variations can be assessed in a number of ways, for example in terms of maximum peak-to-valley, averaged peak-to-valley, and integrated values like RMS and Ra. As far as can be seen, there is no single parameter that could fully describe the profile, still less its functional worth. Each can convey only a certain amount of information about the profile. The requirement is therefore to find out which parameter, or parameters, will be most significant for a given application, remembering that in most cases performance will depend on the configuration of two or more engaging components, and that roundness itself is but one of the many topographic and physical aspects of the whole story of workpiece performance. Assessment is now widely made on a radial basis because the parameters so determined provide information about the quality of roundness regardless of the number of irregularities. It is with variations in radius that the present method is mainly concerned. A basic point is that whatever numerical assessment is made, it will refer to that profile, known as the measured profile, which is revealed by the instrument and is in effect the first step in characterization. The peak-to-valley height of the measured profile, expressed as the difference between the maximum and minimum radii of the profile measured from a chosen center, and often represented by concentric circles having these radii and thus forming a containing zone, is widely used as an assessment of the departure of a workpiece from perfect roundness. This is called the "roundness error", the "out-of-roundness" error, or sometimes DFTC.
2.4.4.5.1 Least Squares and Zonal Methods: General
The center can be determined in at least four different ways which lead to slightly different positions of the center and slightly different radial zone widths in the general irregular case, but converge to a single center and radial zone width when the undulations are repetitive. All four have their limitations and sources of error. These four ways of numerical assessment are referred to and described as follows (see Figure 2.160):
1. Ring gauge center (RGC) and ring gauge zone (RGZ): alternatively, the minimum circumscribed circle center MCCI and RONt (MCCI) in ISO. If the graph represents a shaft, one logical approach is to imagine the shaft to be surrounded with the smallest possible ring gauge that would just "go" without interference. This would be represented on the chart by the smallest possible circumscribing circle, from which circle the maximum inward departure (equal
FIGURE 2.160 Methods of assessing roundness: Rmax − Rmin = 0.88 mm (a); 0.76 mm (b); 0.72 mm (c); 0.75 mm (d).
to the difference between the largest and smallest radii) can be measured. As mentioned in the introductory section this is a functionally questionable argument.

2. Plug gauge center (PGC) and plug gauge zone (PGZ): alternatively, the maximum inscribed circle center MICI and RONt (MICI) in ISO. If the graph represents a hole, the procedure is reversed, and the circle first drawn is the largest possible inscribed circle, representing the largest plug gauge that will just go. From this is measured the maximum outward departure, which can be denoted on the graph by a circumscribing circle concentric with the first.

3. Minimum zone center (MZC) and minimum zone (MZ): alternatively, MZCI and RONt (MZCI) in ISO. Another approach is to find a center from which can be drawn two concentric circles that will enclose the graph and have a minimum radial separation.

4. Least-squares center (LSC) and least-squares zone (LSZ): alternatively, LSCI and RONt (LSCI) in ISO. In this approach, the center is that of the least-squares circle. Parameters measured from the least-squares best-fit reference circle have the ISO designations RONt, RONp, RONv, and RONq, the first qualified by (LSCI). Note that these four parameters correspond to the STR parameters in straightness, being respectively the maximum peak-to-valley deviation from the reference line (circle), the maximum peak, the maximum valley, and the root mean square value.

The obvious difference between the ring methods and the least-squares circle is that whereas in the former the highest peaks and/or valleys are used to locate the center, in the least-squares circle all the radial measurements taken from the center of the chart are used. Another point is that the center of the least-squares circle is unique. This is not so for the maximum inscribed circle nor for the minimum zone. It can be shown, however, that the minimum circumscribed center is unique, thereby joining the least-squares center as the most definitive. Methods of finding these centers will be discussed in Chapter 3. Figure 2.161 shows the reliability of these methods. Of these four methods of assessment the least-squares method is the easiest to determine by computer but the most difficult to obtain graphically. Summarizing, the only way to get stable results for the peak-valley roundness parameters is by axial averaging, i.e., taking more than one trace. The least-squares method gets its stability by radial averaging, i.e., from one trace. Put simply, least squares is basically an integral method, whereas the other three methods are based on differentials and are therefore less reliable.
FIGURE 2.161 Reliability of circular reference systems.
FIGURE 2.162 Shaft and journal bearing.
Although the minimum zone method is more difficult to determine than the plug gauge and ring gauge methods, it leads to a more stable determination of the common zone between a shaft and bearing, as seen in Figure 2.162. Four points determine the clearance zone using the minimum zone method, whereas only two determine the clearance zone using the plug/ring gauge method. Note:
1. Energy interaction between two parts in relative rotation takes place at the common zone.
2. Common zone, plug/ring gauge: two-point interaction.
3. Common zone, minimum zone: four-point interaction.
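As an illustration of why the zonal centers are harder to compute than the least-squares center, the following minimal sketch (not from the original text; the function name, grid search, and test profile are invented for illustration) finds the minimum zone center by brute force, using the small-eccentricity limaçon correction for the radial deviations:

```python
import numpy as np

def minimum_zone_center(theta, r, span=0.1, steps=101):
    """Brute-force minimum zone center (MZC) search.

    theta, r : sampled roundness profile (angles in radians).
    span     : half-width of the square search region about the chart center.
    Valid for small decentering, where the radial deviation seen from a
    trial center (x, y) is approximately r - x*cos(theta) - y*sin(theta).
    """
    best = (0.0, 0.0, np.inf)
    for x in np.linspace(-span, span, steps):
        for y in np.linspace(-span, span, steps):
            dev = r - x * np.cos(theta) - y * np.sin(theta)
            width = dev.max() - dev.min()      # radial zone width
            if width < best[2]:
                best = (x, y, width)
    return best                                # (x, y, zone width)

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r = 10.0 + 0.02 * np.cos(3 * theta) + 0.01 * np.cos(theta - 0.4)
print(minimum_zone_center(theta, r))
```

Because the zone width depends only on the extreme points, small changes in a single peak or valley can move the center, which is the instability referred to above; the least-squares center, by contrast, averages over all points.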
2.4.4.5.2 Graphical Verification
Conceptually, the ring gauge, plug gauge, and minimum zone methods are graphical in origin, the inspector working on the chart with a pair of compasses and a rule. These methods therefore provide simple practical ways of assessing the out-of-roundness. They suffer to some extent from two disadvantages. First, the centers so derived are dependent on a few isolated and possibly freak peaks or valleys, which can make the measurement of concentricity somewhat risky. Second, a certain error is bound to creep in if the graph is eccentric, because the graphical method depends on circles being drawn on the chart whether or not the part is centered. It has already been shown that the shape of a perfectly round part as revealed by an instrument will look like a limaçon when decentered, and therefore ideally the inspector should use compasses that draw limaçons [90]. Nowadays, although the graphical methods are rarely used, they should be understood for verification purposes: any errors in graphical methods are usually insignificant when compared with errors that can result from software problems. Graphical methods are a valuable means of assessment, providing an independent way of testing suspect computer results. (The graphical method for the least-squares technique is given in the section under partial arcs.) Some potential problems of the zonal methods are given below. The form of graphical error can easily be seen, especially on instruments which use large graphs and small central regions. For example, consider the ring gauge method (minimum circumscribed circle). This will be determined by the largest chord in the body, which is obviously normal to the vector angle of eccentricity. Thus, from Figure 2.163, it will be c = 2r sin θ, where r = t + E cos θ and θ1 is the angle at which the chord is a maximum, that is, where r sin θ is a maximum. Thus sin θ(t + E cos θ) is a maximum, from which
dc/dθ = 2E cos²θ1 + t cos θ1 − E = 0.    (2.396)

Hence

θ1 = cos⁻¹{[−t + (t² + 8E²)^1/2]/4E}.    (2.397)

FIGURE 2.163 Use of large polar chart.
This will correspond to a chord through a point O″ at a distance (t + E cos θ1) cos θ1 from O. Notice that this does not correspond to the chord through the apparent center of the workpiece at O′, a distance of E from O. The angle θ2 corresponding to the chord through O′ is such that (t + E cos θ2) cos θ2 = E, from which E cos²θ2 + t cos θ2 − E = 0, giving

θ2 = cos⁻¹{[−t + (t² + 4E²)^1/2]/2E}    (2.398)
from which it can be seen that θ2 is always less than θ1. The maximum chord intercepts the diameter through O and O′ at a distance of less than E (because the terms under the root are substantially the same). The maximum chord value is given by 2(t + E cos θ1) sin θ1, using the same nomenclature as before. The apparent error in the workpiece measured on the chart (due to using compasses on the badly centered part) will be seen from Figure 2.163 to be d, where d is given by
d = (t + E cos θ1)(sin θ1 − cos θ1) + E − t,    (2.399)
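The distortion can be checked numerically. The short sketch below (an illustration, not part of the original text; the function name is invented) evaluates Equations 2.397 through 2.399 for a typical chart radius and eccentricity:

```python
import numpy as np

def chord_distortion(t, E):
    """Evaluate Equations 2.397-2.399 for chart radius t and eccentricity E."""
    th1 = np.arccos((-t + np.sqrt(t**2 + 8 * E**2)) / (4 * E))  # Eq. 2.397
    th2 = np.arccos((-t + np.sqrt(t**2 + 4 * E**2)) / (2 * E))  # Eq. 2.398
    d = (t + E * np.cos(th1)) * (np.sin(th1) - np.cos(th1)) + E - t  # Eq. 2.399
    return th1, th2, d

th1, th2, d = chord_distortion(t=40.0, E=5.0)  # a 40 mm graph, 5 mm off-center
print(np.degrees(th1), np.degrees(th2), d)     # apparent error d in chart units
```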
which has to be divided by the magnification to get the apparent error on the workpiece itself. It should be emphasized that this effect is rarely of significance. It is not important at all when computing methods are used, but as long as graphical verification of results is carried out, and as long as manufacturers use larger polar graph paper with smaller centers, there is a danger that this problem will occur. This apparent out-of-roundness value of the graph on the chart would be measured even if the workpiece and the spindle of the instrument were absolutely perfect. Ways of computing these centers and zones without regard to this inherent error obtained when evaluating graphically will be discussed in Chapter 3. This distortion can also be troublesome when examining charts for lobing, and in some circumstances can cause confusion in interpreting cylinders having a tilted axis relative to the instrument datum.

2.4.4.5.3 Effect of Imperfect Centering on the Minimum Zone Method
On a perfectly circular eccentric part the form revealed by a roundness instrument is as shown in Figure 2.164. It can be shown that the center of the minimum zone coincides with that of the least-squares circle and plug gauge for this case. This is because moving the center of the circle from the PGC at O′ inward toward O reduces the plug gauge radius faster than it reduces the ring gauge radius.

FIGURE 2.164 Effect of centering on minimum zone.

To find the apparent measurement of out-of-roundness it is necessary to find the maximum radius from O′, that is dmax, which is obtained when
d = (t² + E² sin²θ)^1/2    (2.400)

is a maximum, that is, when

E² sin θ cos θ/(t² + E² sin²θ)^1/2 = 0

from which θ = π/2 and dmax = (t² + E²)^1/2. Hence the error ε is

ε = dmax − t = (t² + E²)^1/2 − t ≈ E²/2t.    (2.401)

FIGURE 2.165 Actual extent of movement of workpiece relative to its size.

FIGURE 2.166 Effect of angular distortion 1.
This is what would be measured by the operator. For E = 10 mm and t = 30 mm the error could be 1.7 mm on the chart, a considerable percentage!

2.4.4.5.4 Effect of Angular Distortion
From Figure 2.164 the relationship between α as measured from O and θ as measured from O′ is

α = tan⁻¹{(t + E cos θ) sin θ/[(t + E cos θ) cos θ − E]}    (2.402)
where for convenience the eccentricity has been taken to be in the x direction. As previously mentioned, the angles should always be measured through the center of the chart, irrespective of the eccentricity, for normal purposes. This is easily seen by reference to Figure 2.165. Merely magnifying the deviations from the centered part and transcribing them to a chart as in Figure 2.166 does not
FIGURE 2.167 Effect of angular distortion 2.
significantly change the angular relationships between the arrows as marked off. However, if the angles between the arrows are measured relative to the apparent center O′ (which an operator may think is the true center), considerable distortion of the results occurs. Figure 2.167 shows what actually happens to angles on the chart when the workpiece is
decentered. Instead of the arrows being separated by 60° they are seemingly at α1 and α2, which are gross distortions. Although this angular invariance of the centered and eccentric charts is somewhat astonishing, it is not usual for it to cause confusion except in special cases. One such case is measuring any type of rate of change or curvature of the profile from the chart. A slope feature subtending an angle δθ in the centered case of Figure 2.168a will still subtend it in the eccentric case of Figure 2.168b. The feature will appear to enlarge on the one side and shrink on the other. In fact, however, both still subtend the same angle δθ about the chart center. But measuring any feature from the apparent center at O′ will give considerable angular errors, which in turn give corresponding errors in slope, because slope is a function of θ. Differentiating the equation for angle shows that δα1 is (t + E)/t times as big as δθ and δα2 is (t − E)/t times δθ. For E = 15 mm and t = 30 mm the subtended angle of a feature as seen from the apparent center O′ can differ by a factor of 3:1 depending on where it is relative to the direction of eccentricity! Hence if the slope feature has a local change of radius δr, the value of δr/δα will vary by 3:1 depending on where it is. For E = 10 mm the variation is 2:1. The extent of this possible variation makes quality control very difficult. The answer will depend on the purely chance orientation of the slope feature relative to the direction of eccentricity. Measuring such a parameter from O, the chart center, can also be difficult in the highly eccentric case because dρ/dθ = E sin θ, which has a maximum value of E length units per radian in a direction of θ = π/2, that is, perpendicular to the direction of eccentricity. More affected still are measurements of curvature, because the dθ to dα distortions are squared. The only safe way to measure such parameters is by removing the eccentricity by computation or by centering the workpiece accurately.
FIGURE 2.168 Effect of angular distortion on slope measurement.

2.4.4.5.5 Effect of Irregular Asperities
Amongst the most obvious difficulties associated with the application of zonal methods is the possible distortion of the center position due to irregular peaks on the circumference. An example of this can be seen with reference to the measurement of an ellipse using the plug gauge method. Apart from the effects of eccentricity, polar distortion especially affects the drawing of the inscribed circle. Consider, for example, the representation of an ellipse at 200× magnification in Figure 2.169a. All possible centers coincide and there is no ambiguity. But if the magnification is increased to 1000×, the representation acquires the shape shown in Figure 2.169b. Although the MZ, RG (and LS) centers remain the center of the figure, two centers can now be found for the circle representing the largest plug gauge. Thus, while the MZ, RG, and LS evaluations are the same for both magnifications, the plug gauge value, if based literally on the maximum inscribed circle, is erroneously greater for the higher magnification. In the former, plotted on as large a radius as the paper permits, the ambiguity of centers is just avoided, but on a small radius at the same magnification the ambiguity reappears. It is therefore important, when seeking the PGC, to keep the zone width small compared with its radius on the graph, that is, to plot as far out on the paper as possible and to use the lowest magnification that will provide sufficient reading accuracy. This is a practical example of how zonal methods based upon graphical assessment can give misleading results if applied literally. Again, it highlights the importance of knowing the nature of the signal.

FIGURE 2.169 Problems in plug gauge assessment.

Because the best-fit circle is only a special case of a best-fit limited arc, the general derivation will be given, from which the complete circle can be obtained. The only assumption made in the derivation is that, for practical situations, what is required is a best-fit partial limaçon rather than a circle, simply because of the very nature of the instrumentation. How this all fits in with the graphical approach will be seen in the next section.

2.4.4.5.6 Partial Arcs
Keeping to existing convention, let the raw data from the transducer be r(θ), r having different values as θ changes due to the out-of-roundness or roughness of the part. Remembering that the reference line to the data from the transducer before display on the chart has the equation

M(R − L) + Me cos(θ − φ)    (2.403)

and letting

M(R − L) = S   Me = E    (2.404)

E sin φ = y   and   E cos φ = x,    (2.405)
then the limaçon form for the reference line between θ1 and θ2 is

ρ(θ) − S = R + x cos θ + y sin θ    (2.406)
and in order to get the best-fit limaçon having parameters R, x, y to the raw data r(θ), the following integral has to be minimized. (Here, for simplicity, the argument θ of the r values will be omitted.) The criterion for best fit will be least squares. Thus the integral I, where

I = ∫_{θ1}^{θ2} [r − (R + x cos θ + y sin θ)]² dθ,    (2.407)
has to be minimized with respect to R, x, and y, respectively. This implies that

∂I/∂R|_{x,y} = 0   ∂I/∂x|_{R,y} = 0   ∂I/∂y|_{R,x} = 0.    (2.408)
Solving these equations gives the desired values for R, x, and y over a limited arc θ1 to θ2. Hence the general solution for a least-squares limaçon over a partial arc is given by
x = [A(∫_{θ1}^{θ2} r cos θ dθ − B ∫_{θ1}^{θ2} r dθ) + C(∫_{θ1}^{θ2} r sin θ dθ − D ∫_{θ1}^{θ2} r dθ)]/E

y = [F(∫_{θ1}^{θ2} r sin θ dθ − D ∫_{θ1}^{θ2} r dθ) + C(∫_{θ1}^{θ2} r cos θ dθ − B ∫_{θ1}^{θ2} r dθ)]/E

R = (1/(θ2 − θ1)) ∫_{θ1}^{θ2} r dθ − (x/(θ2 − θ1)) ∫_{θ1}^{θ2} cos θ dθ − (y/(θ2 − θ1)) ∫_{θ1}^{θ2} sin θ dθ.    (2.409)

In this equation the constants A, B, C, D, E, and F are as follows:

A = ∫_{θ1}^{θ2} sin²θ dθ − (1/(θ2 − θ1)) (∫_{θ1}^{θ2} sin θ dθ)²
B = (1/(θ2 − θ1))(sin θ2 − sin θ1)
C = (1/(θ2 − θ1)) ∫_{θ1}^{θ2} cos θ dθ ∫_{θ1}^{θ2} sin θ dθ − ∫_{θ1}^{θ2} sin θ cos θ dθ
D = (1/(θ2 − θ1))(cos θ1 − cos θ2)
E = AF − C²
F = ∫_{θ1}^{θ2} cos²θ dθ − (1/(θ2 − θ1)) (∫_{θ1}^{θ2} cos θ dθ)².    (2.410)

In the special case of full arc (θ2 − θ1 = 2π), Equation 2.409 reduces to

R = (1/2π) ∫_0^{2π} r dθ   x = (1/π) ∫_0^{2π} r cos θ dθ   y = (1/π) ∫_0^{2π} r sin θ dθ    (2.411)

which are the first Fourier coefficients. In Equation 2.409 the only unknowns for a given θ1 and θ2 are the three integrals

∫_{θ1}^{θ2} r dθ   ∫_{θ1}^{θ2} r cos θ dθ   ∫_{θ1}^{θ2} r sin θ dθ    (2.412)

which can be made immediately available from the instrument. The best-fit circle was first obtained by R. C. Spragg (see BS 2370) [164], the partial case by Whitehouse [163]. For the instrument display the constant polar term S is added spatially about O to both the raw data and the computed reference line in exactly the same way, so that the calculation is not affected; that is, the term cancels out from both parts within the integral of Equation 2.407. Note that the extent to which the assumption made is valid depends on the ratio e/R which, for most practical cases, is of the order of 10⁻³. Two practical examples of how the lines derived from these equations look on difficult surfaces are shown in Figure 2.170.

FIGURE 2.170 Best-fit reference lines for partial circumference.
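The partial-arc solution lends itself directly to numerical evaluation. The following sketch (an illustrative implementation, not from the original text; names are invented, and uniform angular sampling is assumed so the integrals can be replaced by sums) computes x, y, R from Equations 2.409 and 2.410, and reduces to the Fourier forms of Equation 2.411 on a full circle:

```python
import numpy as np

def lsq_limacon_partial(theta, r):
    """Least-squares limaçon over a (partial) arc, Equations 2.409-2.410.
    theta: uniformly spaced angles in radians; r: measured radii."""
    d = theta[1] - theta[0]
    span = len(theta) * d
    I = lambda f: np.sum(f) * d                 # quadrature for the arc integrals
    Sc, Ss = I(np.cos(theta)), I(np.sin(theta))
    B, D = Sc / span, Ss / span                 # (1/span)∫cos dθ, (1/span)∫sin dθ
    A = I(np.sin(theta)**2) - Ss**2 / span
    F = I(np.cos(theta)**2) - Sc**2 / span
    C = Sc * Ss / span - I(np.sin(theta) * np.cos(theta))
    E = A * F - C**2
    P = I(r * np.cos(theta)) - B * I(r)
    Q = I(r * np.sin(theta)) - D * I(r)
    x = (A * P + C * Q) / E
    y = (F * Q + C * P) / E
    R = (I(r) - x * Sc - y * Ss) / span
    return x, y, R

# Full-circle check: recovers the first Fourier coefficients (Eq. 2.411)
theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
r = 20 + 0.05 * np.cos(theta - 0.7) + 0.01 * np.cos(5 * theta)
print(lsq_limacon_partial(theta, r))   # x ≈ 0.05 cos 0.7, y ≈ 0.05 sin 0.7, R ≈ 20
```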
Because in practice the ratio e/R is so small, all angular relationships have to be measured relative to the origin of the chart and not the center of the part as seen on the graph. Also, because the initial and final angles of a partial arc or circumference will be known from the instrument's angular reference, a considerable reduction in computation can be achieved simply by ensuring that the limits of integration are symmetrical about the center of the arc. This change can be effected by a polar transformation of the angular datum on the instrument by

(θ1 + θ2)/2.    (2.413)

Thus, if θ3 = (θ2 − θ1)/2, Equation 2.409 becomes

x = [∫_{−θ3}^{θ3} r cos θ dθ − (sin θ3/θ3) ∫_{−θ3}^{θ3} r dθ][θ3 + sin 2θ3/2 − (1 − cos 2θ3)/θ3]⁻¹

y = [∫_{−θ3}^{θ3} r sin θ dθ][θ3 − sin 2θ3/2]⁻¹    (2.414)

R = (1/2θ3)[∫_{−θ3}^{θ3} r dθ − 2x sin θ3].

FIGURE 2.171 Calculation of best-fit least-squares circle.

Use of these equations reduces the amount of computation considerably at the expense of only a small increase in the constraints of operation. For measuring out-of-roundness around a complete circumference, Equation 2.415 is the important one. Although, as will be shown later, this is easy to instrument, it is more laborious than the other three methods of measurement to obtain graphically. The way proposed at present is based on the observation that any r cos θ value is an x measurement off the chart and any r sin θ value is a y measurement (see Figure 2.171). Thus, replacing the r cos θ and r sin θ values and taking discrete measurements around the profile graph rather than continuously, as will be the case using analog instrumentation, the parameters of the best-fit circle to the raw data, x and y being the coordinates of the center and R the mean radius, will be given by

x = (2/N) Σ_{i=1}^{N} xi   y = (2/N) Σ_{i=1}^{N} yi   R = (1/N) Σ_{i=1}^{N} ri.    (2.415)

Equation 2.409 gives the best-fit conditions for a partial arc which can enclose any amount of the full circle. Often it is necessary to find the unique best-fit center of a concentric pair of circular arcs. This involves minimizing the total sum of squares of the deviations. Arcs 1 and 2 have the sums of squares S1 and S2:

S1 = Σ_{i=1}^{M} (ri − R1 − x̄ cos θi − ȳ sin θi)²
S2 = Σ_{j=1}^{N} (rj − R2 − x̄ cos θj − ȳ sin θj)².    (2.416)

Minimizing S1 + S2 and differentiating these polar equations with respect to x̄, ȳ, R1, and R2 gives

[ Σcos²θi + Σcos²θj           Σsinθi cosθi + Σsinθj cosθj   Σcosθi   Σcosθj ] [ x̄  ]   [ Σri cosθi + Σrj cosθj ]
[ Σsinθi cosθi + Σsinθj cosθj Σsin²θi + Σsin²θj             Σsinθi   Σsinθj ] [ ȳ  ] = [ Σri sinθi + Σrj sinθj ]
[ Σcosθi                      Σsinθi                        M        0      ] [ R1 ]   [ Σri ]
[ Σcosθj                      Σsinθj                        0        N      ] [ R2 ]   [ Σrj ]
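Equation 2.417 is linear and can be solved directly. A minimal numerical sketch (illustrative only, not the author's program; names are invented) is:

```python
import numpy as np

def fit_concentric_arcs(t1, r1, t2, r2):
    """Common center (x, y) and radii R1, R2 of two concentric arcs by
    least squares: a direct numerical solution of Equation 2.417."""
    c1, s1, c2, s2 = np.cos(t1), np.sin(t1), np.cos(t2), np.sin(t2)
    M, N = len(t1), len(t2)
    A = np.array([
        [np.sum(c1**2) + np.sum(c2**2), np.sum(s1*c1) + np.sum(s2*c2), np.sum(c1), np.sum(c2)],
        [np.sum(s1*c1) + np.sum(s2*c2), np.sum(s1**2) + np.sum(s2**2), np.sum(s1), np.sum(s2)],
        [np.sum(c1), np.sum(s1), M, 0.0],
        [np.sum(c2), np.sum(s2), 0.0, N]])
    b = np.array([np.sum(r1*c1) + np.sum(r2*c2),
                  np.sum(r1*s1) + np.sum(r2*s2),
                  np.sum(r1), np.sum(r2)])
    x, y, R1, R2 = np.linalg.solve(A, b)
    return x, y, R1, R2
```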
Obviously the key to solving these sorts of problems is how to make the equations linear enough for simple solution. This is usually done automatically by the choice of instrument used to get the data. The fact that a roundness instrument has been used means that the center a, b is not far from the axis of rotation. If a coordinate measuring machine (CMM) had been used this would not be the case unless the center positions were carefully arranged.
These equations are useful when data is available in the polar form. But when data is available in Cartesian form, the other criterion, namely minimizing the deviation from the property of the conic, is useful, as described below. In this case the equations of the arcs are written as

x² + y² − ux − vy − D1 = 0
x² + y² − ux − vy − D2 = 0    (2.418)
and the total sum of the squares of the deviations from the property of the arc/conic is defined as

Es = Σ(xi² + yi² − uxi − vyi − D1)² + Σ(xj² + yj² − uxj − vyj − D2)².    (2.419)

Differentiating partially with respect to u, v, D1, and D2, the equation in matrix form for the solution of u, v, D1, and D2 is

[ Σxi² + Σxj²    Σxiyi + Σxjyj   Σxi   Σxj ] [ u  ]   [ Σ(xi² + yi²)xi + Σ(xj² + yj²)xj ]
[ Σxiyi + Σxjyj  Σyi² + Σyj²     Σyi   Σyj ] [ v  ] = [ Σ(xi² + yi²)yi + Σ(xj² + yj²)yj ]
[ Σxi            Σyi             M     0   ] [ D1 ]   [ Σxi² + Σyi² ]
[ Σxj            Σyj             0     N   ] [ D2 ]   [ Σxj² + Σyj² ]    (2.420)

Then

x̄ = u/2   ȳ = v/2    (2.421)

R1 = [D1 + (u² + v²)/4]^1/2   R2 = [D2 + (u² + v²)/4]^1/2.    (2.422)

2.4.4.6 Roundness Filtering and Other Topics
Filtering of the roundness profile: the following are the longwave and shortwave transmission characteristics from ISO/TS 12181-2.

2.4.4.6.1 Longwave-Pass Filter
X is undulations per revolution and Y is the transmission, given by

a1/a0 = exp[−π(αf/fc)²]

where the filter is a phase-corrected filter according to ISO 11562, α = (ln 2/π)^1/2 = 0.4697, a1 and a0 are the sinusoidal amplitudes after and before filtering, and f and fc are the frequencies (in UPR) of the roundness deviation being filtered and the filter cut-off, respectively. Cut-off values are 15 UPR, 50 UPR, 150 UPR, 500 UPR, and 1500 UPR (undulations per revolution); see Figure 2.172.

FIGURE 2.172 Roundness longwave-pass filter. (From ISO/TS 12181-2, 2003. Geometric product specification (GPS) roundness, specification operators Part 4.2.1 transmission band for longwave passband filter. With permission.)

2.4.4.6.2 Shortwave-Pass Filter

a2/a0 = 1 − exp[−π(αf/fc)²]

a2 and a0 are the sinusoidal amplitudes after and before filtering; f and fc are the frequencies (in UPR) of the roundness deviations being filtered and the filter cut-off, respectively.
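Because the roundness signal is periodic, the Gaussian characteristic is most easily applied harmonic by harmonic. The sketch below (illustrative only; it implements just the amplitude characteristics above via the FFT and is not a substitute for a full ISO-conformant implementation) shows both the longwave-pass and shortwave-pass forms:

```python
import numpy as np

def gaussian_roundness_filter(r, fc, longwave=True):
    """Gaussian filter on a closed roundness profile (ISO/TS 12181-2 style).
    r : equally spaced radial deviations over one revolution.
    fc: cut-off in undulations per revolution (UPR), e.g., 15, 50, 150."""
    alpha = np.sqrt(np.log(2) / np.pi)             # = 0.4697
    spec = np.fft.rfft(r)
    n = np.arange(spec.size)                       # harmonic number = UPR
    a = np.exp(-np.pi * (alpha * n / fc) ** 2)     # longwave transmission a1/a0
    spec *= a if longwave else (1.0 - a)           # shortwave: a2/a0 = 1 - a1/a0
    return np.fft.irfft(spec, n=r.size)

# Example: retain undulations up to about 50 UPR of a 3600-point profile
theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
r = 0.01 * np.cos(3 * theta) + 0.001 * np.random.randn(theta.size)
smooth = gaussian_roundness_filter(r, fc=50, longwave=True)
```

Applying the gain directly to the Fourier coefficients makes the filter zero-phase, which is consistent with the phase-corrected requirement of ISO 11562.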
Cut-off values are 15 UPR, 50 UPR, and 150 UPR (undulations per revolution). The cut-off cannot be at zero revolutions: remember that the transmission characteristics start at 1 UPR, not zero, because of the nature of the signal from the roundness instrument (see Figure 2.173).

FIGURE 2.173 Roundness shortwave-pass filter. (From ISO/TS 12181-2, 2003. Geometric product specification (GPS) roundness, specification operators Part 4.2.2. Transmission characteristic for shortwave pass filter. With permission.)

2.4.4.6.3 Lobing
Because a signal taken from a roundness instrument is periodic, it is straightforward to break it down into a Fourier series whose fundamental component corresponds to one revolution of the instrument. This analysis has some useful features because, whereas all the methods of numerical assessment discussed so far have been in terms of amplitude, only the Fourier series gives an opportunity to introduce a frequency factor into the assessment. Thus, using the same terminology as before, the raw data from the instrument ρ(θ) may be expressed as

ρ(θ) = R + Σ_{n=1}^{∞} Cn cos(nθ − φn) = R + Σ_{n=1}^{∞} (an cos nθ + bn sin nθ).    (2.423)

In practice the series of harmonics is not taken to infinity but to some number M deemed sufficient to represent the profile of the workpiece adequately. In Equation 2.423, Cn = (an² + bn²)^1/2 represents the amplitude of the nth harmonic and φn = tan⁻¹(bn/an) the phase, that is, the orientation of the harmonic in the profile relative to the angle on the instrument taken as datum. The coefficients are obtained from the profile r(θ) by the following expressions:

R = (1/2π) ∫_{−π}^{π} r(θ) dθ   or   (1/N) Σ_{i=1}^{N} r(θi)
an = (1/π) ∫_{−π}^{π} r(θ) cos nθ dθ   or   (2/N) Σ_{i=1}^{N} r(θi) cos nθi    (2.424)
bn = (1/π) ∫_{−π}^{π} r(θ) sin nθ dθ   or   (2/N) Σ_{i=1}^{N} r(θi) sin nθi.

The coefficients described in the equations above can, to some extent, be given a mechanical interpretation which is often useful in visualizing the significance of such an analysis. Breaking the roundness signal down into its Fourier components is useful because it enables a typology of the signal to be formulated. This is shown in Table 2.20.

TABLE 2.20
Roundness Signal Typology

Fourier
Coefficient   Cause                                                    Effect
0             (a) Dimension of part (b) Instrument set-up              Tolerance/fit
1             Instrument set-up                                        Eccentricity on graph
2             (a) Ovality of part (b) Instrument set-up                Component tilt
3             (a) Trilobe on part (b) Machine tool set-up              Distortion of component due to jaws of chuck clamping
4–5           (a) Unequal angle—genuine (b) Equal angle—machine tool   Distortion due to clamping
5–20          Machine tool stiffness                                   Out-of-roundness
20–50         (a) Machine tool stiffness (b) Regenerative chatter      Out-of-roundness, causes vibration
50–1000       Manufacturing process signal                             Out-of-roundness, causes noise
From Table 2.20 it can be seen that there are a number of influences that produce the signal which is assessed for the out-of-roundness of the workpiece. The point to note is that these influences are not related directly to each other: some are genuine, some are not. It is important to know the difference. Although useful, there are problems associated with placing such mechanical interpretations on the coefficients, because they can sometimes be misleading. Consider the two profiles of Figure 2.174: an analysis of these waveforms would show that both have a term C1, and yet an engineer would only regard Figure 2.174a as being eccentric. In fact the analysis of the profile in Figure 2.174b shows it to contain harmonics all through the spectral range of n = 1 to infinity, so that it could also be thought of as being elliptical, trilobe, etc. The general term Cn is given by

Cn = (2A/nπ) sin(nα/2)    (2.425)

so that

C1 = (2A/π) sin(α/2).    (2.426)

FIGURE 2.174 Problems with the harmonic approach.

Fortunately examples like this are not very often encountered and so confusion is not likely to arise. One way of utilizing the spectrum in the numerical assessment of roundness has been suggested by Spragg and Whitehouse [13], who worked out an average number of undulations per revolution (Na) based upon finding the center of gravity of the spectrum, in keeping with the random process methods of roughness. Na is given by

Na = [Σ_{n=m1}^{m2} n(Ar sin nθ + Br cos nθ)]/[Σ_{n=m1}^{m2} (ar cos nθ + br sin nθ)]    (2.427)

where no account is taken of the sign of the coefficients. The limits m1 and m2 refer to the bandwidth.

The advantage of this method over simply counting the number of undulations around the circumference is that there are occasions where it is difficult, if not impossible, to make a count. In particular, this situation arises if the profile has a large random element or a large number of undulations. Consider Figure 2.175a. It is easy to count the undulations on a four-lobed figure, but for Figure 2.175b it is much more difficult. The average wavelength values are shown for comparison. Another similar idea has been proposed by Chien [165], who defines effectively the root mean square energy. Thus N, the Nth-order wavenumber equivalent in energy to all the rest of the spectrum, is defined as

N = [Σ_{n=1}^{m} n²(an² + bn²)/Σ_{n=1}^{m} (an² + bn²)]^1/2.    (2.428)
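A numerical sketch of this harmonic analysis (illustrative; the function name and test profile are invented) computing R, Cn, and φn of Equations 2.423 and 2.424 from sampled data is:

```python
import numpy as np

def roundness_harmonics(r, n_max=50):
    """Fourier analysis of a roundness signal (Equations 2.423-2.424).
    Returns R and arrays C[n], phi[n] for n = 1..n_max."""
    N = r.size
    spec = np.fft.rfft(r) / N
    R = spec[0].real                        # mean radius term
    a = 2 * spec[1:n_max + 1].real          # an coefficients
    b = -2 * spec[1:n_max + 1].imag         # bn coefficients
    C = np.hypot(a, b)                      # Cn = (an² + bn²)^1/2
    phi = np.arctan2(b, a)                  # φn = tan⁻¹(bn/an)
    return R, C, phi

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r = 5 + 0.004 * np.cos(3 * theta - 0.5) + 0.001 * np.cos(15 * theta)
R, C, phi = roundness_harmonics(r)
print(R, C[2], C[14])   # ≈ 5, 0.004 (n = 3), 0.001 (n = 15)
```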
2.4.4.7 Roundness Assessment Using Intrinsic Datum
The vee method has been mentioned as a screening technique for measuring round workpieces, more or less as an approximation to the radial method. The question arises as to what is really needed in an instrument. Is it necessary to have recourse to the sophisticated radial methods described above? This question breaks down into whether it is possible to measure roundness radially without an accurate datum, and this in turn poses the question as to whether both can be done without introducing serious distortions of the signal which need to be compensated for. The answer is that it is possible to arrange that a certain vee-probe combination can remove the need for an accurate axis of rotation. It is also possible to use a multiplicity of probes, as in the case of straightness. Consider a truly circular part and its measurement. Errors in the measurement of such a part would result from spindle uncertainty. The probe configuration has to be such that any random movement of the part is not seen. This is equivalent to making
FIGURE 2.175 Average wavelength for roundness: (a) average height 0.4 μm, average UPR 4 and (b) average height 1 μm, average UPR 20.
sure that a movement of the workpiece as a whole during the measurement cycle is not detected and that it does not upset the signal. Suppose that there are two probes at angles of −α and β to a datum angle. If the part, considered to be a simple circle, is moved an amount e at an angle δ, the two probes will suffer displacements of e cos(δ + α) and e cos(δ − β), assuming that e/R ≪ 1. Should there be any confusion, the vee-block method [97] is not the same as the multiprobe method: the vee-block method is essentially a low-order lobe-measuring technique. The signal has amplitude and phase characteristics of A(n) and φ(n):

A(n) = [(1 − a cos nα − b cos nβ)² + (b sin nβ − a sin nα)²]^1/2
φ(n) = tan⁻¹[(b sin nβ − a sin nα)/(1 − a cos nα − b cos nβ)].    (2.435)

The real significance of the use of variable sensitivity in the method will become clear in the case of variable error suppression. It is interesting to note from Equation 2.434 that this is the general case for three points. If the major criterion is simply to get rid of the first harmonic caused by spindle movements, one probe and two points of contact at an angle of α + β will in fact suffice to satisfy Equations 2.434 and 2.435, that is, a vee method, for example, on two skids (Figure 2.177). This is simpler than the three-probe method and does not need balanced probes.
FIGURE 2.177 Three-point roundness with two skids.
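For either probe configuration the measured spectrum must be corrected by the weighting of Equation 2.435. A small sketch (illustrative only; it assumes the sensitivity coefficients a and b are known) is:

```python
import numpy as np

def multiprobe_response(n, a, b, alpha, beta):
    """Harmonic amplitude A(n) and phase phi(n) of the probe combination,
    per Equation 2.435; a, b are the probe sensitivity coefficients."""
    re = 1 - a * np.cos(n * alpha) - b * np.cos(n * beta)
    im = b * np.sin(n * beta) - a * np.sin(n * alpha)
    return np.hypot(re, im), np.arctan2(im, re)

# Recovering true Fourier coefficients: divide each measured harmonic by
# A(n) and remove the phase phi(n); harmonics with A(n) ≈ 0 are suppressed
# and cannot be recovered, which is the limitation discussed in the text.
```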
However, it does not have the same flexibility as the three-probe method, in which a and b can be adjusted with respect to each other while still maintaining Equation 2.434. This means that the Fourier coefficient compensation of Equation 2.435 can be made much more well behaved over a wide range of n, so reducing numerical problems. So far, using the multiple-probe technique, the out-of-roundness has been obtained by a synthesis of modified Fourier components. There are other ways. One such simple but novel method is to solve a set of linear simultaneous equations. In effect, what needs to be done in the two-orientation method, for example, is to look for only that part of the signal which has moved by the angle α. The signal which moves is identified as component out-of-roundness. The signal which remains stationary is attributed to instrument error. Solving for the spindle and component values (here called S) in terms of the matrix M and the input voltages V:
S = M⁻¹V.    (2.436)
This method still suffers from exactly the same frequency suppressions as the synthesis technique. As before, the effect can be reduced by making α small, but other problems then arise: differences between measurements become small, the readings become correlated, and the matrix inversion becomes susceptible to numerical noise. For any given α, however, it is possible to remove the need for a matrix inversion and at the same time improve the signal-to-noise ratio. This is accomplished by repeating the shift of the specimen until a full 360° has been completed, that is, having m separate but equi-angled orientations [150, 167]. The reduction of noise will be about m^−1/2 in RMS terms. Once this exercise has been carried out it is possible to isolate the component error from the instrument error simply by sorting the information. For example, to find the component signal it is necessary to pick one angle in the instrument reference plane and then to identify the changes in probe voltage at this angle for all the different orientations in sequence. To get instrument errors, a fixed angle on the workpiece has to be chosen instead. Before this sifting is carried out the data sets from each orientation have to be normalized. This means that the data has to be adjusted so that the eccentricity and radius are always the same. These are the two Fourier coefficients which cannot be guaranteed to be the same from one orientation to the next, because they correspond to setting-up errors and do not relate to instrument datum or component errors. Figure 2.178 shows a typical result in which a magnification of one million has been obtained using this method. The figure illustrates a plot of the systematic error in a typical spindle. Providing that these errors do not change in time, they can be stored and offset against any subsequent data runs, thereby enabling very high magnifications to be obtained. This method has the advantage over the simple reversal method that axial errors can be determined as well as radial.

FIGURE 2.178 Systematic error determination.
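A minimal sketch of the multi-orientation separation just described (illustrative only; it assumes the m runs have already been normalized for eccentricity and radius, and that each orientation step corresponds to a whole number of samples) is:

```python
import numpy as np

def separate_errors(T):
    """Multi-step (m equi-angled orientations) error separation sketch.
    T[k] is the profile from orientation k, each with N samples per rev.
    Returns estimates of the component (part) error and the spindle error."""
    m, N = T.shape
    step = N // m                                    # samples per orientation shift
    # Re-register each run to the part's angular frame, then average:
    part = np.mean([np.roll(T[k], -k * step) for k in range(m)], axis=0)
    # Averaging in the instrument frame leaves the spindle (datum) error:
    spindle = np.mean(T, axis=0)
    return part, spindle
```

Harmonics that are integer multiples of m remain mixed between the two estimates, which is the frequency-suppression limitation mentioned above; noise reduces roughly as m^−1/2.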
The above treatment has dealt primarily with the nature of roundness as seen by an instrument. There are other significant aspects of the part not specifically concerned with roundness but with other metrological features of the component, such as concentricity, squareness, and curvature, and these can be evaluated and classified from data obtained with a roundness instrument. They can also confuse the roundness data. In what follows a number of these features will be identified and quantified. It will be shown that a multiplicity of ways of assessing the features all give slightly different forms depending on the nature of the assumptions made. This is particularly true of the measurement of curvature. These features are included here because of the pressure to make integrated measurements of the whole component in order to cut down setting-up and calibration time. The fact that, in general, they are a different type of signal from that of roundness obviously makes characterization more difficult.

2.4.4.8 Eccentricity and Concentricity
Eccentricity is simply a measurement of the difference in position between the center of rotation of the instrument and the geometric center of the workpiece. This is the term e referred to in the text and covered extensively in [158]. Concentricity represents the roundness equivalent of taper. Here the difference in the centers of circles taken from different parts of a component is the measure. Sometimes it is taken simply as 2 × eccentricity. In Figure 2.179 the distance e represents the lack of concentricity of the two circles in the same plane, that is, the eccentricity of one relative to the other. No mention is made of the difference in radius between the two circles. Such circles may be taken in different parts of a bore or shaft, or inside and outside cylinders, etc. Obtaining concentricity values instrumentally has been discussed elsewhere, but it is desirable at this stage to mention one or two points relevant to the future discussion. The addition of the reference circle greatly facilitates the measurement of errors of concentricity between two or more diameters. If the profile graphs are round and smooth the relationship can easily be determined by measuring the radial separation along the axis of maximum eccentricity (Figure 2.180). If, however, the graphs are of poor shape then it is a great advantage to have the least-squares circles which
FIGURE 2.179 Concentricity determination.
FIGURE 2.180 Eccentricity assessment: eccentricity = (M − N)/2 × 1/magnification, where M and N are in inches or millimeters.
FIGURE 2.181 Eccentricity assessment.
are automatically plotted on the graph as the basis for measurement (Figure 2.181). Remember that the center positions of such circles are defined by the first harmonic of the Fourier series. In the measurement of concentricity of cross-sections in separated planes it is first necessary to establish a reference axis aligned to the axis of rotation of the turntable. The relationship of all other cross-sections may then be compared with respect to this defined axis. The surfaces chosen to define the reference axis will depend largely on the configuration of the workpiece, but in most cases it can generally be established from either two cross-sections along
the workpiece, or from one cross-section and a shoulder or end face. If two cross-sections along the workpiece are chosen they should be cross-sections of functional surfaces (i.e., bearing surfaces), where good roundness and surface roughness quality may be expected. For the greatest possible accuracy in setting up the reference axis, the two surfaces should be as widely spaced as possible. If the shaft has two separated bearing surfaces which happen to be the only suitably finished surfaces from which to define the reference axis, and which in themselves are to be measured for concentricity, the procedure would be to define the axis in the most widely spaced cross-sections in the two surfaces and then to measure the relationship of the required intermediate cross-sections.
The cylinder illustrated in Figure 2.183a is shown to have a misalignment between the axis of the outside surface and the axis of the bore. To determine the amount of misalignment it is necessary to define a reference axis from the outside surface and align this axis to the axis of rotation of the turntable in such a way that the profile graphs from surfaces at A and B are concentric. Misalignment of the bore axis may then be measured by transferring the stylus to the bore and taking graphs at C and D; movement of the pick-up position along the component does not in any way affect the alignment of the component relative to the selected axis.

2.4.4.8.1 Errors due to Probe Alignment in Concentricity
Spragg has evaluated the errors that can arise when two probes are used to evaluate concentricity [168]. Measurements can sometimes be speeded up by the use of two probes. However, care must be used. If the perfect part shown is eccentric, the differential signal is (e²/2R)(1 − cos 2θ). There is an apparent ellipse present, so the eccentricity should be closely controlled if two probes are used (Figure 2.182). A further geometrical problem arises if the pick-up styli are not accurately aligned to the center of rotation (by g, say). Then the output is

(e²/2R)(1 − cos 2θ) + (2eg/R)(1 − cos θ).    (2.437)

This shows that the ellipse produced by having an eccentric part is modified to a kidney shape (limaçon) by a factor g. If the pick-ups are on the same side of the component, lack of alignment again gives an error. This time it is

(e/R)(e + g)(1 − cos θ).    (2.438)

FIGURE 2.182 Problems with probe positions: (a) pick-up styli in line with center of rotation, (b) pick-up styli not on center line, and (c) pick-ups on same side of part but not aligned to axis of rotation.

2.4.4.9 Squareness
Squareness can be measured with a roundness instrument, for example by measuring the circles at A and B to establish a reference axis. In squareness measurement and alignment two measurements other than the test measurement are needed to establish a reference axis. For eccentricity one measurement other than the test measurement is needed. In Figure 2.183b the squareness can be found relative to axis AB by measuring the eccentricity of the graph on the shoulder C and, knowing the position r at which it was made, the angular squareness e/r can be found. In all such measurements great care should be taken to ensure that the workpiece is not moved when the probe is being moved from one measuring position to another.

2.4.4.10 Curvature Assessment Measurement from Roundness Data
So far only departures from roundness have been considered, or immediate derivatives like eccentricity or squareness [171,172]. There is, however, a growing need to measure many features simultaneously with one set-up. This saves time and calibration and in general it reduces errors. One such multiple measurement is that of measuring the curvature of the component at the same time as the roundness, or sometimes in spite of it. There is one major problem in estimating curvature on a workpiece by least-squares techniques, and this is the fact that the normal equations are non-linear. Operational research
FIGURE 2.183 (a) Cylinder with misaligned bore and (b) determination of squareness.

FIGURE 2.184 Form and texture-integrated method.

techniques allow some leeway in this area. An ideal way out of this problem is to linearize the equations by making suitable approximations. In the case of the circle the limaçon approximation provides just this basic link, but it does rely on the fact that the e/R ratio has to be small, therefore allowing second-order terms to be ignored. Measuring circular workpieces with a roundness instrument is, in effect, linearizing the signal mechanically. When attempting to measure the curvature of workpieces having a restricted arc the same limitation arises. Measuring the workpiece using the formulae for instantaneous curvature can be very dangerous if there is noise present in the nature of form or roughness, because differentiation tends to enhance the noise at the expense of the long-wavelength arc making up the curve. Also, limaçon approximations rely on a natural period which cannot be assumed from the length of the wave, so two factors have to be considered from the metrology point of view: one is linearization (if possible) and the other is noise stability. Obviously there are a large number of ways in which both can be achieved. Two are given below to illustrate the different techniques.

From Figure 2.184, if the coordinate axis starts at O, then the equation of the circle is (yi is zi in the figure)

(a − xi)² + (yi + b)² = r².    (2.439)

It is required to find the best estimates of a, b, and r taken from a Cartesian-coordinate-measuring system x, y. It is unlikely that the limaçon form for partial arcs would be suitable because there is no easy way of estimating the fundamental period. The x values are not necessarily equally spaced and are subject to error, and the y values are likely to be contaminated with roughness information. Assume that the data has the correct units, that is, the magnification values removed. Let the observed quantities be X1, X2, ..., Y1, Y2, Y3, ... and the values of the true curve be x1, x2, ..., y1, y2, y3, .... Let the weighting of each of the data points be wx for the x and wy for the y (here assumed to be the same for all the x and y values but not necessarily the same as each other, i.e., wy ≠ wx). Let the residues between the observed and adjusted values be Ui and Vi. Thus Ui = Xi − xi and Vi = Yi − yi. An assumption is made that the observed values can be expressed in terms of the true values and the residuals by the first two terms of a Taylor series. This means that only first differentials are used. Thus

F(X1 ... Xn, Y1 ... Yn, a0, b0, r0) = F(x1, x2 ... xn, y1, y2 ... yn, a, b, r) + Σ_{i=1}^{N} Ui ∂F/∂xi + Σ_{i=1}^{N} Vi ∂F/∂yi + A ∂F/∂a + B ∂F/∂b + R ∂F/∂r    (2.440)

where F is some function, in this case dependent on that of a circle.

2.4.4.10.1 Least Squares
The nature of the minimization is such that

S = Σ(wx Ui² + wy Vi²)    (2.441)
is a minimum, that is, Σw(residues)² is a minimum with respect to the adjusted values. In Equation 2.441, if the values of x are not in error then the residuals Ui will be zero; this implies that the weight wx is ∞. Equation 2.441 will then involve the minimization of S = Σwy Vi² only, and vice versa with x.

2.4.4.10.2 Curve Fitting
Here not only are observations involved but also estimates a0, b0, r0 of the unknown parameters of the curve to be found. In general, for n points of data, D conditions or conditional equations can be used to help define the adjusted values of x, y and a, b, r. Thus, with D equations for D conditions,

F1(x1, x2 ... xn, y1 ... yn; a, b, r) = 0
F2( . ) = 0
...
FD( . ) = 0    (2.442)

F1, F2 are conditional functions and have to be chosen such that, when equated to zero, they force the conditions that have to be imposed on the adjusted coordinates. Note that all these would automatically be satisfied if the true values of x1, x2, y1, y2, a, b, r were available. The derivatives of these condition functions which are to be used are

∂F/∂xi   ∂F/∂yi   ∂F/∂a   ∂F/∂b   ∂F/∂r.    (2.443)

The function F satisfying the above is obtained from Equation 2.439, yielding

F = (a − x)² + (y + b)² − r² = 0.    (2.444)

Suitable estimates of the derivatives can be obtained by using the observed quantities and estimates of the parameters a0, b0, r0. Thus

∂F/∂xi = 2(xi − a0)   ∂F/∂yi = 2(yi + b0)   ∂F/∂a = 2(a0 − xi)   ∂F/∂b = 2(yi + b0)   ∂F/∂r = −2r0.    (2.445)

A point on the estimation of a0, b0, and r0 will be made later. The values of Equation 2.445 are required in the calculation. The nearest to the true number has to be obtained at every point by using the observed values and the estimated parameters. To force the adjusted data to lie on each point of the true curve it would be necessary for

F(xi, yi; a, b, r) = 0    (2.446)

to be true for all i. Here again estimates can be used:

F(Xi, Yi; a0, b0, r0) ≠ 0   for all i in general    (2.447)

but the values should be close to zero in each case if possible. Thus Equation 2.445 provides n equations. Using this Taylor assumption there are as many conditions as points. Defining

Li = (1/wx)(∂F/∂xi)² + (1/wy)(∂F/∂yi)²    (2.448)

which is called the reciprocal of the weight of the condition functions, when the observed values are used it is possible to write out the normal equations for curve fitting in matrix form. This method by Whitehouse [170] is based upon the Deming approach [169]:

[ L1  0   0  ...  0   ∂F1/∂a  ∂F1/∂b  ∂F1/∂r ] [ λ1 ]   [ F1 ]
[ 0   L2  0  ...  0   ∂F2/∂a  ∂F2/∂b  ∂F2/∂r ] [ λ2 ]   [ F2 ]
[ ...                                          ] [ ...] = [ ...]
[ 0   0   0  ...  Ln  ∂Fn/∂a  ∂Fn/∂b  ∂Fn/∂r ] [ λn ]   [ Fn ]
[ ∂F1/∂a  ∂F2/∂a  ...  ∂Fn/∂a   0   0   0    ] [ A  ]   [ 0  ]
[ ∂F1/∂b  ∂F2/∂b  ...  ∂Fn/∂b   0   0   0    ] [ B  ]   [ 0  ]
[ ∂F1/∂r  ∂F2/∂r  ...  ∂Fn/∂r   0   0   0    ] [ R  ]   [ 0  ]    (2.449)

where

S = λ1F1 + λ2F2 + λ3F3 etc., that is,

S = Σ λiFi    (2.450)

that is, the minimum squares sum S is expressible in terms of the Lagrange multipliers. It is not necessary to compute them as far as S is concerned, nor the residuals! Equation 2.449 can be immediately reduced to a set of equations containing only A, B, R by eliminating the Lagrangian multipliers λ1, λ2, etc. Thus

λi = (1/Li)[Fi − (∂Fi/∂a)A − (∂Fi/∂b)B − (∂Fi/∂r)R],   i = 1 ... n.    (2.451)
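The scheme of Equations 2.444 through 2.452 amounts to an iterative weighted fit. The sketch below (an illustration, not the author's program; x values are taken as exact, i.e., wx = ∞) performs the linearized update a = a0 − A, b = b0 − B, r = r0 − R a few times:

```python
import numpy as np

def wls_circle_fit(X, Y, a0, b0, r0, wy=1.0, iters=5):
    """Weighted least-squares circle fit by the linearized condition-function
    scheme, with F = (a - x)² + (y + b)² - r² (Equation 2.444)."""
    a, b, r = a0, b0, r0
    for _ in range(iters):
        F  = (a - X)**2 + (Y + b)**2 - r**2          # condition values
        Fa = 2 * (a - X)                             # ∂F/∂a (Eq. 2.445)
        Fb = 2 * (Y + b)                             # ∂F/∂b
        Fr = -2 * r * np.ones_like(X)                # ∂F/∂r
        L  = Fb**2 / wy + 1e-12                      # Eq. 2.448 with wx = ∞
        J  = np.column_stack([Fa, Fb, Fr])
        W  = 1.0 / L
        M  = (J * W[:, None]).T @ J                  # normal matrix of Eq. 2.452
        v  = (J * W[:, None]).T @ F
        A, B, Rres = np.linalg.solve(M, v)
        a, b, r = a - A, b - B, r - Rres             # a = a0 - A, etc.
    return a, b, r
```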
Note that a = a0 − A, b = b0 − B, r = r0 − R, where A, B, R are the residuals in the parameters. Thus the final equation to be solved is

[ Σ(∂Fi/∂a)²(1/Li)          Σ(∂Fi/∂a)(∂Fi/∂b)(1/Li)   Σ(∂Fi/∂a)(∂Fi/∂r)(1/Li) ] [ A ]   [ ΣFi(∂Fi/∂a)(1/Li) ]
[ Σ(∂Fi/∂b)(∂Fi/∂a)(1/Li)   Σ(∂Fi/∂b)²(1/Li)          Σ(∂Fi/∂b)(∂Fi/∂r)(1/Li) ] [ B ] = [ ΣFi(∂Fi/∂b)(1/Li) ]
[ Σ(∂Fi/∂r)(∂Fi/∂a)(1/Li)   Σ(∂Fi/∂r)(∂Fi/∂b)(1/Li)   Σ(∂Fi/∂r)²(1/Li)        ] [ R ]   [ ΣFi(∂Fi/∂r)(1/Li) ]    (2.452)

Having found A, B, R, the true parameters a, b, r can then be found from a = a0 − A, b = b0 − B, r = r0 − R. This equation can be expressed directly in terms of the observed values and estimated parameters. Assuming that the x values are accurately known, wx = ∞, Ui = 0, and wy = 1, then

Li = (∂F/∂yi)² = 4(yi + b0)².
2.4.4.10.3 Estimation of a0, b0, r0 Parameters
These can be estimated very easily from the data, subject to certain constraints such as b0 ≫ 1.
It is useful to consider whether there is an effective transfer function for the system. The effect of a given harmonic from the ring on the probe output takes no account of the fact that they may be out of phase. This gives an effective transfer function TF given by
TF = (sin nγ sin α)/(sin γ sin nα).    (2.474)
The theoretical validity of the method depends on the degree to which the periphery of the ring can be regarded as a good reference. This can be worked out, and it can be shown that the dr/dθ error values are at most 80% out. This is not good enough for an absolute roundness instrument but certainly good enough for a screening instrument.

2.4.4.12 Assessment of Ovality and Other Shapes
Ovality is the expected basic shape in workpieces generated from round stock rolled between two rollers. It is one of the commonest faults in rings such as ball races. It is, however, different from other faults in that it is most likely to disappear when the rings are pressed into a round hole or over a round shaft and when functioning. This is also true for piston rings. They are often made deliberately oval so as to be able to take the load (along the major axis) when inserted into the cylinder. On the other hand, it is a bad fault in balls and rollers and leads to a two-point fit of a shaft in a hole and hence to sideways wobble. There is, in principle, a clear distinction to be made between ovality and ellipticity. The term ovality can be applied to any more or less elongated figure and has been defined as the maximum difference in diameter between any cross-section
x = (a + b) cos θ − λb cos[((a + b)/b)θ]
y = (a + b) sin θ − λb sin[((a + b)/b)θ]    (2.475)

from which the radial distance r between O and a fixed point Q on a circle of radius b located λb from O′ is given by

r = [(a + b)² + λ²b² − 2λb(a + b) cos(aθ/b)]^1/2,    (2.476)
FIGURE 2.189 Hypotrochoid measurement.
which reduce, when a = 2b, to

x = 3b cos θ − λb cos 3θ
y = 3b sin θ − λb sin 3θ,    (2.477)
FIGURE 2.190 Coordinate system for sphericity.
which is the case for the Wankel stator. It is in those regions where y = 0 and x ≈ b(3 − λ) (i.e., in the regions where r is a minimum) that care has to be taken to ensure that waviness is excluded. It can be shown that in these regions the form can be expressed by a parabola
y² = 3b[x − b(3 − λ)]    (2.478)
from which deviations due to waviness can be found by simple instrumental means. Similar curves can be generated by circles rolling within circles; in these cases they are hypotrochoids (Figure 2.189). One of particular interest in engineering is the case as above where a = 2b. The locus of the point Q at distance λb from O′ is in fact an ellipse. Problems of measuring these types of curve, and even faithful data on the effect of misgeneration, are not yet readily available.
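For checking such forms numerically, the epitrochoid of Equations 2.475 and 2.477 can be generated directly; the sketch below (illustrative, with invented names) produces the a = 2b (Wankel) case so that the flank region can be compared with the parabolic approximation of Equation 2.478:

```python
import numpy as np

def epitrochoid(b, lam, theta, a=None):
    """Epitrochoid of Equation 2.475; a = 2b gives the Wankel stator
    form of Equation 2.477."""
    a = 2 * b if a is None else a
    k = (a + b) / b
    x = (a + b) * np.cos(theta) - lam * b * np.cos(k * theta)
    y = (a + b) * np.sin(theta) - lam * b * np.sin(k * theta)
    return x, y

# Flank region near theta = 0, where y ≈ 0 and x ≈ b(3 - λ):
b, lam = 10.0, 0.5
theta = np.linspace(-0.2, 0.2, 401)
x, y = epitrochoid(b, lam, theta)
# x[200], y[200] give the flank point; compare the local form with Eq. 2.478
```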
2.4.5 Three-Dimensional Shape Assessment

2.4.5.1 Sphericity
In the same way that straightness was extended into flatness, roundness can be extended into sphericity. Sphericity may be taken as the departure of a nominally spherical body from the truly spherical shape, that is, one which is defined by the relationship

R² = (x − a)² + (y − b)² + (z − c)².    (2.479)

Because of the inherently small size of these departures (in real workpieces such as ball bearings, etc.) relative to the dimension R itself, the same procedure as for roundness can be used. As in waviness, the lack of suitable instrumentation has inhibited much work in this subject. Some attention will be paid to this point here (Figures 2.190 and 2.191).

FIGURE 2.191 Coordinate system for sphericity.

Using the same nomenclature as in the case of roundness,

ρ = R + xa/R + yb/R + zc/R    (2.480)

making the same assumptions as before. Thus Equation 2.480 is in effect a three-dimensional limaçon. Because of the nature of the object to be measured, spherical coordinates (ρ, θ, α) should be used. Thus

x = ρ cos θ cos α
y = ρ sin θ cos α   assuming x/ρ ≈ x/R, y/ρ ≈ y/R, z/ρ ≈ z/R    (2.481)
z = ρ sin α

where θ corresponds to the longitude angle and α to the latitude. From this equation

ρ = R + a cos θ cos α + b sin θ cos α + c sin α    (2.482)

may be written. This is the nature of the signal to be measured.
147
Characterization
2.4.5.2.1â•… Best-Fit Minimization Assuming that, in a real case, the raw data is r(u, a), then it is required to minimize the integral Z, which is given by l=
=
α2
∫ ∫
θ2
α2
θ2
α1
θ1
∫ ∫ α1
θ1
−
+a [r (θ,α ) − ρ(θ, α)]2 dθ dα
+c
[r (θ,α) − ( R + a cos θ cos α
(2.483)
−
+ b sin θ cos α + c sin α )]2
∫∫ [r − (R + a cos θ cos α + b sin θ cos α ∫∫ [r
2
2
− 2rR − 2ra cos θ cos α − 2rb sin θ cos α
+ 2ab cos θ cos α sin θ cos α + 2ac cos θ cos α sin α
∫∫ sin θ cos α sin α + c ∫∫ sin α = 0. 2
a=
4 4π2
b=
4 4π2
c=
2 4π2
R=
1 4π2
+ b 2 sin 2 θ cos 2 α + 2bc sin θ cos α sin α + c 2 sin 2 α)dθ dα..
∂l ∂a
∂l ∂b
∂l = 0. ∂c
∫∫
r + Rθα + a
∫∫
+c −
∫∫
+b +c
∫∫
cos θ cos α + b
∫∫
∑ ( x /N ) R = ∑ ( r / N ). a=4
∫∫
∫∫ cos θ sin α cos α 2
∫∫ cos θ cos α sin α = 0
∫∫
0
0
2π
∫ ∫
2π
0
0
2π
∫ ∫
2π
0
0
2π
2π
0
0
∫ ∫
r (θ, α ) cos θ cos αdθ dα
r (θ, α )sin θ cos αdθ dα
r (θ, α )siin θ dθ dα
r (θ, α )dθ dα
(2.485)
sin θ cos α
cos θ cos α + a
∫ ∫
2π
(The numerical equivalents of these equations are derived in Chapter 3.) These are the best-fit coefficients for a full sphere and could be evaluated from a point-to-point measuring system whose sensitive direction is always pointing to the common origin O (Figure 2.156). The numerical equivalents of these equations are
sin α = 0
r cos θ cos α + R
2π
Separately leaving off dθ, dα, rationalizing the trigonometry and letting θ = θ2╯–╯θ1 and α = α2╯–╯α1, then −
(2.487)
(2.484)
For best-fit minimization ∂l ∂R
2.4.5.2.2â•… Full Sphere In the case where θ1€–€θ2 = 2π and α2€–€α1 = 2π
+ 2 Rc sin α + a 2 cos2 θ cos 2 α
2
∫∫ sin θ cos α sin α = 0
− 2rc sin α + R 2 + 2 Ra cos θ cos α + 2 Rb sin θ cos α
2
From these four equations R, a, b, c can be evaluated for best fit for partial spheres in limitations for either α or θ.
+ c sin α)]2 dθ dα =
∫∫ cos θ sin θ cos α + b ∫∫ sin θ cos α
∫∫ r sin α + R ∫∫ sin α + a ∫∫ cos θ cos α sin α
+b
dropping the argument of r and the limits of integration for simplicity. This becomes I=
∫∫ r sin θ cos α + R ∫∫ sin θ cos α
cos 2 θ cos2 α
b=4
∑ ( y /N )
c=
∑ 2( y /N ) (2.487)
After getting the center and radius the amount of sphericity is obtained by calculating all the radii with respect to the center and finding their maximum difference. 2.4.5.2.3â•… Partial Sphere: Case A Letting θ1€–€θ2 = 2π, α2€–€α1 = dα and mean angle α = α (i.e., one latitude) the normal equations become, using the mean value theorem
148
Handbook of Surface and Nanometrology, Second Edition
−δα or −
∫
∫
2π
0
0
r ( θ, α ) dθ + 2 πR + 2 πc sin = 0
or
− cos α −
∫
2π
∫
∫
0
a r (θ, α ) cos θ dθ + 2 πδα (1 + cos 2α ) = 0 4 (2.488)
0
2π
r (θ, α ) cos θ dθ + aπ cos2 α = 0
0
(2.489)
−
∫
2π
0
1 π cos α
1 b= π cos α
∫
2π
∫
2π
0
0
−δα sin α
∫
0
−
∫
2π
0
r (θ, α ) siinθ dθ
∫
2π
0
r (θ, 0 ) dθ.
c
− δθ
(2.491)
(2.492)
∫
1 π
2π
0
∫
r ( θ, α ) sin α dθ dα +
2π
0
c 2 π δθ = 0 (2.496) 2
r ( θ, α ) sin α dα.
Hence all measurements will help with the evaluation of R and c with equal weight, which is not the case for constant a and variable θ, which have to be weighted; only the great circle value a = 0 enables R to be obtained. Interpretation of Equations 2.488 through 2.496. Consider Equations 2.488 through 2.492 and Figure 2.192. The probe will move only a distance r in response to a shift of α in the x direction. Hence all estimates of α thus derived from the raw r values will be smaller by the factor cos α. In order to get the true estimate of a and b from lesser circles (i.e., those where α = 0) a weighting factor of 1/cos α has to be applied to compensate for this. r
2.4.5.2.4 Partial Sphere: Case B For α = 2π and θ2 – θ1 = δθ, mean angle θ, Equation 2.488 becomes
∫ ∫ r (θ,α ) dθ dα + R ∫ ∫ dθ dα + a ∫ ∫ sin α dθ dα = 0 becomes − δθ r ( θ, α ) dα + Rδθ2 π = 0 ∫
−
or
R=
1 2π
∫
2π
0
r (θ, α ) dα.
(2.495)
∫ ∫ r (θ,α ) dθ dα sin α + 2 2π δθ = 0 c=
r ( θ, α ) dθ + Rδα 2π sin α
1 2π
r ( θ, α ) cos α dα
+ aπ δθ sin θ cos θ + bπ sin 2 θ = 0.
−
This Equation 2.491 shows that it is only possible to get a true value of R when sin α = 0, that is around an equatorial trace. Under these conditions. R=
0
(2.490)
r ( θ, α ) dθ + 2 πR + c2 π sin α = 0.
∫
2π
Equations 2.494 and 2.495 show that a and b cannot be determined using great circles of constant θ values:
r (θ, α ) cos θ dθ
+ cδα 2 π sin ( 2/α ) = 0
(2.494)
Equation 2.490 becomes − δθ sin θ
(the best value being when cos α = 1, i.e., α = 0) or 2π
π bδθ sin 2θ = 0 2
r ( θ, α ) cos α dα + aπ cos θ + π b sin θ = 0.
a=
r ( θ, α ) cos α dα + aπ cos 2 θ δα +
Hence
2π
2π
r ( θ, α ) cos θ dθ + aπ cos α = 0.
0
∫
− δθ cos θ
2π
−δα cos α
or
Equation 2.489 becomes
r (θ, α ) dθ + R δα 2 π + 2πcδα sin α = 0
a
0
– α a
(2.493) FIGURE 2.192 Direction of probe movement.
149
Characterization
For€high latitudes (declinations) the factor will be so great as to make the estimate meaningless in terms of experimental error. To get the best estimates of a and b the average of a number of latitudes should be taken. Similarly Equations 2.492 through 2.496 show that the measurement of great circles by keeping θ constant (i.e., circles going through the poles, Figure 2.193) will always give equal estimates of c and R but none will give estimates of a and b. This is only possible if more than one value of θ is used. Similarly to get a measure of R and c at least two values of α should be used. For a full exposition of the partial sphericity problem refer to Murthy et al. [175]. In a cylindrical coordinate measurement system the probe sensitivity direction is pointing the wrong way for good spherical measurement. All sorts of complications would be involved in compensation for this. To build up a composite sphere using cylindrical coordinates (r, θ, z) the value of r would have to be measured very accurately for each latitude, far better in fact than is possible using normal techniques. A 1 µm resolution would not be sufficient. This problem is shown in Figure 2.194. Note: A general rule for surface metrology instrumentation is that if the measuring system is matched to the component shape
in terms of coordinate system, the number of wide range movements in the instrument which require high sensitivity can be reduced by one. Measurement and checking of sphericity using a zonal technique rather than best-fit least squares is likely to produce errors in the estimation of the center positions because it is difficult to ensure that peaks and valleys are always related. 2.4.5.2.5â•… Partial Sphericity As in most engineering problems the real trouble begins when measuring the most complex shapes. Spheres are never or hardly ever complete. For this reason estimates of sphericity on partial spheres—or even full spheres—are made using three orthogonal planes. This alternative is only valid where the method of manufacture precludes localized spikes. Similarly estimates of surface deviations from an ideal spherical shape broken down in terms of deviations from ideal circles are only valid if the centers are spatially coincident—the relation between the three planes must be established somewhere! With components having very nearly a spherical shape it is usually safe to assume this if the radii of the individual circles are the same [176]. In the case of a hip prosthesis the difficult shape of the figure involves a further reorganization of the data, because it is impossible to measure complete circles in two planes. In this case the partial arc limaçon method proves to be the most suitable. Such a scheme of measurement is shown in Figure 2.195. Similar problems can be tackled in this way for measuring spheres with flats, holes, etc., machined onto or into them. The general approach to these problems is to use free form i.e., non Euclidean geometry. See Section 4.72 in this chapter. The display of these results in such a way as to be meaningful to an inspector is difficult, but at least with the simplified technique using orthogonal planes the three or more traces can all be put onto one polar or rectilinear chart. Visually the polar chart method is perhaps the best, but if, for example,
FIGURE 2.193â•… Longitudinal tracks. Partial circle
Full circle r
Z
θ
FIGURE 2.194â•… Cylindrical coordinates for sphericity.
FIGURE 2.195â•… Prosthetic head-partial sphere.
150
Handbook of Surface and Nanometrology, Second Edition
wear is being measured on prosthetic heads it is better to work directly from Cartesian records in order to measure wear (by the areas under the charts) without ambiguity. Assessment of the out-of-sphericity value from the leastsquares center is simply a matter of evaluating the maximum and minimum values of the radial deviations of the measured data points from the calculated center (a, b, c) and radius R.
In what follows “cylindricity” will be taken as departures from a true cylinder. Many misconceptions surround cylindricity. Often a few acceptable measurements of roundness taken along a shaft are considered a guarantee of cylindricity. This is not true. Cylindricity is a combination of straightness and out-ofroundness. Worst of all, any method of determining cylindricity must be as independent of the measuring system as possible. Thus, tilt and eccentricity have to be catered for in the measurement frame. To go back to the assessment of shaft roundness, the argument is that if the machine tool is capable of generating good roundness profiles at different positions along the shaft then it will also produce a straight generator at the same time. This method relies heavily on the manufacturing process and the machine tool and is not necessarily true (Figure 2.196). To be sure of cylindricity capability these roundness graphs should be linked linearly together by some means independent of the reliance on manufacturing. Alternatively, linear measurements could be tied together by roundness graphs, as shown in Figure 2.197a and b. There is another possibility which involves the combination of a and b in Figure 2.163; to form a “cage” pattern. This has the best coverage but takes longer. Yet another suggestion is the use of helical tracks along the cylinder (Figure 2.197c). In any event some way of spatially correlating the individual measurements has to be made. Whichever one is used depends largely on the instrumentation and the ability to unravel the data errors due to lack of squareness, eccentricity, etc. Fortunately quite a number of instruments are available which work on what is in effect a cylindrical coordinate system, namely r, z, θ axes, so that usually work can be carried out on one instrument. In those cases where component errors are large compared with the instrument accuracy specifications, for example in the squareness of the linear traverse relative to the rotational plane, the instrument itself will provide the necessary spatial correlation. Unfortunately, from the surface metrology point of view there is still a serious problem in displaying the results obtained, let alone putting a number to them.
2.4.5.2.6â•… Other Methods of Sphericity Assessment The minimum zone method of measuring sphericity is best tackled using exchange algorithms [177]. Murthy and Abdin [178] have used an alternative approach, again iterative, using a Monte Carlo method which, although workable, is not definitive. The measurement of sphericity highlights some of the problems that are often encountered in surface metrology, that is the difficulty of measuring a workpiece using an instrument which, even if it is not actually unsuitable, is not matched to the component shape. If there is a substantial difference between the coordinate systems of the instrument and that of the component, artifacts can result which can mislead and even distort the signal. Sometimes the workpiece cannot be measured at all unless the instrument is modified. An example is that of measuring a spherical object with a cylindrical coordinate instrument. If the coordinate systems are completely matched then only one direction (that carrying the probe) needs to be very accurate and sensitive. All the other axes need to have adjustments sufficient only to get the workpiece within the working range of the probe. This is one reason why the CMM has many basic problems: it does not match many shapes because of its versatility, and hence all axes have to be reasonably accurate. The problem of the mismatching of the instrument with the workpiece is often true of cylindricity measurement, as will be seen. Special care has to be taken with cylinder measurement because most engineering components have a hole somewhere which is often a critical part of the component.
2.4.6â•…Cylindricity and Conicity 2.4.6.1â•…Standards ISO/TS 1280-1, 2003, Vocabulary and Parameters of Cylindrical Form, ISO/ TS 1280-2, 2003, Specification Operators* 2.4.6.2â•… General The cases of flatness and sphericity are naturally two-dimensional extensions from straightness and roundness, whereas cylindricity and conicity are not. They are mixtures of the circular and linear generators. The number of engineering problems which involve two rotations are small but the combination of one angular variable with one translation is very common hence the importance attached to cylindricity and, to a lesser extent, conicity. There is little defined in non-Euclidean terms. * See also ISO 14660-2, 1999, Extracted median line of a cylinder and cone, extracted median surface, local size of an extracted feature.
FIGURE 2.196â•… Cross-section of shaft with roundness graphs.
151
Characterization
(a)
(b)
(a)
Cylinder axis
(b)
Cylinder axis
PV
(c)
(d)
(c)
Cylinder axis
(d)
Cylinder axis
PV
FIGURE 2.197╅ Methods of measuring cylinders (a) radial section method, (b) generatrix method, (c) helical line method, and (d)€point method.
The biggest problem is to maintain the overall impression of the workpiece and at the same time retain as much of the finer detail as possible. The best that can be achieved is inevitably a compromise. The problem is often that shape distortions produced by tilt and eccentricity mask the expected shape of the workpiece. The instrumental set-up errors are much more important in cylindricity measurement than in sphericity measurement. For this and other reasons cylindricity is very difficult to characterize in the presence of these unrelated signals in the data. Another problem interrelated with this is what measured feature of cylindricity is most significant for the particular application. In some cases such as in interference fits, it may be that the examination and specification of the generator properties are most important, whereas in others it may be the axis, for example in high-speed gyroshafts. Depending on what is judged to be most suitable, the most informative method of display should be used. Because of the fundamental character of cylindrical and allied shapes in all machines these points will be investigated in some detail. 2.4.6.3â•… Methods of Specifying Cylindricity As was the case in roundness, straightness, etc., so it is in cylindricity. There is a conflict amongst metrologists as to which method of assessment is best—zonal or best fit. There is a good case for defining cylindricity as the smallest separation c which can be achieved by fitting two coaxial
FIGURE 2.198╅ Methods of defining a cylinder: (a) least-squares axis (LSCY), (b) minimum circumscribed cylinder (MCCY), (c)€maximum inscribed cylinder (MICY), and (d) minimum zone cylinder (MZCY).
sleeves to the deviations measured (Figure 2.198d). This corresponds to the minimum zone method in roundness. But other people argue that because only the outer sleeve is unique the minimum circumscribing sleeve should be used as the basis for measurement and departures should be measured inwardly from it. Yet again there is strong argument for the use of a best-fit least-squares cylinder. Here the cylindricity would be defined as P1╯+╯V1 (remember that figures of roundness show highly magnified versions of the outer skin of the real workpiece which will be considered later). The advantage of the bestfit method is not only its uniqueness but also its great use in many other branches of engineering. Using the Â�minimum zone or other zonal methods can give a distortion of the axis angle of the cylinder without giving a substantially false value for the cylindricity measurement. This is the case where the odd big spike (or valley) dominates the positioning of the outer (inner) sleeve. Least-squares methods would take note of this but would be unlikely to give a false axis angle. For this reason the conventional way of examining the least-squares cylinder will be dealt with shortly. It should be remembered here that it is a vastly more difficult problem than that of measuring either roundness or straightness. Interactions occur between the effect of tilt of the cylinder and the shape distortion introduced necessarily by the nature of cylinder-measuring machines—the limaçon approach.
152
Handbook of Surface and Nanometrology, Second Edition
How these interact will be considered shortly. Before this some other different methods will be considered to illustrate the difficulty of ever giving “one-number” cylindricity on a drawing with ease. Another difficulty arises when the concept of the “referred cylinder” is used. This is the assessment of the out-of-cylindricity of the measured data from a referred form—the cylinder that is the perfect size to fit the data. The problem is that the referred cylinder has to be estimated first from the same raw data!
CYLrr
2.4.6.2.1â•… Cylindrical Parameters Some cylindrical parameters: CYLrr-Cylinder radii peak to valley. CYLtt-Cylinder taper(SCY). CYLp-Peak to reference cylindricity deviation (LSCY). CYLt-Peak to valley cylindricity deviation (LSCY), (MZCY), (MICY), (MCCY). CYLv-Reference to valley cylindricity deviation(LSCY). CYLq-ROOT mean square cylindricity deviation (LSCY).
Cylinder radii peak-to-valley CYLat
where LSCY is the least squares cylinder reference, MZCY is minimum zone reference, MICY is maximum inscribed cylinder reference and MCCY minimum circumscribing cylinder reference Two examples are in Figure 2.200 2.4.6.2.2â•… Assessment of Cylindrical Form Ideally any method of assessment should isolate errors of the instrument and the setting-up procedure from those of the part itself. One attempt at this has been carried out [166] (Figure 2.199 and as a practical example Figure 2.200). Taking the coordinate system of the instrument as the reference axes, θ
Cylinder taper angle
FIGURE 2.200â•… Parameters CYLrr and CYLat (Adapted from ISO/TS 12181-1. With permission.)
for rotation and z for vertical translation they expressed the profile of a cylinder by
k+1
r ( θ, z ) =
k
l+1 l
FIGURE 2.199â•… Cylindrical data set. (From Liu, J., Wang, W., Golnaraghi, F., and Liu, K., Meas. Sci. Technol., 19(015105), 9, 2008.)
n
∑A
0j
j=0
Pj ( z ) +
m
n
i =1
j=0
∑ ∑ A P (z) cos iθ ij
j
+ Bi j Pj ( z ) sin iθ (2.497)
where P and A, B are orthogonal polynomials and Fourier coefficients, respectively. The choice of function in any of the directions is governed by the shape being considered. In the circular direction the Fourier coefficients seem to be the most appropriate because of their functional significance, especially in bearings. P is in the z direction and A, B are€the€Fourier coefficients in any circular plane, j represents the order of the vertical polynomial and i the harmonic of
153
Characterization
the Fourier series, r denotes the observation of the deviation of the probe from its null at the lth direction, and on the kth section. Thus
∑ ∑ r P (z )cos iθ ∑ ∑ P (z )cos iθ ∑ ∑ r P (z )sin iθ . = ∑ ∑ P (z )cos iθ
Aij =
t
u
k =1
l =1
2 j
t
Bij
k
j
2
2
u
k =1
l =1 2 j
k
j
(2.498)
1
k
2
k
∆R
1
k
k
X
2
The advantage of such a representation is that some geometric meaning can be allocated to individual combinations of the functions. The average radius of the whole cylinder, for instance, is taken as (0, 0), and the dc term in the vertical and horizontal directions. (0, j) terms represent variations in radius with z and (i, 0) represent geometrical components of the profile in the z direction, that is the various samples of the generator. The effect of each component of form error can be evaluated by the sum of squares
Sij =
u 2 ( Aij + Bij2 ) 2
z
t
∑ P ( z ) . 2 j
k
Y
FIGURE 2.201â•… Method of specifying cylindrical error. 2π
l
dia
Axial direction
0
Ra
on
cti
e dir
FIGURE 2.202â•… Development of cylinder surface.
(2.499)
k =1
Taper due to the workpiece and lack of squareness of the axes is given by the coefficient (l, 0) as will be seen from the polynomial series. More complex forms are determined from the way in which the least squares polynomial coefficients change, for example the Legendre polynomial FIGURE 2.203â•… Conality development.
P0 ( z ) = 1
P1 ( z ) = z P2 ( z ) =
(2.500)
3z 2 1 − etc. 2 2
In this way, taking higher-order polynomials often enables complex shapes to be specified. Extending this concept has to be allowed because often the behavior of the axis of the part has to be considered, not simply the profile (or generator). In these instances the function representing the cylinder may be better described using three arguments representing the three coefficients. Thus F(P, A, B) describes the shape, where the center coordinates for a given z are A1, B1. Plotting the curve representing F(z, 1, 1) describes the behavior of the x and y axes with height. Also, plotting F(z, 0, 0), the way in which the radius changes with height, can be readily obtained (Figure 2.201). The questions of what order of polynomial is likely to be met with in practice to describe the axis change with z or how R changes with z have not been determined mainly because
of the lack of availability of suitable instrumentation. This situation has now been remedied. However, it seems that these changes as a function of z could be adequately covered by simple quadratic curves. Examining these curves gives some of the basic information about the three-dimensional object. However, showing these curves demonstrates the difficulty of displaying such results, as in Figure 2.202. One obvious way is to develop the shape, but this has the disadvantage that it does not retain a visual relationship to the shape of the workpiece. As in the measurement of flatness, Lagrangian multipliers can be used to reduce error from the readings. Here, however, there are more constraining equations than in flatness. Other more general shapes such as conicity can be equally well dealt with (Figure 2.203). Exactly the same analysis can be used except for scaling factors in the angular direction. Note that, in all these analyses, the validity of the Fourier analysis relies on the fact that the signal for the circular part is a limaçon. The use of the Fourier coefficients to identify and separate out set-up errors from the genuine form errors depends on this approximation.
154
Handbook of Surface and Nanometrology, Second Edition
An example of the problems involved is shown in Figure 2.204. Assessing the departure from a true cylinder or any similar body by using a single parameter in the separation of radii, as in Figure 2.169, as the basis of measurement is prone to ambiguity. Figure 2.205 shows the classical shapes of taper, or conicity, bowing, concave and convex “barreling.” Each of the figures illustrated in Figure 2.170 has the same nominal departure. “One-number cylindricity” is obviously not sufficient to control and specify a cylindrical shape. The figures obviously have different forms and consequently different functional properties. Only by summing some type
of specification, which includes, for example, the change of apparent radius with height and/or the way in which the leastsquares axis changes with height, can any effective discrimination of form be achieved. 2.4.6.2.3 Cylindrical Form Error The fact that the axial deviations and the polar deviations are usually regarded as independent, at least to a first order suggests that the best way to measure a cylinder is by means of a cylindrical-based coordinate-measuring machine, or its equivalent, in which a linear reference and a rotational reference are provided at the same time. Furthermore, the argument follows that these two features should not be mixed in the measurement a set of individual roundness data linked with straightness should be obtained. This idea is generally used as shown in Figure 2.199 However, the spiral method does mix them up. The advantage is purely instrumental, both drive motors for the reference movement are in continuous use, giving high accuracy if the bearings are of the hydrodymamic type and also some advantage in speed. Figures 2.206 through 2.210 show other definitions, such as run-out coaxiality, etc., and the effect of sampling. Cylindricity or, more precisely, deviations from it as seen in Figure 2.211 is much more complicated than roundness, Asperity
FIGURE 2.204 One-number cylindricity—minimum separation of two cylinders.
(a)
Shifted datum
(b)
Original datum
FIGURE 2.206 Effect of asperities. The MICY, MZCY, MCCY axes are very sensitive to asperities and where possible should not be used for the datum axis.
(c)
FIGURE 2.205 Three types of error in cylindrical form, typical examples (a) axis distortion, sometimes called distortion of median line, (b) generatrix deviations, and (c) cross-section form deviations.
Instability of the MC cylinder
FIGURE 2.207 Some factors which cause errors I cylindricity measurement-mismatch of shape.
155
Characterization
Datum axis
Total run-out
(a)
(b)
(c)
(d)
Cylinder axis
FIGURE 2.208â•… Total run-out, some definitions of cylindrical parameters. Total run-out is similar to “total indicated reading” (TIR) or “full indicated movement” (FIM) as applied to a roundness or two-dimensional figure, but in this case it is applied to the complete cylinder and is given as a radial departure of two concentric cylinders, centered on a datum axis, which totally encloses the cylinder under test.
Datum axis
Datum axis
FIGURE 2.211â•… Errors in cylindrical form basic types of deviations (a) axial form error, (b) overall shape, (c) radial form error, and (d) combination of errors.
Component axis
Component axis
Coaxiality vaule
Coaxiality vaule
FIGURE 2.209â•… Some definitions of cylindrical parameters. Coaxiality is the ability to measure cylindricity, and to set an axis allows the measurement of coaxiality and relates the behavior of one axis relative to a datum axis.
With only 50 data points some of the surface detail is lost
FIGURE 2.210â•… Some factors which cause errors in cylindricity measurement: the effect of insufficient data points per plane.
straightness or sphericity. This is not only because it is three dimensional but also because it is defined relative to a mixture of coordinate systems, that is polar and Cartesian rather than either one or the other. Cylindricity is also much more important functionally than sphericity because of its central role in bearings and shafts in machines. The
figures highlight some of the problems. One of the most important is the realization that considerable thought has to be put into defining the axes of cylinders. Whereas the actual “one-number” peak-to-valley deviation estimating cylindricity is not too sensitive to the choice of algorithm used to estimate the referred cylinder, the position of the best axis very definitely is. The extra large peak or valley can completely swing the direction of the axis, as seen in Figure 2.206. Also, the amount of data required to cover completely the surface of a cylinder is likely to be high. It is sensible to use hundreds of data points per revolution in order to catch the spikes. This sort of cover is not required for the harmonic content of the basic shape, but it is required to find the flaw (see Figure 2.210). Mismatching of shapes, such as in Figure 2.207, is also a problem. Trying to fit a cone into a cylinder requires more constraint than if the data set of the test piece is nominally cylindrical. In the case shown a preferred direction would have to be indicated, for example. There is also the problem of incompatible standards as shown in Figure 2.209 for coaxiality. Whichever is used has to be agreed before queries arise. If in doubt the ISO standard should always be used. Again, if in doubt the leastsquares best-fit algorithm should be used. This may not give the smallest value of cylindricity, neither may it be the most functional, but it is usually the most stable and consequently less susceptible to sample cover and numerical analysis problems, as seen in Figure 2.176. Many other factors are important, such as the definition of run-out and inclined cylinders, as will be seen. Many have not yet been formally defined but are unfortunately being asked for.
156
Handbook of Surface and Nanometrology, Second Edition
The proposed method for defining cylindricity described above relies on the usual reference limaçon, at least at first sight. It seems that this method was first adopted more to demonstrate the use of least squares by which the parameters can be found than for metrological reasons, but it is a method of describing the surface of which more use could probably be made. The most comprehensive discussion of the problem of cylindricity is due to Chetwynd [177]. Harmonic analysis in roundness measurement has been variously proposed, and the extension to a polynomial axis seems natural. It is, however, still restricted by the need to interpret part of the “profile error” as caused by residual misalignment of the workpiece to the instrument coordinate system. To do this the first harmonic of each cross-section (e.g., A and B1) and the linear polynomial along the axis are the only terms caused by misalignment. Thus the limaçon approximation is being applied at each cross-section to account for eccentricity there and the least-squares straight line through the centers of these limaçons is taken as the tilt error between the workpiece and the instrument. Other workers have implicitly assumed the use of a “reference cylinder,” which is in fact a limaçon on each crosssection perpendicular to the z axis, with the centers of these limaçons lying on a straight line. This is true even of methods which do not actually measure such cross-sections, such as schemes using a helical trace around the workpiece. Virtually all reported work is concerned with least-squares methods. One partial exception is an attempt to discover the minimum zone cylinders from the least-squares solution. Many search methods have been proposed but are considered to be inefficient and an alternative is proposed which uses a weighted least-squares approach, in which the weights relate to the residuals of an unweighted least-squares solution so that the major peaks and valleys are emphasized. This method is an estimation of the minimum zone cylinders rather than a solution. It still relies upon the validity of the limaçon approximation at every cross-section. The measurement of cylindricity will be examined in more detail from the viewpoint that it will be required to produce extensions of the roundness standards and the methods are required for the solution of least-squares, minimum zone, minimum circumscribing and maximum inscribing cylinders in instrument coordinates.
radius placed perpendicular to the z axis and having their centers lying on a straight line. In practice these circles are almost inevitably approximated by limaçons (Figure 2.212). The istinction of these different forms is important, so the distinct terminology used by Chetwynd will be adopted here. “Cylinder” will be reserved strictly for describing a figure in which all cross-sections perpendicular to its axis are identical with respect to that axis. Unless specifically stated otherwise, a right circular cylinder is implied. Other cylinder-like figures which do, however, have a different geometry will be called “cylindroids,” again following the terminology of Chetwynd. Distinction is also made between “tilt,” in which all points of a figure are rotated by the same amount relative to the coordinate system, and “skew,” in which the axis of the figure is so rotated but the cross-sections remain parallel to their base plane. In the case of a cylindroid, shape is described in terms of the cross-section parallel to the unskewed axis. The reference figure commonly used in cylinder measurement is then a skew limaçon cylindroid. It should be noted that, since the axis is tilted (skewed), the eccentricity at different heights will vary and so the skew limaçon cylindroid does not have a constant cross-sectional shape (Figure 2.212). An investigation of reference figures suitable for measuring cylindricity must start from a statement of the form of a true cylinder oriented arbitrarily in the space described by a set of instrument coordinates. The circular cylindrical surface is defined by the property that all its points have the same perpendicular distance (radius) from a straight line (the€ axis). The following analysis follows the work of
2.4.6.4â•…Reference Figures for Cylinder Measurement None of the literature describing work on the measurement of cylinders makes use of cylindrical reference figures. Nearly always the same implicit assumption is made, namely that the cross-sectional shape in a given plane is unaltered as the alignment of the workpiece is altered. The reason for this constancy of approach probably arises from the nature of the instruments used in the measurement. In effect, they produce profiles representing sections of a cylinder on planes perpendicular to the z axis of the instrument coordinate frame. The cylinder is represented by a series of circles of the same
FIGURE 2.212â•… Tilted cylinders. If a cylinder is tilted when measured, the result will appear as an ellipse. Therefore it is essential that the leveling of the cylinder is performed before measurement. However, it may be that if a second cylinder is being measured relative to the first (e.g., for coaxiality), re-leveling is not practical (since the priority datum will be lost). In this case, it is possible for the computer to correct for tilt by calculating the tilt and orientation of the axis and noting the radius of the second cylinder, and to compensate by removing the cylinder tilt ovality for each radial plane prior to performing the cylinder tilt. Removal of the secondharmonic term or applying a 2 UPR filter is not adequate, as any true ovality in the component will also be removed.
(b) Skewed circular cylinder
(a) Tilted circular cylinder
157
Characterization
chetwynd conveniently described using direction cosines and a vector notation for X and l in Equations 2.501 through 2506. The axis is fully defined by a set of direction cosines l1 and a point X0 through which it passes. The perpendicular distance of a general point X from this line is given by p = X − X 0 sin α,
(2.501)
where α is the angle between the axis and the line joining X to X0. The direction cosines l of this joining line will be
l=
X − X0 [( X − X 0 ) ( X − X 0 )]1/ 2 T
cos α = l1T l.
(
(2.503)
)
2
P 2 = ( X − X 0 ) ( X − X 0 ) − ( X − X 0 ) l1 . (2.504) T
R0 (1 + A + B ) 2 1
2 1
1/ 2
[1 + ( B1 cos θ − A1 sin θ) ] 2
1/ 2
[( X − A0 ) (1 + B12 ) + (Y − B0 ) (1 + A12 ) 2
(1 + A + B ) 2 1
1/ 2
2
+ Z 2 ( A12 + B12 )
−2 ( X − A0 )(Y − B0 ) A1 B1 − 2 ( X − A0 ) A1 Z − 2 (Y − B0 ) B1 Z ]1/ 2 .
R02 = ( X − X 0 ) ( l3 − l1l1T )( X − X 0 ) , T
(2.505)
where l3 is the three-square identity matrix. Within the context of normal cylindricity measurement, a less generally applicable description of the cylinder can be used to give a better “feel” to the parameters describing it. Also, experience of two-dimensional roundness measurement shows the type of operations likely to be needed on the reference (e.g., linearizations) and the forms of parameterization which are convenient to handle. It may be assumed that the axis of a cylinder being measured will not be far misaligned from the instrument Z axis (i.e., the axis of the instrument spindle) and so its description in terms of deviation from that axis has advantages. In practical instruments
(2.507)
The conversion of equation from Cartesian to cylindrical polar coordinates gives the equation of a tilted cylinder as
[( A0 + A1 Z ) sin θ − ( B0 + B1 Z )cos θ] 1 − R02 [1 + ( B1 cos θ − A1 sin θ)2 ]
To define the cylinder, all points having p = R0 are required, so a complete description is
1 2 1
( A + A1 Z + A0 B12 − A1 B0 B1 ) cos θ ( B + B12 + B0 A12 − A0 A1 B1 ) sin θ R (θ, Z ) = 0 + 0 2 1 ( B1 cos θ − A1 sin θ ) +
− A1 − B1 A12 + B12 (2.506)
R0 =
T
− A1 B1 1 + A12 − B1
and, on multiplying out Equation 2.506, gives
Substituting Equations 2.503 and 2.502 into Equation 2.501 and working, for convenience, with p2, since p is a scalar length, gives.
1 + B12 1 − A1 B1 (l3 − l1l 1T ) = (1 + A12 + B12 ) − A1
(2.502)
and the angle α is then found:
this distance can be taken to be small. In a direct parallel with the description of eccentricity, Cartesian components of these deviations are used. The intersection of the axis with the A = 0 plane will be at (A0, B0) and the slopes from the Z axis of the projections of the cylinder axis into the XZ and YZ planes will be A1 and B1. Any point on the axis is then defined by the coordinates (A0 + A1Z , B0 + B1Z , Z). The slopes A1 and B1 relate simply to the direction cosines so that
1/2
.
(2.508)
In this form both the similarity to and the differences from the simple eccentric circle can be seen. The cross-section in a plane of constant Z is an ellipse with minor semi-diameter R0 and major semi-diameter R0(1 + A21 + B21 )1/2, its major axis having the same direction as the cylinder axis projected onto the XY plane. The direction of the ellipse does not correspond to the direction of eccentricity in the plane since this latter value includes the contribution of A0 and B 0. To allow analytical solutions to reference fitting and to allow the practical need to work with radius-suppressed data, a reference figure linear in its parameters is desired. This may be found either by the direct application of Taylor expansions (it is easier to work with Equation 2.507 and then convert the result to polar coordinates) or by the removal of relatively small terms from Equation 2.508 in a manner akin to the truncation of the binomial series in deriving the limaçon
158
Handbook of Surface and Nanometrology, Second Edition
from the circle. The linearization of the cylinder about the point of perfect alignment (A0 = B0 = A1 = B1 = 0) is shown to be the skew limaçon cylindroid
where α is the angle of the axis to the Z axis and φa and φE are the directions of tilt and total eccentricity in the X Y plane. The eccentricity terms E and φE depend upon Z whereas the terms due to pure tilt do not. The acceptability of the model depends upon the maximum value of eccentricity ratio which occurs at any plane (which will be at one end of the axis length over which measurements are taken) and also upon the magnitude of the tilt compared with the absolute radius. As written above, the first term in the error can be identified with the representation of the tilted cylinder in terms of a skew circular cylindroid, while the second term relates to the approximation of the circular cross-sections of that cylindroid by limaçons. The above discussion is naturally also of concern to the measurement of roundness profiles on cylindrical objects. It is quite common for tilt to be the major cause of eccentricity in a reading, particularly when using fixtures that cannot support the workpiece in the plane of measurement; under such conditions the phases φa and φE will be broadly similar so that the possible sources of second-harmonic errors reinforce each other. On the other hand, the error in the radial term could be rather smaller than would be expected simply from the limaçon approximation.
consequence to measurement practice is that exactly the same assessment techniques may be used as have been used here for roundness assessment. The cylindroid’s behavior under radius suppression is exactly the same as that of the limaçon since the suppression operates in directions perpendicular to the Z axis. The magnification usually associated with the translation to chart coordinates has one extra effect on the cylindroid since, generally, it would be expected that different values of magnification would be applied in the radial and axial directions. The slope of the cylindroid axis from the measurement axis will be multiplied by the ratio of the magnifications in these directions. The shape difference between a limaçon cylindroid and a cylinder is subject to more sources of variation than that between a limaçon and a circle, but again similar methods can be used to control them. The amplitude of the second harmonic of the horizontal section through the cylinder will be, under practical measurement conditions, the effective error in that particular cross-section of the cylindroid. A worst condition for its size is that the harmonics generated by tilt and eccentricity are in phase when the combined amplitude will be R0/4(tan2 α + γ2 Z), γ(Z) being the eccentricity ratio at the cross-section. Thus a quite conservative check method is to use (tan2α + γ2max)1/2 as a control parameter in exactly the manner that e = 0.001 is used for roundness measurement. It should be stressed that the values of a likely to be encountered within current practices are very small. The total tilt adjustment on some commercially available instruments is only a few minutes of arc, so values of tan α = 0.001 would not be regarded as particularly small. In the majority of situations the limit on tilt will come from its effect on the allowable eccentricity: if the axial length of cylinder over which the measurement is performed is L 0, there must be at least one plane where the eccentricity is at least L α/2 tan α so γmax will exceed tan α whenever the length of cylinder exceeds its diameter (as it may, also, if this condition is not satisfied). The ellipticity introduced by tilting a cylinder is difficult to account for in reference figure modeling since, apart from the problems of working with a non-linear parameterization, there are other causes of elliptical cross-sections with which interactions can take place. Using, for example, bestfit “ellipses,” probably modeled by just the second harmonic of the Fourier series, on cross-sections will not usually yield information directly about tilt. This relates to the observation that, while every tilted cylinder can be described alternatively, and equivalently, as a skew elliptical cylindroid, the vast majority of elliptical cylindroids do not describe tilted circular cylinders. Given a good estimate of the cylinder axis and knowledge of the true radius, the amplitude and phase of the elliptical component can be calculated and could be used in a second stage of determining the reference.
2.4.6.4.1 Practical Considerations of Cylindroid References The development of the skew limaçon cylindroid from the cylinder is a parameter linearization. Thus the immediate
2.4.6.4.2 Limaçon Cylindrical References 2.4.6.4.2.1 Least-Squares Cylindroids Conventional Mea surement The skew limaçon cylindroid is linear in its parameters and so the least-squares solution for residuals δi
R(θ, Z ) = ( A0 + A1 Z )cos θ + ( B0 + B1 Z )sin θ + R0 . (2.509) A comparison of Equations 2.508 and 2.509 shows how much information is totally disregarded by this linearization. In particular, there is no remaining term concerned with the ellipticity of the cross-section. For small parameter values, the differences between Equations 2.508 and 2.509 will be dominated by the second-order term of the power series expansion, namely R0 ( A1 cos θ + B1 sin θ)2 2
−
1 [( A0 + A1 Z ) sin θ − ( B0 + B1 Z )cos θ]2 . (2.510) 2 R0
The nature of these error terms is emphasized if they are reexpressed as
tan 2 αR0 E 2 (Z ) [1 + cos 2(θ − ϕ a )] – [1 – cos 2(θ – ϕ E ( Z )] 4 4 R0 (2.511)
159
Characterization
δ i = Ri − [( A0 + A1 Zi ) cos θ + ( B0 + B1 Z i ) sin θi + RL ] (2.512)
2.512 can be replaced by a double summation over the points in each plane and the number of planes, for example
can be stated directly. In matrix form, the parameter estimates are given by the solution of
∑ cos θ ∑ sin θ cos θ ∑ Z cos θ ∑ Z sin θ cos θ ∑ cos θ ∑ R coss θ ∑ R sin θ = ∑ RZ cos θ ∑ RZ sin θ ∑ R
2
2
∑ sin θ cos θ ∑ sin θ ∑ Z sin θ cos θ ∑ Z sin θ ∑ sin θ 2
2
2
2
2
2
∑ ∑ cos θ
2
0
2
∑ ∑ sin θ 0
2
2
∑ Z ∑ cos θ 0
∑ Z ∑ cos θ 0 0
full cylindrical surfaces will be considered. For these it is probable that a sampling scheme having a high degree of uniformity would be used for instrumental as well as arithmetic convenience. Since, also, on a roundness-measuring instrument it is normally advisable for best accuracy to keep the spindle rotating constantly throughout the measurement, two patterns of measurement are suggested: a series of cross-sections at predetermined heights Zi or a helical traverse. If a series of cross-sections are used and each sampled identically, the summations over all the data in Equation
∑ ∑ cos θ
2
jk
(2.513)
j =1
∑ cos θ A ∑ sin θ B ∑ Z cos θ × A ∑ Z sin θ B 0
0
1
1
M
R 1
where there are m sections each of n points, mn = N. Now, if the sum over j satisfies the fourfold symmetry identified earlier for the simplification of the least-squares limaçon solution, at each plane the summations over cos θ sin θ and sin θ cos θ will be zero and so also will the sums of these terms over all the planes. The matrix of coefficients then becomes quite sparse:
2
∑ Z ∑ sin θ 0
2
2
n
Zk
k =1
∑ Z sin θ cos θ ∑ Z sin θ ∑ Z sin θ cos θ ∑ Z sin θ ∑ Z sin θ
2
2
∑ Z ∑ cos θ
∑
m
Z i cos θi =
i =1
∑ Z cos θ ∑ Z sin θ cos θ ∑ Z cos θ ∑ Z sin θ cos θ ∑ Z cos θ
where, to save space, indices have been omitted: R, θ, and Z all have subscript i and all summations are over i = 1–N. The added complexity of the three-dimensional problem means that there is even higher motivation than with the simple limaçon for choosing measurement schemes that allow simplification of the coefficient matrix. This is unlikely to be possible on incomplete surfaces and so only 0 0 0
N
0
∑ Z∑ 0
0 sin 2 θ 0 0 2 sin θ 0 mn
(2.514)
∑Z ∑ 2
0
Noting that those terms involving cos2 θ correspond to A0 and A1 and, similarly, sin2 θ to B 0 and B1, further interpretation of this matrix is possible. The radius of the leastsquares limaçon cylindroid is the mean value of all the radial data points and its axis is the least-squares straight line through the centers of the least-squares limaçons on the cross-sectional planes. The measurement scheme has, apart from computational simplicity, two advantageous features: the information on both axial straightness and cylindricity measurements is produced simultaneously and, depending upon exactly what is to
160
Handbook of Surface and Nanometrology, Second Edition
be measured, there is considerable scope for data reduction during the course of the measurement. There are other ways of selecting measuring schemes which lead to simplifications similar to, but not as complete as, the above when using measurement in cross-section. No details of them will be given here.
A point concerning coordinates should be made here. Given only the provision that the Z-axis scaling is unchanged, the cylindroid parameters can be used in chart or instrument coordinates by applying magnification and suppressed radius in the normal way. One property of the limaçon fit which does not apply to the cylindroid is the observation that the estimate for the center is exact. Reference to Equation 2.512 reveals that there are additional terms that contribute slightly to the odd harmonics in the case of the cylindroid. Taking the second-order term of the binomial expansion of the first part of the equation suggests that the fundamental is changed only by about 1 + tan2 (α/4): 1 so that the estimate of the axis from the cylindroid should still be good in practice. This is, however, a further warning that there is a greater degree of approximation between cylinder and cylindroid than between circle and limaçon. Although still a good approximation, the cylindroid can stand rather less abuse than the simpler situations. This section has been concerned with the definition and expression of the cylinder form as seen realistically from a typical instrument measurement point of view. Least-squares methods have been examined and the various linearizations necessary have been described. The use of zonal methods,
2.4.6.4.2.2 Least-squares cylindroids helical measurement The helical traverse method is attractive from an instrumentation point of view. However, computationally, it loses the advantage of having Z and θ independent and so evaluation must be over the whole data set in one operation. It would be expected that samples would be taken at equal increments of θ and, since Z depends linearly on θ this allows various schemes for simplifying Equation 2.514 quite considerably. Only one scheme will be discussed here. If it can be arranged that the total traverse encompasses an exact even number of revolutions of the workpiece and that there is a multiple of four samples in every revolution, then defining the origin such that Z = 0 at the mid-point of the traverse will cause all summations of odd functions of Z and θ to be zero, as will all those in simply sin θ cos θ or sin θ cos θ. The coefficient matrix in Equation 2.512 then becomes
∑ cos θ
0 0 0
2
0
0
∑ sin θ ∑ Z sin θ cos θ ∑ Z sin θ cos θ ∑ Z cos θ 2
2
∑ Z sin θ cos θ
0
0
0
0
The original set of five simultaneous equations is therefore reduced to a set of two and a set of three, with considerable computational saving. One failing of the helical traverse relative to the measurement of cross-sections is that no information relating directly to axial straightness is produced. Overall, it would seem that there need to be fairly strong instrumental reasons for a helical traverse to be used, particularly as there would appear to be more types of surface discontinuity that can be excluded from the measurement by the judicious choice of cross-sectional heights than from the choice of helix pitch.
2
∑ Z sin θ cos θ 0 0
∑ Z sin θ ∑ Z sinθ 2
2
0 0 Z sin θ N 0
(2.515)
∑
that is the minimum zone cylinder, maximum inscribed cylinder, and minimum circumscribed cylinder, to put a “number” to the deviations from a perfect cylinder will be considered in Chapter 3. 2.4.6.5 Conicity Conicity is slightly more complicated than cylindricity in that a taper term is included. There are many ways of specifying a cone, including a point on the axis, the direction of the axis and the apex angle. Together they make an independent set of six constraints, as opposed to five in cylindricity. Thus if the limaçon (or linearization) rules apply then the equations can be written taking n to be the taper. Thus a 6 × 6 matrix results,
− − − − − − − − − − − − − − − − 2 2 EZi cos θi EZi sin θc EZi cos θi EZi sin θ i − − − −
EZi cos θi − A0 EZi sin θi − B0 EZi2 cos θ − A1 × = EZi2 sin θ − B1 EZi2 EZi n ERi Z i EZi − R1
(2.516)
161
Characterization
where the dashes are in the respective places as in Equation 2.512. Equation 2.516 represents the equation of the best-fit cone. Despite being able to devise plausible formulae for measurement philosophy, there are still plenty of specimens which, for practical reasons, simply cannot be measured properly even today. Examples of such workpieces are long, slender, thin shafts, rolls of paper presses or the driving shafts for video recorders. They cannot be measured either owing to their physical dimensions or the accuracy required.
2.4.7â•…Complex Surfaces 2.4.7.1â•…Aspherics Aspheric surfaces are put into optical systems to correct for spherical aberration (Figures 2.215 and 2.216). Spherical aberration always occurs when polishing optical surfaces using random methods. This is because of the central limit theorem of statistics which asserts that the outcome of large numbers of operations, irrespective of what they are, will result in Gaussian statistics of the output variable, say y. So p( y) = exp(− y 2 / 2σ 2 ) / 2π.σ . In the case of optics in three dimensions the p(x, y, z) is of the form exp ( x 2 + y 2 + z 2 / 2σ 2 ) from which the form of the geometry is obviously x2 + y2 + z2, which is spherical. Luckily the sphere is not a bad starting point for optical instruments. In certain regions the curves look similar. See Figure 2.213. The general form for a curve which can be developed to an aspheric form is Equation 2.517 For example consider the general conic equation for Z. By adding a power series (i.e., A1x + A2 x2) the aspheric can be generated. In practice the Best fit aspheric form
Actual profile
Residual
z
Aspheric axis
FIGURE 2.213â•… Best fit aspheric form. (a)
number of terms are about 12 but often up to 20 can be used. This may be to avoid patent problems. Thus the general form Z=
Ra Rt Xp Xv Smx Smn
Average absolute deviation of residuals from best-fit€line. Maximum peak to valley error Distance of residual peak from aspheric axis. Distance of residual valley from aspheric axis Maximum surface slope error Mean surface slope error.
2.4.7.2â•… Free-Form Geometry When a part is determined to some extent by the interaction of a solid part with a fluid or when the performance involves interaction with electromagnetic or indeed any type of wave as in aerodynamics and optics it is not easy to achieve the desired result just by using a combination of simple surface shapes such as cylinders, spheres planes, etc., complex shapes are needed. These are known as free-form surfaces or sometimes sculptured surfaces. This term is not usually given to surfaces which have an axis of symmetry as for aspherics. Free-form surfaces are difficult to define except in a negative sense: it is easier to say what they are not! Campbell and Flyn [179] say “Free-form surfaces are complex surfaces which are not easily recognizable as fitting into classes involving planes and/ or classic quadratics.” Besl [180] has an alternative “a free-form surface has a well defined surface normal that is continuous almost everywhere except at vertices, edges and cusps.” (c)
Hyperboloid K z’. This constraint reduces the count as before.
Table 3.1 Digital vs. Analog Peaks Percentage Drop m 1 3 4 5 10 100
3.2.3 Effect of Quantization on Peak Parameters The quantization interval can influence the count as is shown in Figure 3.8b. It can be seen that using exactly the same waveform, simply increasing the quantization interval by a factor of 2 means that, in the case of Figure 3.8b, the three-point peak criterion fails, whereas in the other case it does not. So, even the A/D resolution can influence the count. In order to get some ideas of the acceptable quantization interval it should be a given ratio of the full-scale signal size, subject to the proviso that the interval chosen gives sufficient accuracy. As an example of the quantitative effect of quantization, consider a signal that has a uniform probability density. If the range of this density is split up into m1 levels (i.e., m blocks) then it can be shown that the ratio of peaks to ordinates is given by
ratio =
1 1 1 − + . 3 2m m 2
n∆z
∫
( n−1)∆z
Ratio Peaks/ Ordinates
From Continuous Signal
0.125 0.19 0.21 0.23 0.25 0.33
70 45 37 30 15 n. In this situation a stable numerical solution is obtained by finding a “singular value decomposition” (SVD) of the matrix A. In this A can be written as a product
u = (u1 , u2 ,…, un )T
(3.222)
where T indicates the transpose and u, the vector form of u. 3.6.1.3 Linear Least Squares Here di is a linear function of the parameters u and there exist constraints aij and bi such that
di = ai1u1 + …+ aiju j + …ainun − bi .
(3.223)
This is a set of linear equations which can be put in the matrix form Au = b:
a11 a21 :. aml
a12 a22 :. …
ain a2 n :. amnl
u1 b1 u2 b2 = . :. :. um bn
(3.224)
Bu = λu
(3.226)
for some eigenvalue λ The case in question is such that B = AT A,
A = USV T
(3.227)
(3.228)
with U and V orthogonal matrices and S a diagonal matrix containing the singular values of A. If i is as in Equation 3.227 the squares of the diagonal elements of S are the eigenvalues of B and the columns of V are the corresponding eigenvectors. These are usually produced as standard output from most software implementations. SVD is now standard for many solutions. See, for example Ref. [44].
3.6.2 Best-Fit Shapes 3.6.2.1 Planes The steps are as follows.
1. Specify point x0, y0, z0 on a plane 2. Evaluate direction cosines (a, b, c) of a normal to a plane; note that any point on a plane satisfies
216
Handbook of Surface and Nanometrology, Second Edition
3. Evaluate distance from plane
di = a( xi − x 0 ) + b ( yi − y0 ) + c(zi − z0 )
Suppose that there is a first estimate u 0 of where the function u crosses the u axis. Then:
a( x − x0 ) + b ( y − y0 ) + c( z − z0 ) = 0
4. Describe the algorithm
The best-fit plane P passes through the centroid –x, –y, –z and this specifies a point in the plane P. It is required to find the direction cosines of P. For this (a, b, c) is the eigenvector associated with the smallest eigenvalue of B = AT A
(3.229)
where A is the m × 3 matrix whose ith row is (xi – –x, yi – –y, zi – –z ); alternatively (a, b, c) is the singular vector associated with the smallest singular value of A. Thus, an algorithm to find the best-fit line in 3D is:
1. Calculate the centroid –x, –y, –z 2. Form matrix A from the data points and –x, –y, –z 3. Find the SVD (singular value decomposition) of A and choose the singular vector (a, b, c) corresponding to the smallest singular value. The best-fit plane is therefore –x, –y, –z, a, b, c.
Similarly for the best-fit line to data in two dimensions. 3.6.2.2 Circles and Spheres These shapes involving an axis of revolution are usually evaluated by linearization of the basic equations mechanically as stated by the process of radius suppression by mechanically shifting the instrument reference to a position near to the surface skin of the geometric element being measured. Failing this an iterative method has to be used. The Gauss– Newton iterative method can be used when the relationship between the distances di and the parameters uj is non-linear. Hence an iterative scheme has to be used. This is similar to the Deming method given in Chapter 2. The situation is shown in Figure 3.38. One iteration of the Newton algorithm for computing the zero of a function is as follows. f(u)
1. Evaluate f(u 0) 2. Form a tangent to the graph at (u 0,, f(u 0)) as shown in Figure 3.38 3. Find u1 where the tangent crosses the u axis
Then
u1 = u0 +
f (u0 ) = u0 + p. f ′(u0 )
(3.230)
u1 is now the new estimate of where f(u) crosses the u axis. This is repeated until the result is close enough to u*. Basically the Gauss–Newton method is as follows. Suppose there is a first estimate u of u*. Then solve the linear least-squares system Jp = − d
(3.231)
where J is the m × n Jacobean matrix whose ith row is the gradient of di with respect to u, that is Ji j =
∂ di . ∂uj
(3.232)
This is evaluated at u and the ith component of d is di(u). Finally, the estimate of the solution is
u : = u + p (: = means update).
These steps are repeated until u is close enough to u*. Ideally, changes in the iteration should be small for this method to be quick in convergence and stable. For example, for the best-fit circle:
1. Specify circle center x0 , y0, radius r. Note that (x–x0)2 + (y–y0)2 = r 2 2. Obtain distance from the circle point: di = ri − r ri = [( xi − x 0 )2 + ( yi − y0 )2 ]1 ⁄ 2
(3.233)
3. The elements of the Jacobean J are ∂d i = − ( xi − x 0 ) ⁄ ri ∂x 0
u
u1
p
u0
FIGURE 3.38 Gauss–Newton method.
∂di = − ( yi − y0 ) ⁄ ri ∂y0 ∂di = −1. ∂r
(3.234)
217
Processing, Operations, and Simulations
4. Algorithm: knowing x0 , y0, and r for the circle center and radius estimates, use them in a Gauss–Newton iteration. Form J p = –d from the d of Equation 3.233 and the J of Equation 3.234 5. Solve px0 J py0 = − d p r
(3.235)
for p 6. Update the x0 , y0, r according to x0 := x 0 + px 0
y0 := y0 + py 0
because one of the parameters is the direction of a line that is the axis of the cylinder. Such a line x0, y 0, z0, a, b, c, in 3D can be specified by four parameters together with two rules to allow the other two to be obtained: Rule 1: represent a direction (a, b, 1) Rule 2: given the direction above, ensure z0 = –ax0–by 0 For nearly vertical lines these two rules give stable parameterization for a, b, x0, and y0. The problem of finding the distance of a data point to an axis is quite complicated. The following strategy is therefore followed based on the fact that for axes which are vertical and pass through the origin, a = b = x0 = y0 = 0 and all expressions become simple. The strategy is as follows:
(3.236)
r := r + rr . Carry on until successful and the algorithm has converged. The linear best-fit circle can be evaluated by an approximation described earlier used by Scott [61] (Chapter 2, Ref. [172]). In this model
S=
∑f
i
2
(3.237)
is minimized, where fi = r 2i –r 2 rather than ri–r as in the linear case—the trick is to make the f 2i linear. By changing the parameters f can be made into a linear function of x0 , y0, and ρ = x20 + y20 r 2,
fi = ( xi − x0 )2 + ( y0 − yi )2 − r 2 = −2 xi x0 − 2 yi y0 − ( x02 + y02 − r 2 ) + ( xi2 + yi2 ).
To rotate a point (x, y, z) apply a 3 × 3 matrix U to the vector (x, y, z)T; the inverse rotation can be achieved by using the transpose of U: u x v = U y . w z
x0 A y0 = b ρ
x + y − ρ. 2 0
2 0
x u y = U T v z w
(3.239)
(from which x0 , y 0 , and ρ are found) where the elements of the ith row of A are the coefficients (2xi12yi1–1) and the ith element of b is xi2 + yi2. An estimate of r is
Note:
(3.238)
Thus
1. Iterate as usual but, at the beginning of each iteration, translate and rotate the data so that the trial best-fit cylinder (corresponding to the current estimates of the parameters) has a vertical axis passing through the origin. 2. This means that when it is time to evaluate the Jacobean matrix the special orientation can be used to simplify the calculations. At the end of the iteration use the inverse rotation and translation to update the parameterizing vectors x0, y0, z0, a, b, c, and thereby determine the new positions and orientation of the axis.
(3.240)
This can be used to get a first estimate of the parameter for the non-linear method if required. Both the linear and non-linear methods described above can be used for spheres. 3.6.2.3 Cylinders and Cones It has been suggested [33] that a modified Gauss–Newton iterative routine should be used in the case of cylinders
(3.241)
A simple way to construct a rotation matrix U to rotate a point so that it lies on the z axis is to have U of the form
C2 U= 0 − S2
0 1 0
S2 1 0 0 C2 0
0 C1 S1
0 S1 C1
(3.242)
where Ci = cosθ and Si = sinθi, i = 1, 2. So if it is required to rotate (a,b,c) to a point on the z axis, choose θ1 so that bC1 + cS1 = 0 and θ2 = aC2 + (cC1 –bS1)S2 = 0. These notes are only suggestions. There are other methods that can be used but these are most relevant to geometric parts like cylinders, which in surface metrology can usually be oriented to be in reasonable positions that is for a cylinder nearly vertical.
218
Handbook of Surface and Nanometrology, Second Edition
Care should be taken to make sure that the algorithm is still stable if reasonable positions for the part cannot be guaranteed.
Cylinder
1. Specify a point x0, y0, z0 on its origin, a vector a, b, c pointing along the axis and radius r 2. Choose a point on the axis. For nearly vertical cylinders z0 = − ax0 − by0
c = 1.
(3.243)
2. Transform the data by a rotation matrix U which rotates a, b, c to a point on the z axis: xi xi yi := U yi . zi zi
3. Form the right-hand side vector d and Jacobean according to Expressions 3.244, 3.246, and 3.247. 4. Solve the linear least-squares system for Px0, etc. Px0 P y0 J Pa = − d . Pb Pr
3. Distance of the chosen point to cylinder is di = ri − r
ri =
(ui2 + vi2 + wi2 )1 ⁄ 2 ⁄ (a 2 + b 2 + c 2 ) 1 2
(3.244)
P x0 x 0 y0 := y0 + U T Py 0 z0 z 0 − P P − P P x0 a y0 b
ui = c( yi − y0 ) − b(zi − z 0 ) vi = a(zi − z0 ) − c( xi − x 0 )
(3.245)
wi = b( xi − x 0 ) − a( yi − y0 ). To implement the Gauss–Newton algorithm to minimize the sum of the square distances the partial deviation needs to be obtained with the five parameters x0, y0, a, b, r (the five independent variables for a cylinder). These are complicated unless x 0 = y0 = a = b = 0 in which case ri = xi2 + yi2 (3.246) ∂di = − xi ⁄ ri ∂x 0 ∂di = − yi ⁄ ri ∂y0 ∂di = − xi zi ⁄ ri ∂a
Algorithm Operation
1. Translate data so that the point on the axis lies at the origin: ( xi , yi , zi ) := ( xi , yi , zi ) − ( x0 , y0 , z0 ).
a Pa T b := U Pb c 1
(3.250)
r := r + Pr. These steps are repeated until the algorithm has converged. In Step 1 always start with (a copy of) the original data set rather than a transformed set from the previous iteration. If it is required to have (x 0, y 0, z0) representing the point on the line nearest to the origin, then one further step is put in:
(3.247)
∂di = −1. ∂r
∂di = − yi zi ⁄ ri ∂b
(3.249)
5. Update the parameter estimates to
where
(3.248)
x0 x 0 a ax0 + by0 + cz0 y0 := y0 − b . a 2 + b 2 + c 2 z0 z 0 c
(3.251)
(See Forbes [33] for those situations where no estimates are available.) Luckily, in surface metrology, these iterative routines are rarely needed. Also it is not yet clear what will be the proportion of workpieces in the miniature domain that will have axes of centro-symmetry. Until now, in micro-dynamics the roughness of rotors and stators has made it difficult to measure shape. However, there is no doubt that shape will soon be a major factor and then calculations such as the one above will be necessary. Cones These can be tackled in the same way except that there are now six independent parameters from (x0, y0, z0), (a, b, c),
219
Processing, Operations, and Simulations
For most cones S0 is chosen such that
t
x0
For cones with moderate apex angle ( RST , the usual case, Equation 4.10 is multiplied by r2
FIGURE 4.7 Stylus pressure-geometric factors.
22 /3 RST . 3 RS
(4.11)
261
Measurement Techniques
This deformation difference vanishes where RS (r)→∞ since then the peak and valley deform equally. Some practical values are a ≈ 45(WR1/ 3 ) nm.
(4.12)
So for W = 0.7 mN (the typical force is about 70 mg) and R = r1 r 2/(r1 + r 2) = 2000 nm, a becomes equal to about 0.2µ m, giving a pressure of about 2500 N mm–2, which is less than the yield pressure for most materials except perhaps for soft coatings. In practice it is the area of contact difference between the peak and valley which is of more importance than the compliance. Similar results to Equation 4.9 are obtained with a stylus having a flat tip whose nominal dimension is 2 µm square. It should be noted that, in practice, the tip is usually longer in one direction (~ 10 µm) than another to allow more mechanical strength. In these circumstances the user should be aware of the fact that it is the longest dimension which is the effective integrator when an isotropic surface is being measured. The more difficult case for deformations on hollow cylindrical workpieces has also been derived [30].
Measured hardness
4.2.2.5 Elastic/Plastic Behavior One important point needs to be made concerning the apparently large pressures exerted by styluses on the surface. The criterion for damage is not that this pressure exceeds the yield stress of the material being measured; it is that the pressure does not exceed the skin yield stress of the material and not the bulk yield stress. Unfortunately these are not the same. If the hardness of the material is plotted as a function of the depth of indentation as in Figure 4.8, the hardness increases with decreasing depth. This increase in
hardness at the skin is not due to work hardening because the same effect occurs when measuring the hardness of indium, which anneals at room temperature. This apparent increase has been attributed to the fact that at very light loads the size of the indentation is small compared with the average distance between dislocations, so the material approaches its theoretical hardness, also the elastic modulus can increase see Figure 8.20 for this reason, in many cases where the stylus should damage the surface, no damage is discernible. See Liu [35] Figure 4.33 Also, the presence of compliance does not in itself distort the picture of the surface. Providing that (i) it is reasonably small compared with the texture being measured and (ii) it is more or less constant, an output of high-mechanical fidelity can be obtained. Typical tips are shown in Figure 4.9. Although at first sight these two types of stylus shape appear to be very different, in practice they are not. Usually what happens is that a spherical tip develops a flat on it with use and the flat tip gets rounded edges. Hence they tend to converge on each other in shape with wear. Most styluses are made out of diamond. To get the longest life out of the tip, if possible the diamond should have its (111) crystallographic axis parallel to the plane of the workpiece and hence movement; other planes are considerably softer. For the measurement of very fine surface roughness it is possible, with patience, to get a stylus tip down in size to smaller than 0.1 µm. However, under these circumstances the stylus force has to be considerably reduced. A typical force used in these applications is 1 mg. i.e., 10 –2 N. In the discussion above concerning loading it should be noted that, if the surface has been produced by polishing or similar finishing, there will always be a scale of size of asperity much smaller than the stylus (i.e., fractal) so that the curvatures will necessarily be high and the compliance will be so great that plastic flow will inevitably result. However, such marks are very small “in energy” compared with the major
Bulk hardness 1.0 0.1
10
Depth of penetration (µm)
FIGURE 4.8 Skin hardness relative to bulk hardness. (Hunt, P. and Whitehouse, D. J., Talystep for multiphysical measurement of plasticity index, Rank Taylor Hobson Technical report no 12, 1972.)
FIGURE 4.9 Typical stylus tips.
262
Handbook of Surface and Nanometrology, Second Edition
finishing scratches and can be neglected for most practical purposes. The same applies to fractal surfaces.
which is when the stylus is in a valley. In this case WD adds to F so that the maximum reaction is W = 2F. The important factor here is that it is in the valleys where the stylus is likely to cause damage due to the actual force not at the peaks. Damage at the peaks is due to the use of a skid which only touches the peaks (Figure 4.11) so it is quite easy to tell what is causing the damage simply by looking at the surface under a microscope. Also the nature of the mark is another indicator because the stylus tends to produce a scratch whereas the stylus produces a burnished mark because of the smearing effect.
4.2.2.6â•…Stylus and Skid Damage Prevention Index The criterion for surface damage can be given in terms of the hardness H (preferably of the skin) of the test material and the elastic pressure Pe[31]. It is the imbalance between these two pressures that gives rise to damage. Thus the comparison is between H, the Vickers or diamond pyramid number (DPN) and Pe. Vickers or Knoop hardness is preferred because they most nearly imitate the effect of the diamond onto the surface. Let ψst be the damage index. Then Pe = ψ st H .
(4.13)
ψ stylus =
If ψst > 1 then Pe is bigger than the yield stress of the material so damage occurs. For ψst < 1 only elastic deformation takes place, at least in priniple. So
ψ skid =
P W ψ st = e = H π a2H
(4.14)
where a is the elastic contact radius on the surface and W is the load. This has two components one static F and one dynamic WD. Using the values of W, a, and H an estimation of ψ can be made [31]. Equation 4.14 is assuming average pressure, if the maximum pressure is deemed more appropriate this should be multiplied by a factor usually taken to be 4/3. In the Formula 4.14 the values of W and H need to be understood. Consider first the reaction on the surface W. As indicated above
W = F − WD .
(4.15)
The criterion for trackability, i.e., that the stylus does not leave the surface when being traced across it is that W is never negative (i.e., F ≥ WD). The minimum value of W therefore is–F or −WD = F, i.e., W ≥ 0. This occurs at a peak where the stylus is being accelerated by the surface yet being held on by the stylus spring and gravity, see Figure 4.10. If this criterion holds it automatically sets the criterion for the maximum reaction at the surface. This is when the retardation of the stylus system is a maximum
−2 / 3 Pe st k 2 FR = H H E −2 / 3 Pe sk k W ′r = H H E
W′ ψ sk = ψ st 2F
FIGURE 4.10â•… Stylus Damage Index. (Whitehouse, D. T., Proc. Inst. Mech. Engrs., 214, 975–80, 2000.)
2 /3
Rst , rs
ψ sk > ψ st .
(4.18)
(4.19)
In other words, the skid is much more likely to cause damage than the stylus. Furthermore skid damage is at the peaks and
W=0
E
1/ 3
where W′ is the load on skid, Rst is stylus radius and rS is average asperity radius. So knowing that W’ >> 2F and Rst ≤ rs
Skid
W'
W = 2F
(4.17)
valley (Figure 4.10a), whereas in Equation 4.17 “r” is taken to be the radius of the peak on the surface and the curvature of the skid is neglected because its radius has to be large (Figure 4.10b). Also the reaction due to the stylus dynamics is at most equal to the static force F i.e., W = 2F. However, the skid does not have a big dynamic component: if it is integrating properly over a large number of peaks it should be practically zero. This does not make W′ equal to F because W′ is the load of the whole traversing unit, which is much greater than F, k is a constant dependent on Poisson’s ratio. See for example the term with E as the denominator in Equation 4.5 If the damage index for the stylus is ψst and that for the skid is ψsk then
R
Stylus
(4.16)
r E
FIGURE 4.11â•… Skid damage. (Whitehouse, D. T., Proc. Inst. Mech. Engrs., 214, 975–80, 2000.)
263
Measurement Techniques
any stylus damage is in the valleys. Which of the two possibilities is causing the damage can therefore be identified. Because of Equation 4.19 and other practical reasons, the use of a skid is not a preferred ISO method although in some cases where the component cannot be taken to an instrument its use is unavoidable. Also the skid is usually offset laterally from the stylus track to ensure that any skid damage does not affect the profile obtained by the stylus. For cases in surface investigation where the dynamic reaction WD is needed, for example in friction measurement, the normal force due to the geometry is WD (see Section 4.2.2.7 in this chapter) where 1/ 2
2 2 4 1 w w WD = 3 Rq Mw 1 + ( 4ζ 2 − 2 ) + , (4.20) wn ε wn 2 n
where M is the effective mass of the pick-up (see next section), wn the resonant frequency of the pick-up and ζ the damping term, ε represents the type of surface. For ε → 0 the surface becomes more random as in grinding. Rq is the RMS roughness. Notice that surface parameters and instrument parameters both contribute to the measurement fidelity. In Equation 4.16 the W term has been explained. The other parameter H is more difficult because it is material processing dependent. It is not the bulk hardness but the “skin” hardness which is different (see Figure 4.8). Unfortunately the index ψ is very dependent on H; far more than on the load as can be seen in Equation 4.21.
ψ=
k 1/ 3 R W . E H
−2 / 3
.
(4.21)
There is another factor which is apparent in Figure 4.16 and is contained indirectly in Equation 4.9 which is the fact that the effective radius of contact is different at the peaks than it is at the valleys: at the peaks the curvatures of the surface and the stylus add whereas the curvature of the surface subtracts from that of the tip at the valleys. This adds another factor to Equation 4.21. In other words, the interesting point is that although the area of contact at the peaks is smaller than at the valleys the dynamic load is lighter so that the differential pressure between the peaks and valleys tends to even out whereas for the skid pressure is always exerted at the peaks! However as the tip radius is small when compared with that of the surface its curvature is invariably the dominant factor in determining the contact so that subject to the criterion that the stylus is just tracking the surface W = 2F and Equation 4.21 holds. 4.2.2.7â•…Pick-Up Dynamics and “Trackability” The next problem that has to be considered is whether the stylus tracks across the surface faithfully and does not lift off. This is sometimes called “trackability.” In what follows
the approach used by surface instrument-makers [32] will be followed. The bent cantilever as found in SPM instruments will not be discussed here but late in the section. Reference [33] shows a cantilever used in a simple high-speed method of measurement. It is well known that high-effective mass and damping adversely affect the frequency response of stylus-type pickups. However, the exact relationship between these parameters is not so well known. This section shows how this relationship can be derived and how the pick-up performance can be optimized to match the types of surfaces that have to be measured in practice. The dynamics of pick-up design fall into two main areas:
1. Frequency response 2. Mechanical resonances that give unwanted outputs These will be dealt with separately.
4.2.2.7.1â•… Frequency Response The frequency response of a stylus system is essentially determined by the ability of the stylus to follow the surface. That is, the output does not fall at high frequencies, although the stylus will begin to mistrack at lower amplitudes. This concept, termed “trackability,” was coined (originally) by Shure Electronics Ltd to describe the performance of their gramophone pick-ups. A figure for trackability usually refers to the maximum velocity or amplitude for which the stylus will remain in contact with the surface at a specified frequency. A more complete specification is a graph plotting the maximum trackable velocity or amplitude against frequency. Remember that the frequency is in fact a spatial wavelength of the surface traced over at the pick-up speed. 4.2.2.7.2â•… Trackability—The Model The theoretical trackability of a stylus system can be studied by looking at the model of the system and its differential equation (Figures 4.12 through 4.14). Here M* is the effective mass of the system as measured at the stylus (see later), T is the damping constant (in practice, air or fluid resistance and energy losses in spring), K is the elastic rate of spring, F is the nominal stylus force (due to static displacement and static weight) and R is the upward reaction force due to the surface. An alternative pick-up system for high-speed tracking using a stylus system suited to areal mapping is discussed in Reference [33]. Trackability is usually quoted in terms of velocity, so initially the system will be looked at in terms of velocity. If the surface is of sinusoidal form, its instantaneous velocity will
Spring Beam Pivot
FIGURE 4.12â•… Schematic diagram of suspension system.
264
Handbook of Surface and Nanometrology, Second Edition
K
T
10
M*
ζ=0
Z
0.2 C
R
1
0.6
FIGURE 4.13 Model of system.
1.0
Spring K
2.5 Transducer Stylus
Damper T
0.1 0.1
Mass M
FIGURE 4.14 Side-acting stylus pick-up.
also be of sinusoidal form; hence the instantaneous velocity v = a sin ωt. Now, the equation representing the system is given in differential form by .. . M ∗ z + T z + Kz + F = − R (t ).
(4.22)
analysis only assumes sinusoidal waves on the surface). Here ωn = 2π × undamped natural frequency of system, that is
∫
M ∗ ν + Tν + K νdt + F = − R ( t ).
(4.23)
The first analysis will be in terms of the amplitude of velocity and the condition for trackability. Hence
K = a − M * ω cos ω t − T sin ω t − F ω
(4.25)
This can be simplified further to
| a |=
|a|=
AN K ω 2n ω 2 + + 4ζ 2 − 2 M* ω n ω 2 ω 2n
ω2 ω2 | a | = AN ω n n2 + 2 + 4ζ 2 − 2 ω ωn
− 1/ 2
−1 / 2
(4.28)
where AN is the nominal stylus displacement. The factor C is shown plotted for various values of ωn in Figure 4.15. (4.26)
but for zero trackability R(t) = 0 and hence
T . 2ω n M *
= AN ω nC ,
1/ 2 2 K 2 * R(t ) = a − M ω + T sin(ωt − ϕ ) − F ω
Tω ϕ = tan −1 K − M * ω2
ζ=
a R(t ) = − M * a cos ω t − Ta sin ω t + K ω cos ω t − F (4.24)
ω n = 2 πfn = ( K /M *)1/ 2
and the damping factor (ratio) is
or in terms of velocity
10
FIGURE 4.15 Response of system. (From Frampton, R. C., A theoretical study of the dynamics of pick ups for the measurement of surface finish and roundness. Tech. Rep. T55, Rank Organisation, 1974.)
Workpiece
1
ω/ω0
F (4.27) M * ω 0 (ω 02 /ω 2 + ω 2 /ω 20 + 4ζ 2 − 2)1/ 2
where |a| is the amplitude of the maximum followable sinusoidal velocity of angular frequency ω (Note that this
4.2.2.7.3 Interpretation of Figure 4.15 Any pick-up will have a fixed value of ωn and ζ .Hence, from the family of curves, the value of C can be found for any ratio of ωn from 0.1 to 10. Thus
ω fV = n , ωn λ
(4.29)
where V is the stylus tracking velocity and λ the wavelength of the surface. Having found a value of C, then the peak
265
Measurement Techniques
stylus velocity for which the stylus will stay in contact with the surface is given by
a = 2πfn AN C.
S=
2π fn AN C , V
(4.30)
where S is the maximum slope for which the stylus will stay in contact with the surface at tracking velocity V. Expression 4.30 is the usual expression used by manufacturers of instruments.
Trackable amplitude (µm)
This can also be written in terms of slope:
4.2.2.7.4â•… Trackability in Terms of Amplitude Equation 4.23 can be solved for a displacement which is of a sinusoidal form. This gives the result which can be directly connected with the velocity expression of Equations 4.24 and 4.27:
ω ω2 ω2 | a | = AN n n2 + 2 + 4ζ 2 − 2 ω ω ωn
10
1/ 2
.
(4.31)
Apk =
AN . 2ζ
(4.32)
(Note that for zero damping this peak is infinite.) The position of the peak is also definable as
λ pk =
V V ≈ . 2 1 / 2 fn (1 − ζ ) fn
(4.33)
Figure 4.16 shows the relationship between f n and λpk for various V. These simple equations allow a linear piecewise construction of the trackability curve to be easily drawn for a particular pick-up, providing that the resonant frequency, damping ratio, nominal stylus displacement and tracking velocity are known. The resonant frequency can be calculated from the effective mass M* and spring rate K, that is
f0 =
1 K 2π M *
1/ 2
.
Vr = 90 Mm min–1 Vr = 309
Real surface considered to be sine waves
100 Wavelength (µm)
1000
FIGURE 4.16â•… Trackability of system.
This curve is plotted for different values of ω0 (A N and ζ fixed at 0.25 and 0.2 mm, respectively). It will be noticed that the trackability at long wavelengths is only determined by the nominal stylus displacement. For example, if a stylus has a nominal displacement of 0.1 mm to obtain the correct stylus pressure, then it will be able to track sine wave amplitudes of 0.1 mm. There is also a noticeable peak in the curve, which is due to the resonant frequency of the stylus suspension. This peak is of a magnitude
5 Hz 10 Hz
–50 Hz
f0 –100 Hz
(4.34)
The damping ratio ζ = T/2ωn M* (where T is force/unit velocity). Unfortunately T is rarely known, but since in the conventional case low damping is required, ζ can be approximated to 0.2 since in practice it is difficult to obtain a value significantly lower than this. There are occasions, however, where fixed values of ζ are required (as will be seen shortly) which are much larger than 0.2 (see Figure 4.24). 4.2.2.7.5â•…Application of Instrument, Trackability Criterion to Sinusoidal Surfaces For any surface the amplitude spectrum must lie at all points below the trackability curve. Figure 4.16 shows some typical trackability curves in relation to an area termed “real surfaces”. The upper boundary represents a wavelength-toÂ�amplitude ratio of 10:1, which is the most severe treatment any pick-up is likely to receive. The lower boundary represents a ratio of 100:1, which is more typical of the surfaces to be found in practice. It is worth noting here that if a surface has a large periodic component, this can be centered on the resonant frequency to improve the trackability margin for difficult surfaces (see Figures 4.16 through 4.19). 4.2.2.8â•…Unwanted Resonances in Metrology€Instruments There are two principal regions where resonances detrimental to pick-up performance can occur (Figure 4.20): (i) the pick-up body and (ii) the stylus beam. These resonances are detrimental in that they have the effect of causing an extra displacement of the stylus with respect to the pick-up sensor (photocells, variable reluctance coils, etc).
266
Handbook of Surface and Nanometrology, Second Edition
200 90
0H z 20
50
10 =
5
10
300 mm min–1
10
f
Tracking velocity
0
Tracking velocity V(mm min–1)
Wavelength for peak trackability λ(mm)
100
1
50
0
10
100 Resonant frequency f0(Hz)
102 103 Wavelength for peak trackability λ(µm)
FIGURE 4.19â•… Wavelength for maximum trackability.
FIGURE 4.17â•… Tracking velocity—resonant frequency. Body
Stylus beam
A/2ζ
FIGURE 4.20â•… Schematic diagram of pick-up.
Amplitude
1
Short-wavelength amplitude fall-off 40 dB/decade
0.1
FIGURE 4.21â•… Vibration mode of pick-up body. 0.1
1 V/fn
for the fundamental. There are only two ways in which the effect of this resonance can be reduced:
10
FIGURE 4.18â•… Linear approximation to trackability curve.
4.2.2.8.1â•… Pick-Up Body The pick-up body can vibrate as a simple beam clamped at one end (Figure 4.21). In the schematic diagram of the pick-up (Figure 4.20) the resonant frequency is given by
π fn = 2 8L
EI A
(4.35)
1. By increased damping or arranging the sampling to minimize effects to white-noise inputs. See the treatment of damping for white-noise surfaces by Whitehouse [34]. 2. By designing the body such that the resonance frequency is well outside the frequency range of interest. This, the standard approach, will be discussed first.
This resonant frequency can be calculated for beams with one end free or both clamped, or where the beam is solid or hollow, circular or rectangular in cross-section, or whatever.
267
Measurement Techniques
4.2.2.8.2â•… Stylus Beam It is evident that the stylus beam can resonate to give an unwanted output, and again this can be removed by damping or shifting the frequency out of the useful range. Since the stylus beam is of a uniform shape the resonant frequency can be reliably predicted providing the mode of vibration can be defined. However, this is difficult to define. A beam with one end clamped and the other free is unlikely, since the stylus is presumably always in contact with the surface. On the other hand, the boundary conditions for a clamped beam do not hold either, since the slope at the beam ends is non-zero. Hence, if f n is the resonant frequency for a free beam then f′n = 9 × f n (where f n is the resonant frequency for a clamped beam) and the resonant frequency will probably be somewhere between 2f n and 9f n , since there will be nodes at both€ends. Example fn (one end free) was calculated at 576 Hz (measured at 560 Hz). fn (both ends clamped) was calculated at 5.18 kHz. Hence the true resonant frequency will be greater than 1.12 kHz, and may be as high as 5 kHz.
4.2.2.9â•…Conclusions about Mechanical Pick-Ups of Instruments Using the Conventional Approach An expression for the trackability of a pick-up has been derived [32,89] and from this a number of criteria can be inferred for a good pick-up:
1. The resonant frequency, comprising the stylus effect mass and the suspension spring rate, should be as high as possible. 2. The nominal stylus displacement should be as high as possible. 3. The pick-up damping ratio should be as low as possible within certain constraints, although in practice any value less than unity should be satisfactory and, as will be seen, the value of 0.6 should be aimed for if finishing processes such as grinding are being measured. 4. To eliminate unwanted resonances the pick-up body and stylus beam should be as short and as stiff as possible.
Some of these conditions are not compatible, for example 1 and 2. Condition 1 ideally requires a high-spring rate, whereas Condition 2 requires a low spring rate to limit the force to give an acceptable stylus pressure. Similarly, Condition 4 is not consistent with the measurement of small bores. However, a compromise could probably be obtained by using a long stylus beam to obtain the small-bore facility and keeping the pick-up body short and stiff. The beam resonance must of course be kept out of the range of interest unless the surface is completely random.
Note 1: Effective mass of a beam In the inertial diagram of a stylus beam (Figure 4.22) the effective mass is given by M* = M +
mh 2 aL ah h 2 + + × , L2 3 3 L2
(4.36)
where a = mass/unit length. If L/h = n then L2/h2 = n2, that is
M* = M +
m aL ah m ah n 3 + 1 + + = M + + . n2 3 3n 2 n2 3 n2
The effect of increasing L is (L + ΔL)/h = n’ and hence M 2* = M +
m ah n ′ 3 + 1 + . n ′ 2 3 n′ 2
(4.37)
Therefore As an example, for n = 2 and n’ = 3 ∆M * =
ah (0.86) − m(0.14). 3
Note 2: Resonant frequency of a beam (Figure 4.23) It can be shown that the small, free vertical vibrations u of a uniform cantilever beam are governed by the fourth-order equation ∂ 2u 2 ∂ 4 u +c = 0, ∂t 2 ∂x 4
(4.38)
where c2 = EIρA and where E is Young’s modulus, I is the moment of inertia of the cross-section with respect to the z axis, A is the area of the cross-section and ρ is the density. Hence it can be shown that /c 2G = β 4 F ( 4 ) /F = G
where β = constant
F ( x ) = A cos βx + β sin βx + C cosh βx + D sinh βx (4.39) G (t ) = a cos cβ 2t + b sin cβ 2t . L M
Stylus end
h m Fulcrum
FIGURE 4.22â•… Inertial diagram of stylus beam.
x
y z
FIGURE 4.23â•… Resonant frequency of beam. (Whitehouse, D. J., Handbook of Surface Metrology, Inst. of Physics, Bristol, 1994.)
268
Handbook of Surface and Nanometrology, Second Edition
If the beam is clamped at one end it can be shown that
Hence, because the input is a sine wave, comparing A with A’represents a transfer function. What are the implications of this equation? One point concerns the behavior at resonance. Putting ω = ωn shows that the dynamic force is 2AMω2nζ which is obviously zero when ζ = 0. The dynamic component is zero because the stylus is in synchronism with the surface. The force is not zero, however, because the static force F is present. This means that, even if the damping is zero and the system is tracking at the resonant frequency, it will still follow the surface. This in itself suggests that it may be possible to go against convention and to track much nearer the resonant frequency than was otherwise thought prudent. Another point concerns the situation when ω > ωn. Equations 4.42 and 4.43 show that the forces are higher when ω gets large; the force is proportional to ω2 for ω > ωn and hence short-wavelength detail on the surface is much more likely to suffer damage. Points such as these can be picked out from the equations of motion quite readily, but what happens in practice when the surface is random? Another question needs to be asked. Why insist on dealing with the system in the time domain? Ideally the input and output are in the spatial domain. It seems logical to put the system into the spatial domain also.
βL =
π (2n + 1) where n = 0, 1, 2... . 2
(4.40)
Hence, if the beam resonates in its fundamental mode β=
and fn =
π 2L
and ω n =
1 π 2 EI 2 π 4 L2 ρA
1/ 2
=
cπ 2 4 L2 π EI 8 L2 ρA
1/ 2
(4.41)
4.2.3 Relationship between Static and Dynamic Forces for Different Types of Surface The condition for stylus lift-off has been considered in Equations 4.27 and 4.30. These considerations have tended to deal with the behavior of the system to sinusoidal inputs. Only on rare occasions do such inputs happen. More often the input is random, in which case the situation is different and should be considered so. In what follows the behavior of the pick-up system in response to both periodic and random inputs will be compared. None of this disputes the earlier analysis based on the trackability criterion R = 0; it simply looks at it from a different point of view, in particular the properties, and comes up with some unexpected conclusions. First recapitulate the periodic case [32]. 4.2.3.1 Reaction due to Periodic Surface Using the same nomenclature as before for an input A sin ωt: 2
1/ 2
2 ω 2 ω AMω 1 − + 4ζ 2 ωn ωn 2 n
(4.42)
sin(ω t + ϕ ) + F = − Rv (t ) where
ϕ = tan −1
2ζ(ω /ω n ) . 1 − (ω /ω n )2
This shows that the reaction is in phase advance of the surface geometry. Hence energy dissipation can occur. For convenience the dynamic amplitude term can be rewritten as 1/ 2
2 4 ~ ω ω R max (t ) = AMω 2n 1+ (4ζ 2 − 2) + = A′. (4.43) ω n ω n
4.2.3.2 Reaction due to Random Surfaces For periodic waveforms other than a sinusoid the Fourier coefficients are well known and the total forces can be derived by the superposition of each component in amplitude and the phase via Equation 4.42 because the system is assumed to be linear[34]. However, for a random surface this is not so easy: the transformation depends on the sample. For this reason the use of random process analysis below is used to establish operational rules which allow the dynamic forces to be found for any surface using parameters derived from initial surface measurements. Using random process analysis also allows a direct link-up between the dynamic forces and the manufacturing process which produces the surface. The inputs to the reaction are made up of z ( t ), z ( t ),and z(t). For a random surface these can all be assumed to be random variables. It is usual to consider these to be Gaussian but this restriction is not necessary. If these inputs are random then so is Rv(t). The mean of Rv(t) is F because E[ z ( t )] = Ez[(t )] = E[ z (t )] = 0 ∼ and the maximum value of Rv,(t) can be estimated from its standard deviation or variance. Thus
v (t )) var(− R .. . = E[( Mz (t ) + Tz (t ) + kz(t ))2 ] .. . = M 2 σ c2 + T 2σ s2 + k 2σ 2z + 2{MTE[z (t ) z (t )] . + MkE[ z..(tt ) z (t )] + TkE[z(t )z (t )]}
(4.44)
or
σ 2R = M 2σ c2 + T 2σ 2s + k 2σ 2z − 2 Mkσ s2 .
(4.45)
269
Measurement Techniques
Equation 4.45 results because it can be shown that all crossterms are zero except, E[z¨ (t).z(t)] which is equal to –σ 2s and where σ 2s , σ 2s , σ 2z are the variances of z ( t ), z ( t ), and z(t), respectively. Letting k /M = mn2 T /M = 2ζω n and σ 2z = Rq2 = (to agree with the international standard for the standard deviation of surface heights)
(4.46)
4.2.3.3â•…Statistical Properties of the Reaction and Their Significance: Autocorrelation Function and Power Spectrum of R(t) These characteristics are actually what are being communicated to the mechanical system.
(4.47)
4.2.3.3.1â•… Autocorrelation Function The autocorrelation function is given by E[R(t)R(t + β)] where R(t) is defined as earlier. Removing the F value and taking expectations gives AR (β) where AR (β) is given by
1/ 2
σ2 σ2 σ 2 σ R = Rq M c2 + 4ζ 2ω 2n s2 + ω 4n − 2ω ω 2n s2 σz σz σ z = Rq M [ v 2ω 2 + (4ζ 2 − 2)ω 2ω 2n + ω n4 ]1/ 2 or
1/ 2
2 4 ω v2 ω σ R = Rq Mω 2n 1 + (4ζ 2 − 2) + 2 ωn ω ωn
has a value of ε = 0.58 whereas one having a Lorenzian correlation function of 1/(1 + β2) has a value of ε = 0.4. Note here that, according to Equation 4.49, as ε→0, Rmax→∞. In practice this cannot occur because Rmax is curtailed at the value F when stylus lift-off occurs.
where
σ 2 2πV ω = s2 = σ z λ q 2
2
2
σ 2 2πV and v = c2 = (4.48) σ s λ q 2
In relationship (Equation 4.48) λq is the average distance between positive zero crossings and λ q that of peaks. ~ Taking 3σR as the typical R max(t) value and writing ν/ω = 1/ε yields the dynamic force equation for a random surface: 1/ 2
2 2 4 max (t ) = 3 Rq Mω 2n 1 + (4ζ 2 − 2) ω + 1 ω . R ωn ε ωn
4 4 d ( Az (β)) 1 + . ωn dβ 4
(4.50)
It can be shown that the odd terms in the evaluation disappear because the autocorrelation function is an even function. In Equation 4.50 neither A R (β) the autocorrelation function of the reaction, nor Az(β) that of the surface profile, is normalized. The normalizing factor A(0), the variance of this vertical component of reaction, is given by
(4.49) If 3Rq for the random wave is taken to be equivalent to A for a periodic wave, then Equation 4.49 corresponds exactly to the expression given in Equation 4.43 except for the term (1/ε)2 which is a measure of the randomness of the surface. The term ε is a surface characterization parameter. It is also, therefore, a characterization of the type of reaction. Equation 4.49 allows the dynamic component of reaction for any type of surface to be evaluated. For ε = 1 Equation 4.49 reduces to Equation 4.50 and is true for a periodic wave. Strictly, ε = 1 corresponds to the result for a sine wave or any deterministic wave, although the dynamic characteristics for such surfaces will be different. This difference is related more to the rate of change of reaction, rather than the reaction force. In the next section this will be considered. However, the primary role of the ε parameter is in distinguishing random from periodic surfaces and distinguishing between random surfaces. As ε tends more and more to zero the surface becomes more random until when ε = 0 the surface is white noise; any type of random surface can be characterized but ε cannot take values larger than unity. As examples of intermediate surfaces, one which has a Gaussian correlation function
2 1 d2 AR (β) = M 2 ω 4n Az (β) − ( A (β))(4ζ 2 − 2) ω n dβ 2 z
σ2 σ2 A(0) = M 2ω 2n σ 2z + s2 (4ζ 2 − 2) + c4 . ωn ωn
(4.51)
It is clear from Equation 4.50 that there is a difference between AR (β) and Az(β) This difference can be seen more clearly by reverting to the power spectral density (PSD) P(ω) derived from the autocorrelation function and vice versa. Thus
1 A (β ) = π
∞
∫ P (ω ) cos ωβ dω .
(4.52)
0
4.2.3.3.2â•… Power Spectrum of R(t) Taking the Fourier spectrum of both sides, squaring and taking to the limit gives PR(ω) in terms of Pz(ω) Hence
4 ω 2 ω PR (ω ) = Pz (ω ) 1 + (4ζ 2 − 2) + . (4.53) ω n ωn
The expression in square brackets, designated H(ω) is the weighting factor on Pz(ω) to produce PR (ω) (see Figure 4.24). Later H(ω) is examined to decide how it needs to be manipulated to reduce surface damage. Equation 4.53 does not contain a term involving ε despite the fact that it refers to random
270
Handbook of Surface and Nanometrology, Second Edition
2.0
Critical damping weighting factor No-minimum criterion ζ = 1/√2
1.5
Weighting factor
It will be remembered that the reaction is of the same wavelength as the periodicity but out of phase (Equation 4.42). Letting A2 / 2 Rq2 = σ 2z (the variance of the surface) the energy loss per unit cycle becomes Equal-area criterion ζ = 0.59
0.8
(4.56)
This equation contains all the system parameters and two surface parameters, Rq for amplitude and ω for spacing. For a random wave Equation 4.54 gives
Min–max criterion ζ = 0.54
1.0
J = 4 R 2qπ Mζω nω.
L
1 . ( Mz + Tz + kz ) zdt J= L
∫
(4.57)
0
0.6 0.4
Evaluating Equation 4.57 and letting T / M = 2ζω n , k / M = ω 2n , σ 2s = 1/L ∫ z 2dt , σ 2z = σ 2s ω q2 , L = λ q and ω q = 2 π /λ q gives the energy loss J per unit equivalent cycle λq:
Zero damping weighting factor ζ = 0 (hkh cut region > ωv, where ωv is the highest frequency on the surface, but this is not always possible. In fact with the tracking speeds required today ωv invariably approaches ωn in value, it being difficult to make ωn sufficiently high by stiffening or miniaturizing the system. A further reason why this option is becoming difficult to achieve with modern contact instruments is the need to measure shape and form as well as texture with the same instrument. This requires the instrument to have a large dynamic range, which means that the spring rate k is kept low in order to keep the surface forces down for large vertical deflections. Having a low k value invariably requires that ωn is small. The implication therefore is that ω ≤ ωn for tactile instruments because of the potential damage caused by the forces exerted when ω > ωn. If ωn is low, as is the case for wide-range instruments, then the only practical way to avoid damage is to reduce the tracking speed V, thereby reducing ωv and consequently the forces. However, instead of insisting that ωn is made high (the effect of which is to make H(ω) ~1 for each value of ω there is an alternative criterion which relaxes the need for a high value of ωn by utilizing all the band of frequencies up to ωn In this the damping ratio ζ is picked such that ∞
∫ H ( ω ) dω = 1
(4.59)
0
so, although H(ω) is not unity for each individual value of ω it is unity, on average, over the whole range. This has the effect of making Ayr(β) = Ay(β) which is a preferred objective set out in Chapter 5. From Figure 4.24 it can be seen that the shape of H(ω) changes dramatically with damping. The curve of H(ω) has a minimum if ζ < 1/ 2. This is exactly the same criterion for there to be a maximum in the system transfer function of Equation 4.81. It also explains the presence of the term (4ζ2–2) in many equations. Evaluating Equation 4.59 shows that for unit area ζ = 0.59. With this value of damping the deviation from unity of the weighting factor is a maximum of 39% positive at ω = ωn and 9% negative for a ω = 0.55ωn. The minimax criterion gives ζ = 0.54 with a deviation of 17% and the least-squares criterion gives a ζ value of 0.57. All three criteria produce damping ratios within a very small range of values! Hence if these criteria for fidelity and possible damage are to be used the damping ratio should be anywhere
between 0.59 and 0.54. Within this range the statistical fidelity is assured. This has been verified practically by Liu [35]. It is interesting to note that the interaction between the system parameters and the surface can be taken a step further if the properties of the rate of change of reactional force are considered. Thus
Rv′(t ) =
d [ Mz..(t ) + Tz.(t ) + kz ] dt
(4.60)
from which, by the same method of calculation and letting σ c2 /σ s2 = v 2 , σ 2j /σ t2 = η2 1/ 2
2 2 4 v 1 v Rmax (t ) = 3 Mσ sω 2n 1 + (4ζ 2 − 2) + , ωn γ ωn (4.61)
where γ = v/η and η = 2π/λj and λj is the average distance between positive points of inflection of the surface profile; η = 2 π / λ q where λ is the average distance between peaks (σ 2j = E ( z.. (t )2 )). As before, putting 3δs = A’ for the derivative of the periodic wave and γ = 1 yields the corresponding formula for the rate of change of reaction for a periodic wave. In Equation 4.61 γ is another surface characterization parameter of one order higher than ε. It seems that this is the first time that the third derivative of the surface has been used to describe surface properties—this time with respect to the instrument system. 4.2.3.6â•…Alternative Stylus Systems and Effect on Reaction/Random Surface For low-damage risk R should be small (Figure 4.25). To do this F should be made small. If it is made too small the reaction becomes negative and the stylus lifts off. The signal therefore loses all fidelity. Making the maximum dynamic component of force equal to—F ensures that this never happens and that the maximum reaction possible is 2F. For minimum damage overall, therefore, F and should be reduced together and in parallel. A step in this reduction is to remove kz, which is the largest component of force. This means that the system acts as a gravity-loaded system in which F = mg where m is mass and g is the acceleration due to gravity. In this simplified system the range of “following” frequencies is given by
f ≤k
1 2π
g , a
(4.62)
where k is a constant which depends on the constituent parts of the pick-up and how they are connected; a is the amplitude of the surface signal. Furthermore, if T is removed using air bearings or magnetic bearings, a very simple force equation becomes
R(t ) = F + Mz...
(4.63)
272
Handbook of Surface and Nanometrology, Second Edition
2.0
Full second-order system critical damping ζ = 1
Second-order systemno displacement ζ=1
Weighting factor
1.6
ζ = 1 ζ = 0.59
1.2 Second-order system with no displacement or damping ζ=0
0.8
0.4
Full second-order system ζ = 0
0
0.4
0.8 ω/ωn
1.2
1.6
FIGURE 4.25 Alternative systems. (Whitehouse, D. J., Proc. IMechE., J. Mech. Eng. Sci., 202, 169, 1988.)
To get the height information from such a system does not mean that the signal has to integrate twice. The value of z could be monitored by non-intrusive means, such as an optical method that locates on the head of the stylus. Equation 4.63 implies a transfer function proportional to 1/Mω2 which drops off with a rate of 12 dB per octave. If such systems are considered then they can be compared with the full system over a bandwidth up to ωn as for H(ω) 4.2.3.7 Criteria for Scanning Surface Instruments There are a number of criteria upon which an instrument’s performance can be judged. These are (laterally and normal to the surface) range/resolution, speed or response, and, fidelity or integrity another is the wavelength–amplitude graph of Stedman. Cost is left out of this analysis. The range of movement divided by the resolution is a key factor in any instrument as it really determines its practical usefulness. At the present time in surface instruments it is not unreasonable to expect ratio values of 105 or thereabouts. One restriction previously encountered in the x and y lateral directions, was due to the relatively coarse dimensions of the probe. This has been largely eliminated by the use of styluses with almost atomic dimensions. The same is true of optical resolution limitations. The ability to manufacture these very sharp styluses has resurrected their importance in instrumentation. At one time the stylus technique was considered to be dated. Today but with the advent of scanning probe microscopes it is the most sophisticated tool! This is partly due to a swing in emphasis from height measurement, which is most useful in tribological
applications, to lateral or spatial measurements, which are more important in applications where spacing or structure is being investigated, namely in the semi-conductor and microelectronics industries and in biology and chemistry. A need for lateral information at nanometer accuracy has resulted in an enhanced requirement for the fidelity of measurement. At the same time the speed of measurement has to be high to reduce the effects of the environment. Speed has always been important in surface measurement but this is particularly so at the atomic level where noise, vibration, and thermal effects can easily disrupt the measurement. Speed of measurement is straightforward to define but fidelity is not so simple. In this context fidelity is taken to mean the degree to which the instrument system reproduces the surface parameter of interest. As has been seen, failure to achieve high fidelity can be caused by the probe failing to follow the surface characteristics owing to inertial or damping effects which are temporal effects or, alternatively, making a contact with such high pressure that the surface is damaged which are material effects or integrating the surface geometry which is a spatial effect. Hence the spatial and the temporal characteristics of the system have both to be taken into account whether the instrument is for conventional engineering use or for atomic use, furthermore the optimum use will depend on the characteristics of the surface itself. As has been explained above all factors should be considered. There is always a fine balance to be achieved between fidelity of measurement and speed. If the speed is too high fidelity can be lost, for example owing to surface damage in topography measurement. If it is too low then the environmental effects will destroy the fidelity. At the atomic level distinctions between what is mechanical and what is electrical or electronic become somewhat blurred but, nevertheless, in the microscopic elements making up the measuring instrument the distinction is still clear. In what follows some alternatives will be considered. Central to this analysis will be the basic force equation representing the measuring system. It consists essentially of a probe, a beam, or cantilever, a transducer and means for moving the probe relative to the specimen. 4.2.3.8 Forms of the Pick-Up Equation Note—It is difficult to reconcile the spatial character of the surface with the temporal character of the measuring device. In what follows alternative possibilities will be considered. 4.2.3.8.1 Temporal Form of Differential Equation The differential equation representing the force equation is linear and of second order. It is given by Equations 4.64 and 4.65:
M z + Tz + kz + F = R(t ),
(4.64)
z (t )( D 2 M + TD + k ) + F = R(t ),
(4.65)
or
where M is the effective mass of the probe relative to the pivot, T is the damping term, k is the rate of the equivalent spring,
273
Measurement Techniques
F is the static force, and D is the differential operator on z, the vertical movement of the probe. This equation is present in one form or another in all surface-measuring instruments. How it is interpreted depends on whether force or topography is being measured. Equation 4.64 can also be seen from different points of view. In force measurement R is the input and z the output, but for surface damage considerations z is the input and R, the reaction at the surface, the output. It is often convenient to decompose Equation 4.64 into a number of possibilities:
MZ + TZ
(4.66)
MZ + TZ + kZ
(4.67)
MZ + TZ + kZ + F .
(4.68)
Equation 4.66 represents the purely dynamic components involving time, whereas Equation 4.67 represents the spatial components that are those involving displacement z, and Equation 4.68 the total force. Consider next the way in which the force equation can be presented. In the form shown in Equation 4.64 it is in its temporal form and as such is very convenient for representing forces and reactions and, by implication, damage. 4.2.3.8.2 Frequency Form of the Differential Equation Considerations of speed also involve time, so this temporal form of the equation would also be suitable. However, an alternative form, which allows both speed and fidelity to be examined, is obtained if the force equation is transformed into the frequency domain. Thus Equation 4.64 becomes FR (ω) the Fourier transform of R(t), where Fz(ω) is that of z(t). So FRz (ω ) = Fz (ω ) Mω 2n
1/ 2
2 2 2 (4.69) ω ω 1 − + 4ζ 2 exp(− jϕt ) ωn ωn
where
ϕ = tan −1
−2ζω / ω n 1 − (ω / ω n )2
(4.70)
4.2.3.8.3 Statistical Form of the Differential Equation Equations 4.64 and 4.69 represent the temporal and frequency equivalents of the force equation. More realistically, to cater for the considerable noise present in atomic measurement the stochastic equivalents should be used, namely −2ζω /ω n ϕ = tan −1 1 − (ω /ω n )2
2
(4.71)
1 d2 AR (β) = M 2 ω 4n Az (β) − A (β)(4ζ 2 − 2)) ω n dβ 2 z 4 d4 1 + A (β) ω n dβ 4 z
(4.72)
where PR (ω) is the PSD of R(t) and AR (β) the autocorrelation function. Immediately it can be seen, from Equations 4.71 and 4.72, that the properties of the output PR(ω) are related intimately with the system and the input signal parameters M, ωn, ζ, and Pz(ω), respectively. In its simplest form for a random input of, say, displacement or topography to the system, consider the case when β = 0. In this case all values of ω are taken into account, not one at a time as in Equation 4.70. Thus from random process theory 1 ∫ ω 2 P(ω) AR (0) = Az (0) M 2ω 4n 1 − 2 (4ζ 2 − 2) ω n ∫ P(ω )
+
1 ∫ ω 4 P(ω ) . ω 4n ∫ ω 2 P(ω )
(4.73)
or
ω4 v 2 ω2 AR (0) = Az (0) M 2ω 4n 1 + 2 (4ζ 2 − 2) + 4 × 2 , (4.74) ωn ωn ω
– is the RMS angular frequency of 2πζ , where λ is where ω q q the average distance between positive zero crossings, and ν is the RMS peak frequency of 2π/λq where λq is the average distance between peaks. Equation 4.74 shows how the variance of R relates to the variance of the cantilever deflection taking into account the system parameters ζ and ωn and the statistics of the input given by νω. This formula enables all types of input shape to be taken into account. For vω = 0 the signal is white noise, and for νω = 1 it is a sine wave. When topography is being considered, Equation 4.73 can be written as
and ω2n = k/M, 2ζωn = T/M.
and
σ 2R = M 2ω 4n [σ 2z + σ 2s /ω 2n (4ζ 2 − 2) + σ c2 /ω 4n ]
(4.75)
Here σz,σs, and σc are the standard deviations of the surface heights, slopes, and curvatures, respectively. For investigations of fidelity involving wide bands of frequency and many varieties of profile form, Equations 4.70 through 4.75 are to be used. 4.2.3.8.4 Spatial Form of the Differential Equation It could be argued that if fidelity is to be considered, properly then it should be arranged that all issues are referred to coordinates of the waveform and not of the system as in the above.
274
Handbook of Surface and Nanometrology, Second Edition
Hence, for fidelity reasons, it may be beneficial to express all factors in terms of height z and distance x along the surface. Hence if the velocity of scan is V(x) and not constant the basic equation can be rewritten as R(x), where
it is easy to show that the phase shift between a random topography and the reaction from the tip of the scanning stylus is about 40°. This indicates that spatial fidelity between tip reaction and surface topography is nominally unity and, because of the phase shift, shear damage would be likely to be small. So far only fidelity due to damage has been considered. Also important is the fidelity between the transfer of the topographic signal zi from the tip to the transducer output zo. This can be obtained from H(ω). Thus the transfer function is given by TF where
MV 2 ( x )
d 2z(x ) dz ( x ) dV ( x ) + V (x) M + T + kz ( x ) x2 dx dx
= R( x ) − F
(4.76)
which, with certain constraints on V to be identified later, can be simplified to
d 2z dz + 2ζ( x )ω n ( x ) + ω 2n ( x )z = [ R( x ) − F ] / MV 2 (4.77) 2 x dx
TF =
1 1 = . H (ω /ω n )1/ 2 [1 + (ω /ω n )2 (4ζ 2 − 2) + (ω /ω n )4 ]1/ 2 (4.81)
where 1 dV ω ζ( x ) = ζ + and ω n ( x ) = n . 2ω n dx V
(4.78)
Expressions 4.76 through 4.78 will be used in the case for point-to-point fidelity. In Equation 4.76 R is provided by the static force F. Of the rest the force exerted by the restoring force due to the spring constant k is the next most important. Note that in open-loop topography measurement the cantilever and spring are not essential elements; they are included to keep the stylus on the surface when tracking, unlike in force measurement where they are essential.
Conventional criteria for operating this transfer function require that ω lc
(4.197)
where lc is the coherence length of approximately 100 µm. The probe geometry for surface finish and profile are shown in Figure 4.153. Measurement is made by using the reference beam effectively as a stylus instrument uses a skid. The central spot varies as the coherence modulation with roughness height. An important feature of coherence discrimination is that it allows the remote sensor to be connected to a transceiver by a common path optical link in which the interfering beams are separated only in the time domain. Such a link is inherently
Lens
Reference Links Lens
Test surface
Measurement
FIGURE 4.153â•… Surface finish variant.
immune to random noise. The sensor need not have the Michelson configuration. The interferometer output can be expressed in spectral or temporal domains:
1. Spectral: I t = T ( x , λ ) I (λ ) =
I i (σ ) [1 + cos (2 πσδ)] 2
(4.198)
where σ = 1/λ It is transmitted intensity, Ii is incident intensity, δ is path length (a function of x), I(λ) is the
345
Measurement Techniques
source spectrum and T(x, λ) is the spectral transmission. The spacing of adjacent spectral peaks in the output is given by Δσ where Δσ = 1/δ(x). The number of cycles of spectral modulation will depend on Δσ and, hence δ(x), and also the extent of the input spectrum. 2. Temporal: the output I(t) is given by
I (t ) = 2 I [1 + γ cos ϕ (t )]
(4.199)
where φ(t) is the time-varying phase and I is the intensity of the interfering beams. In this form it can be seen how the coherence function γ modulates the output. From this the phase (and x) can be determined absolutely. Such a development as the wideband interferometer is very important for the shapes and profiles of objects such as cams and gears, as well as texture. Perhaps, if the spectral shape could be fixed this technique, there is great potential here. Such a development as the wideband interferometer is very important for the shapes and profiles of objects such as cams and gears, as well as texture. Perhaps, if the Â�spectral shape could be matched to this technique, there is great potential here. One way of building an absolute interferometer resembles the way of using echelons for calibrating length standards. Basically sets of coherent envelopes have to be linked together in order to extend the range of absolute measurement. This has been done, as mentioned above. A simple way is to use a CCD camera in such a way that each pixel acts as if it were a contact probe. Then the trick is to record the maximum peak contrast as a function of the scan; this is a way of linking coherence bundles from each pixel to the frame of the image which can then be linked together or so-called “stitched” together. Frame boundaries have to be constrained to make the connection possible. “Stitching” is a technique often used to enlarge the range of measurement. One simple example is in the measurement of aspherics [155]. In this the geometry of the aspheric is divided into several diametral zones. Each zone is then specified as a surface of revolution usually in terms of ellipses and polynomials. The basic idea behind stitching interferometry is to divide the wavefront of the measured surface into several segments so that the fringe density over a sub-interferogram from each measurement remains resolvable. In other words to shift the
object with respect to the reference waveform yet retaining positional integrity. Moving the object through the reference waveform “sweeps” the fringe pattern in turn throughout the object making sure that at each position the fringes are resolvable. This sequential focusing provides a set of sub-interferograms which are combined after corrections for aberrations. Range extension can be along the axis or in the lateral direction. In the aspheric case (Figure 4.154) the synthesis is an example of normal axis stitching. The peak fringe contrast is recorded as a function of scan position. This mode of operation is called “scanning white light interferometer” (SWLI). The length unit becomes the increment between frames compared with the wavelength of light for phase interferometry. The frame scan increment can be associated with the coherence of the source as shown in Figure 4.155. There are two ways of utilizing the white light (broad band) fringes. The one method is to consider the coherence envelope (Figure 4.156a). In the former case it is considered be possible to get 3€nm positional accuracy by addressing the shape of the envelope. On the other hand use of the fringes within the envelope (Figure 4.156b) by measuring phase can be much more sensitive allowing 0.1€ nm to be resolved. In some cases it is possible to use both coherence and phase. The coherence is used for “coarse” positioning and the phase for high precision “fine” position. There is no ambiguity if the two are used together because of the absolute identification of the individual fringes. See Figure 4.152 for an example of tracking interferometry using broadband (white light) interferometry. White light interferometers have a very important feature. This is the presence within the instrument of an absolute Region 1
Stitch direction
Region 2
FIGURE 4.154â•… Stitching in the normal mode. Scan increment Stitch direction
Cohernce length of source
FIGURE 4.155â•… Lateral association of frame scan.
346
Handbook of Surface and Nanometrology, Second Edition
(a) Envelope of fringes
Interference of two simple wavefronts
(b) Phase of fringes
Reference flat
Part
FIGURE 4.156â•… Envelope and phase detection. (a) OPD Monochrome (b) OPD
Heterodyne
Speckle images (rough surface)
FIGURE 4.158â•… Transition fringe to speckle. (c) OPD
White light
Datum point
FIGURE 4.157â•… Bandwidth options.
reference for position. Sometimes this is called the positional datum or zero of the instrument. It occurs when the OPD between the arms produces maximum fringe intensity. Measurement of distance or position all relate to this point. There can be a zero point in other interferometers but these are ambiguous. Figure 4.157 shows the options (not to€scale). In Figure 4.157a, there is no discrimination between the fringes, although they are easy to count. There is no “absolute” datum. This is the conventional interferometer. In Figure 4.157b more than one wavelength is mixed. For, say, two frequencies f1 and f 2 the relative fringe intensity takes the form cos2π(f1 – f 2)cos2π(f1 + f 2) where f1 – f 2 is the envelope and f1 + f 2 the carrier. The maximum intensity occurs periodically. This option has a wide range with a number of zeros. Option (a) has no zeros but unlimited range. Option (c) is the white light alternative having a definite unique zero but a small range that can only be extended by stitching. In shape and position monitoring using coherent light there is one factor which has to be taken into account. This is the magnitude of the surface texture. The presence of surface roughness can considerably degrade any fringe pattern [145]. Figure 4.158 shows the transition from fringes produced between fine surfaces and which are typical of Twyman Green, Michelson, and even sharper with Tolanski. As the surface becomes rough the fringes degenerate into speckle in which local height variations are big enough to destroy
coherence. This degradation of the fringes can be used to measure the roughness, as will be seen later. 4.4.5.7â•…White Light Interferometer—Thin Transparent Film Measurement The increasing use for high value, high volume technology products that involve microscopic thin film structures and thin films involving nanostructures are creating a need for more versatile instrumentation than the conventional ellipsometers. Until about 5 years ago most SWLI use was limited to opaque surfaces. Now the point has been reached where interference microscopy is replacing traditional ellipsometers and reflectometers for specific structure analysis [156,157]. Three separate areas in this field have created interest mainly under the leadership of de Groot at Zyg. These are 3D top surface topography, usually in the form of a profile, over unknown thin films having thicknesses of greater than€500€nm. Full 3D film thickness measurement and interface profiling using signal modeling. Detailed multiple angle, multiple wavelength ellipsometric film stack analysis including multilayer thickness, and index information using an interference microscope in the pupil lane rather than the image plane. In the first technique only the profile signal of the top surface of the film is required but the interferogram of the transparent film from the SWLI instrument gives a signal in which information from the surface is corrupted by effects underneath, which have to be removed or reduced, so the trick, using the same instrument (or perhaps instrument type) is to record the signal-called the TopSlice—from a similar, but opaque, sample which will not have the underlying effects present signal and then match this as nearly as possible, pixel by pixel, using best fit techniques to the leading edge of the transparent film signal by amplitude and phase adjustment (See Figure 4.159). This, it is supposed, locates where the top of the transparent film is most likely to be
347
Measurement Techniques
Image capture
Processor
Image capture and processor
Test film profile
Pupil plane Broadband source
Broadband source Field stop
Mireau interferometer
Test film
Reference surface film
FIGURE 4.159 “TopSlice method.” (After de Groot, P. and de Lega, X. C., Proc SPIE, 7064, 706401, 2008. With permission.)
thereby enabling the profile to be measured. Simulations and experimental tests suggest that this “TopSlice” method works well for films having an optical path thickness of greater than a quarter of the coherence length (obtained from the source specification). The next technique [325] is for estimating the film thickness rather than the surface profile and advantage is made of the fact that as the normal signal from a transparent film is distorted already by underlying signals use should be made of them! The argument is that knowing the instrument characteristics it should be possible to model the expected response of the instrument to a range of possible film parameters and to build a library of signals corresponding to a wide range of thicknesses, materials, etc. So, the measurement process becomes reduced, in effect, to a comparison between the measured signal and the library of simulated signals: getting the best fit enables the film thickness to be deduced. It has to be said that this indirect approach seems plausible, but chancy, when it is remembered that working instruments invariably, in time, breed calibration resistant defects and vagaries unknown to and unpredicted by model builders! In some ways this “library” idea is similar to that of a neural network except that it cannot “learn” but it has the advantage that what is successfully measured is known whereas in the neural scheme it would be buried within the net. For more complex multi-layer structures or films having unknown optical properties an ingenious modification to the basic SWLI has been made as shown in Figure 4.160. The modification is such as to ensure that the pupil plane of the objective lens is imaged onto the camera and the field stop constrains the illumination of the extended source into a small area (e.g. 10 microns diameter) at the focus point, together with the polarizer these additions effectively convert the SWLI instrument into “multiple angle multiple
Polarizer Mireau interferometer
Specimen
FIGURE 4.160 SWLI with field stop and polarizer. (After de Lega, X. C. and de Groot, P., Interferometer with multiple modes of operation for determining characteristics of an object space, US Patent 7616323, 2009. With permission.)
Image plane
Angle plane (Transform, pupil)
Object plane
FIGURE 4.161 General planes in optical systems.
wavelength ellipsometer” [158,159]: The multiple wavelength attribute is a result of computing a spatial spectral analysis of the signal. Multiple angles and polarization “states” follow from the pupil plane geometry while a Fourier analysis of the interferogram gives a wavelength breakdown. According to de Lega and de Groot the pupil plane layout gives much more flexibility than the conventional SWLI. Figure 4.161 showing pupil plane viewing spot. It makes sense to use such a system because it has to be necessary, for getting thickness and stacking information, to restrict the signal if possible at all times to depth information of the film, however garbled, rather than mixing it with the influence of areal or profile topography although,
348
Handbook of Surface and Nanometrology, Second Edition
presumably, local surface slopes would have to be restricted for completely effective operation. The relatively simple addition of the polarizer and stop to the basic white light interferometer effectively closes the gap between the microscope and the ellipsometer and makes it possible to investigate not only the topography but also the underlying characteristics of thin transparent films. The capability of the white light interferometer can be extended to some extent by analytical methods as reported by de Groot [160]. Semi-conductor features such as transistor gates, line widths, and etch depths are of the order of a few hundred nm down to tens of nm and can therefore be well below the Rayleigh criterion for resolution 0.61λ/NA so unresolved features of this kind cannot be measured directly as height objects in the usual way by interference microscopy. De Groot asserts that measurement of such depths and widths are possible once it is understood that features of this kind do affect 3D images and that once the nature of the interaction has been quantified then measuring an image can lead to an estimation of the size and geometry of the unresolved features. As an example he considers a typical component shown in Figure 4.162. Figure 4.162 Semi-conductor component which produces a step height measurement due to unresolved features, using a white light interferometer. The apparatus consisted of a white light LED of mean wavelength 570 nm as part of a white light interferometer with added polarizer: the linearly polarized light being an important factor because it increases sensitivity to specific geometric features for example etch depth. A computer records intensity interferograms for each pixel as a function of scan position so that height can be deduced. De Groot prefers to use the frequency domain analysis to implement the “Rigorous Coupled Wave Analysis” (RCWA) method [161]. This is based on the work of Moharam et al. [162] on binary gratings. Other aspects of the behavior of light in semi-conductor materials have also been reported [163]. In the experiment a number of samples having features similar in shape but varying in size to Figure 4.162 were measured with the white light interferometer and the apparent step height (shown as the continuous line) evaluated. Using the results of the RCWA analysis this is translated into the
“true” etch depth of the unresolved features shown in the figure despite the fact that they have not been resolved: the interferometer has not seen the etched lines at all but the complex light scatter from these lines produces an apparent depression in height which the interferometer duly measures, as is seen. In effect the lines have been measured indirectly! Obviously the nature of the unresolved features has to known in order to apply the relevant RCWA software. Figure 4.163 shows how the RCWA prediction of step height (ordinate axis)—which should be measured on the interferometer, actually relates to the unresolved etched line depth. The point to note is that although the slope is not 45° it is linear so that the direct unambiguous relationship can be used: measuring the step height can give the etch depth! The graph acts as a translator. The conditions can be matched to a particular feature to get best results usually by changing the mode and or the direction of polarization. For example polarization orthogonal to the lines (say 450 nm structure) is good for estimating their etch depth polarization parallel to a 190 nm pitch grating produces a sensitive measure of the linewidth. The conclusion is that unresolved geometry can be evaluated albeit by using the simple linear relationship shown in Figure 4.163, the weakness is that the method can only solve for one feature at a time e.g., linewidth, sidewall angle, etc. Appropriate strategies involving suitable test structures will have to be developed in order to cover all parameters. As in all matching procedures there has to be a profound generic knowledge of the measurand, there is little room for maneuver. Obtaining the topography by the traditional method for a white light interferometer h involves calculating the fringe contrast as a function of scan position and then relates the point of maximum contrast to a surface height for each pixel in the image [164]. There are a number of ways to evaluate the fringe contrast for example by measuring the ratio of maximum to
Unresolved features
Transparent films
Predicted measured step (nm)
Measured profile
200
150
Measured step height
Substrate etch depth Line width
290 Pitch
FIGURE 4.162 Semi-conductor sub-resolution measurement with white light interferometry. (After de Groot, P., de Lega, X. C., Leisener, J., and Darwin, M., O.S.A. Optics Express 3970, 16(6), 2008. With permission.)
310 330 350 Si etch depth (nm)
370
FIGURE 4.163 Rigorously coupled wave analysis (RCWA). (After Raymond, C. J., Scatterometry for semiconductor metrology, In Handbook of Silicon Semiconductor Metrology, Ed. A. J. Deibold, Marcel Dekker, New York, 2001.)
349
Measurement Techniques
minimum contrast. However there is an alternative which has some advantages namely the method using the Fourier transform of the fringe interferogram. This uses the premise that a broadband interference pattern can be regarded as the superposition of several simple frequency patterns each of which can be represented by the raised cosine form
I = 1 + cos ( ϕ ) where ϕ = Kz
(4.200)
K is the wave number and z is the phase velocity optical path difference. So, the distance z is found for each constituent wavelength by calculating the rate of change of phase with spatial frequency.
z=
dϕ . dK
(4.201)
Phase values are needed over a range of spatial frequencies dK which are readily available from the Fourier transform. De Groot points out that it is relatively easy to get a value of φ even if it is difficult to get a meaningful value of fringe contrast such as when the interferogram has been undersampled: much of the relevant information in the signal is preserved even when the shape is distorted. Remember that this argument is valid here simply because the signal i.e., the interferogram is basically of a periodic nature and the Fourier transform is inbeing to identify and quantify parameters of periodic signals such as phase. If the interferogram is undersampled the signal to noise suffers but here de Groot suggests that to some extent this is offset by the fact that there are less data giving the opportunity to take more scans and get a reasonable average. For a good explanation of the frequency domain analysis method see de Groot et al. paper Reference [165]. The use of more than one frequency for measurement interferometrically is not confined to profilometry as is seen below with a few examples. Mansfield [157,322] also has been very innovative in thin film metrology. He explains the reason for one of his techniques based on what he calls the HCF or helical complex field. It is well known that films of optical thickness in excess of the coherence length of the source can be measured by taking advantage of the fact that such films exhibit interference maxima corresponding to each interface. It is relatively trivial to locate these maxima and therefore determine the film thickness assuming that the refractive index is known. For thin films with films having thicknesses between about 20 nm and 2 μm, the SWLI interaction leads to the formation of a single interference maxima which is an amalgamation of the two. Here Mansfield introduces the new function, which he describes in terms of optical admittances, to unravel the pattern to get the thickness. This function in effect provides a signature of the layer or multi-layer so that by using an optimization routine the thin film thickness is obtained. The considerable advantage of the method is that it is possible to get “local” values of the thickness i.e., of the order of
2.5 × 2.5 μm which is a requirement of the industry in addition to the global thickness of the film which can be obtained by conventional ellipsometry or photo spectrometry. The HCF function is obtained by performing SWLI measurements on the thin film coated substrate, whether single or multi-layer, together with a reference substrate of which he mentions the glass BK7 as an example. His description of the HCF function is as follows. It is a complex function that exists over the Fourier bandwidth of the SWLI instrument, typically 480–720nm. It is the product of the conjugate of the electric field reflectance from the surface and an exponential term with a phase that is proportional to wavenumber. The exponential arises from the height difference between the measured, coated, and the reference substrate. The HCF function therefore corresponds to a helix (generated by the exponential term) modified in terms of phase and amplitude by the conjugate of the electric field reflectance. This modulated or distorted helix then enables the thin film structure to be extracted by using a modified thin film optimization program. The HCF can be written as HCFFIT ( ν ) = r ( ν ) exp ( jϕ ( ν )).exp ( j 4 πν∆z HCF cos θ ) (4.202) where r(v) exp (jφ(v)) is the net electric field relectance over the numerical aperture and Δz is the difference between the height of the thin film and the reference height. The cosine term depends on the NA. Recent improvements give a lateral resolution of 1.25 μm and a nrms noise of about 0.08 nm. 4.4.5.8 Absolute Distance Methods with Multi-Wavelength Interferometry De Groot describes four less well-known techniques involving multiple wavelengths although not strictly surface metrology such techniques give an idea of the breadth of possibilities using optics for measuring length and by extension surfaces [166]. 4.4.5.8.1 Multi-mode Laser Diode Interferometry This method which is usually used for gauge block measurement uses only two wavelengths giving a “synthetic” wavelength
Λ=
1 . σ1 − σ 2
Where σ is the reciprocal wavelength. Similarly the synthetic phase is the difference between the interference phases for the two wavelengths. The reason for working the synthetic phase is that it evolves very slowly with distance more slowly than the interference phases because the effective wavelength is so long making it easier to sort out ambiguities involving 2π. and
Φ=
4πL Λ
(4.203)
350
Handbook of Surface and Nanometrology, Second Edition
It is possible to get longer synthetic wavelengths by compounding two or more multi-mode laser diodes as shown in Figure 4.164 [167]. Figure 4.164 illustrates how two mutimode laser diodes can be used to produce three wavelengths based on a fiber coupled Michelson interferometer. Three wavelengths from the wavelength possibilities offered by the dual laser system are selected and separated by means of the diffraction grating. An array of detectors measures the interference pattern. Electronic processing provides the phase shift algorithms to deliver the synthetic phase. With this system a resolution of about 0.5 nm can be achieved over a range of 360 µm.
of the light scattered from the object and the natural reflection from the end of the single mode fiber. The signal has a value related to the average wavelength separation of the lasers and a modulation frequency given by the chirp rate F. Combining the frequency and phase information in the time dependent fringe contrast determines the absolute distance. The frequency information is used to get a range estimate 2πF/Λ which is then used to remove the 2π ambiguities in the synthetic phase Φ. Not shown is a calibrator comprising a fiber coupled Michelson interferometer with three parallel operating paths. The three paths provide enough information for to enable unambiguous determination of the length to be made. The reader is reminded that the information given above is only meant to give an idea of the methods. For more detail consult de Groot’s papers. Whilst on the subject of fine surface measurement, one technique which has only been hinted at so far is a kind of microscopy and a kind of interferometry [169, 170], and this is the method called fringes of equal chromatic order (FECO). The basic interference is illustrated in Figure 4.166 and relies upon multiple beam interferometry [171]. The plates are coated with reflecting films and mounted parallel and close together. Instead of the monochromatic light normally used, white light is used.
I = a0 + a1 cos (2 πf1t + ϕ1 ) + a2 (2 πf2t + ϕ 2 ) Pac = a12 + a22 + 2a1a2 cosΦ
(4.204) (4.205)
This is a measure of the fringe contrast with a two wavelength source. If f1 is made equal to f 2 then the synthetic phase can be obtained directly and if the synthetic phase is changed it is possible to get ϕ and hence the distance L from Equation 4.203. So, this is a direct measurement of length or it could be of surface height, hence a point of the topography by a comparatively simple method. It needs the two frequencies to enable the absolute measurement to be made. If the synthetic wavelength Λ is chirped i.e., it changes with time the resulting signal can be used to get length as shown In Figure 4.165. The two tunable laser diodes generate a continuously varying value of Λ. The interference results from the mixing
LD1 Modulated drive current
Isolator
Fiber coupler
Diffraction grating
Fiber
PZT
Electronic processing
Detector Object
1.2
0
0
0.5 Time (sec)
Chirped synthetic wavelength interferometer Reference
Object
Detectors
Grin lens
LD2
Single-mode fiber
Multimode laser diodes
Optical isolator
Contrast
4.4.5.8.2 Chirped Synthetic Wavelength In an alternative approach called the super-heterodyne method in this example two different wavelengths beat against each other on the detector giving [168].
Phaseshifting PZT
FIGURE 4.165 Interferometer using chirped wavelength -also shows the fringe contrast. (After de Groot, P. and McGarvey, J., US patent 5493394, 1996.) R2o1
XY Stage for profiling R1p1
Wavelength-sensitive detection Three-wavelength profilometer
FIGURE 4.164 Three wavelength interferometer compounding two multimode laser diodes. (After de Groot, P., Appl. Opt., 30(25), 3612–6, 1991. With permission.)
t
FIGURE 4.166 Basic multiple beam interference.
R'1p'1
351
Measurement Techniques
Slit of prism spectrograph
White light
Substrate surfaces
FIGURE 4.167â•… Simple schematic diagram of FECO fringes.
FIGURE 4.168â•… Moiré fringes created by two successively positioned periodic structures. (After Weinheim, V. C. H., Wagner, E., Dandliker, R., and Spenner, K., Sensors, vol 6, Springer, Weinheim, 1992.) Cens of dark moiré fringes
The basic interference equation for the fringes is
(m + 1) λ = 2t + (ρ1 + ρ1′ )λ/2 π
m
(4.206)
where m is an integer and λ is the wavelength at the minimum of the reflected intensity distribution. Thus the lowest order which is obscurable is for m = 1 (the dispersion of wave change can be neglected) [172,173]. Results using this method indicate that measuring Rt involves determining the peak-to-peak roughness from the extreme width of one fringe and the order number of the fringe [174]. This may be only a rough approximation and, considering the surface as a zσ spread, it could be better. Surfaces having roughness of about ±1 nm and slopes of ±2 × 10 –4rad have been evaluated but the technique does require interpretation and automation. Whether this is now necessary remains to be seen. The typical apparatus is shown in Figure 4.167.
4.4.6â•…Moiré Methods 4.4.6.1â•…General Any system which has the superposition of two periodic structures or intensity distributions can be called a moiré system [175] (Figure 4.168). The name moiré probably originates from the French textile industry and has the meaning of wavy or watered appearance. Moiré fringes have been used in the silk industry since the middle Ages for quality control. Moiré fringes can be used as an alternative to interferometry and holography and are becoming increasingly used for measuring form, but not as yet surface texture to any large extent because of the difficulty of making gratings whose width is much less than 10 µm. Figure 4.169 shows the principle. If there is a rotational mismatch between two identical gratings of, say, δ it produces a new system of equidistant fringes which are easily observed. These are called moiré fringes. The pitch of the fringes is Pm where
pm
2
n
2
1
1
0
0
–1
–1
N = –1
N=0
N=1
δ
p
Cens of bright moiré fringes
FIGURE 4.169â•… Moiré fringes created by rotational mismatch. N Moiré index, m, n number of grating lines, p pitch, σ intersection angle. (After Weinheim, V. C. H., Wagner, E., Dandliker, R., and Spenner, K., Sensors, vol 6, Springer, Weinheim, 1992.)
Pm = ( P/2) sin (δ/2)
(4.207)
where P is the spacing of the original gratings (Figure€4.183). This equation is similar to that obtained describing the pattern of two interfering plane waves. For this reason the moiré effect is often called “mechanical interference.” There is a difference, however, because in interference the interference term only consists of the difference of the optical path length. The moiré superposition, like acoustic beats, contains two terms, the sum and the difference frequencies. The sum term in the moiré fringes has nearly half the pitch P so that when using dense gratings (~10 line pairs per millimeter) it is not possible for the eye to observe these fringes. Usually, however, the additive moiré fringes are rarely used and it is the difference fringes which are used. The number of fringes produced are related to the number of lines on both of the other gratings. If they are m and n, respectively, then the number of moiré fringes N are given by
N = m − n.
(4.208)
352
Handbook of Surface and Nanometrology, Second Edition
Along each moiré fringe the index is constant. Moiré fringes can also be generated by a pitch mismatch, not an angular one. The resulting moiré fringe is then given by
This equation gives the displacement only in the y direction. To get it in the x direction both gratings have to be rotated in the x direction.
PP Pm = 2 1 . P2 − P1
4.4.6.3â•…Moiré Contouring Out-of-plane methods are triangulation techniques; they are used to measure the form of plane or low converse shapes of mostly diffuse reflecting surfaces.
(4.209)
These pitch mismatch fringes are sometimes called Vernier fringes. Sometimes, even though an optical system cannot resolve the pitch of either of the two generating gratings, it will easily see the moiré fringes. Gratings utilized in moiré applications usually have a density of between 1 and 100 line pairs (1p) per millimeter [176]. In application the fringes are projected onto the surface under test. Any distortion of the shape or form of the surface shows itself as a deviation from the ideal shape of fringe which is projected. The moiré fringe projection is used mainly in three surface applications:
1. Strain analysis for determining in-plane deformations and strains of a surface. 2. The shadow and projection moiré methods for the determination of the contour of an object or for the comparison of a displaced surface with respect to its original state. 3. The reflection moiré and the moiré deflectometry methods for the measurement of the slope (outof-plane) distortion of a surface with respect to its initial state or the measurement of waveform distortion produced by specularly reflecting objects or by phase objects.
4.4.6.2â•…Strain Measurement In strain measurement the grating is usually placed in contact with the surface. The orientation of the grating has to be in the direction of the assumed deformation [177]. Prior to deformation the grating with the pitch P is given by
y = mP m = 0, ± 1, ± 2 .
(4.210)
After deformation the shifted surface distorted the grating€to
y + s [u ( x , y), v ( x , y)] = nP n = 0, ± 1, ± 2 .
(4.211)
In this equation the displacement s[u(x, y)] of each point x, y with the two independent components u(x, y), v(x, y) can be observed with the moiré effect. Positioning an undistorted grating with the same pitch near the distorted grating gives an optical magnification due to the moiré effect. The subtractive moiré fringes give the fringes of displacement: s [u ( x , y), v ( x , y)] = (m − n) P = NP N = 0, ± 1, ± 2 . (4.212)
4.4.6.4â•…Shadow Moiré In this the master grating is positioned over the surface (Figure 4.170). It is then illuminated with light at angle α. The shadow region is thus viewed at angle β through the same grating. Obviously there are m lines in the incident beam and n ≠ m in the emergent beam. Hence the moiré fringe number N = m–n. The difference in height between the surface and the master grating is CE:
CE =
( N + 12 )P NP = (tan α + tan β) (tan α + tan β) bright fringe
(4.213)
dark fringe
and the height sensitivity δ(CE) = CE/N. In this case there is a linear relationship between the height difference and the moiré order N. This type of configuration is mostly used for small parts. Limits to this method are the resolving power of the optics, which can cause a fading of the fringe contrast. This can be controlled by making sure that a point on the surface is not greater than five times the grating pitch from the grating. Another factor is the “gap effect,” which is the distance between the object and gives a lateral displacement of xCE2 P where x is the point on the surface. This effect is reduced by using telecentric projection. Diffraction can also be a problem on high-density gratings. Again, the surface and grating should not be separated much. 4.4.6.5â•…Projection Moiré Unlike shadow moiré the master grating is remote from the surface. Here there are two gratings: the projection grating and the reference grating. The projection grating can be of
n lines
m lines
To detector 0
C
Grid
O''
α β Shadow of grid
Surface
E
FIGURE 4.170â•… Shadow moiré. (After Hofler, H. and Seib, M., Sensors, vol 6, Springer, Weinheim, p. 574, 1992.)
353
Measurement Techniques
Projector
Projection grating
Test surface used as mirror
Camera
θ
Grating
Reference grating Distorted grating image
Semi-silvered mirror
FIGURE 4.171â•… Projection moiré. (After Weinheim, V. C. H., Wagner, E., Dandliker, R., and Spenner, K., Sensors, vol 6, Springer, Weinheim, 1992.)
any sort, such as one etched onto glass and projected with white light, or it can be a Michelson fringe pattern. The pattern made, however, is thrown onto the surface. The pitch P0 on the object is P0 = mP, where m is any magnification introduced by the projection (Figure 4.171). In this method the fringe is formed by the viewing of the deformed object fringe through the reference grating in front of the camera. If the distorted grating on the surface has ns line pairs per millimeter the height sensitivity is 1/(ns tan θ). This is strictly non-linear. The usual best resolution of this technique is 100 line pairs per millimeter. A variant of this is not to use a master reference grating before the camera but to use the columns of the CCD array in the camera to provide the reference. This is cheaper but the observer cannot control the moiré fringe density N. This is decided by the chip-maker of the CCD. In the methods outlined diffuse surfaces are needed. In the case of a mirror-like surface, a different strategy is used. The surface is treated as a mirror despite its deformations (Figure 4.172). The specular reflection combined with the deformation, often of the grating on the surface, is viewed via a semisilvered mirror by means of a camera. This method can be used in a similar way to double-pulse holography in which the “before and after” scenes of the grating on the surface are recorded on the same film. 4.4.6.6â•…Summary Moiré techniques have been used up to now for estimating rather crude deformations on surfaces. Typical gratings have had spacings of 25 µm. They have been regarded as a coarse extension of the other fringe methods that is interferometry and holography. This role is changing with the advent of much better gratings, better resolving optics and pattern recognition of digital fringe interpolation methods.
4.4.7â•…Holographic Techniques 4.4.7.1â•…Introduction The vital concept of producing a phase record of light which has been diffracted by some object can be attributed to Gabor [178]. The thinking behind the technique which is now called holography was developed over many years by Gabor, in connection with the resolution limits of electron microscopes [179].
Diffuser Image of grating
Camera
FIGURE 4.172â•… Moiré using surface as mirror. (After Weinheim, V. C. H., Wagner, E., Dandliker, R., and Spenner, K., Sensors, vol 6, Springer, Weinheim, 1992.)
Uniform fringes
Modulation fringes
FIGURE 4.173â•… Types of fringe produced by interference.
It is convenient to regard holography rather as one would a single-plane interferometer. That is to say that by using a recording medium such as a photographic plate, it is possible to record a spatial slice of reflected light from the object in such a way that an amplitude-modulated record is produced which exactly represents the phase structure emanating from the object (see Figure 4.173). The amplitude-modulated data recorded on this slice of material can subsequently be used to phase-modulate a plane wavefront to recreate the original complex mapping of intensity information in the object space. Thus the hologram may be thought of as a grating or set of gratings each having some particular spacing and orientation, and it is this which has the ability to reconstitute the wavefront so that the object appears to occupy its original position in space.
354
Handbook of Surface and Nanometrology, Second Edition
Apart from the spectacular three-dimensional characteristics of the reconstructed object field, it is important to recognize that, for the first time since the inception of photography, a data recording technique which completely exploits the information-holding capabilities of the photographic medium is possible. Indeed, the technique goes far beyond the modulation capabilities of silver halide emulsions. In digital computing terms it would be possible to get more than 1013 bits of information onto a perfect amplitude-recording medium which is infinitely thin and a mere 10x12 cm2 in area. Even greater information storage is possible if a perfect device with some finite thickness is considered. In realistic terms it is now possible to record about 1010 bits on present-day emulsions, which goes a long way toward solving the problems of recording and comparing area texture on machined parts. In many instances it may be found that the full fidelity of the holographic technique is not needed, and in such cases it is possible to reduce the information-recording capacity of the system to suit. The detail that a hologram has to record is a function of the maximum angle θ between the reference beam and the signal beam. If δ is the average spacing of the fringe pattern and λ is the wavelength then
an object. The intensity of light hitting the detector or film at, say, P is simply
δ = λ/θ .
(4.214)
For example, if θ = 30° then the detector has to be capable of resolving about 1000 lines per millimeter. The introduction of the laser was the key to the development of holography [180]. A typical arrangement for the recording and reconstruction of an object is shown in Figure 4.174. How the phase is retained is simply shown, as in Figure 4.175. Consider the ordinary scattering of light from
I P = A12 .
In this signal there is nothing to tell P from which direction the ray r1 is coming. Now consider the case when, in addition to the illuminating beam reflection t1, there is superimposed another beam r 2 which is from the same source but which is directed onto the film and not the object. The intensity at P is now given by
I p = A12 + A22 − 2 A1 A2 cos θ .
Laser
Illumination
(a) P
Object
Photographic plate
Object A2r2
Hologram
Recording hologram
(4.216)
Now because the source is coherent the signal received at P contains some information about the direction of the ray r1 because the direction of the ray r 2 is known. It is this term which produces the 3D effect. Notice, however, that it only does so in the presence of the reference beam r 2. To see the object the configuration is as shown in Figure 4.175c. So it is seen that holography is no more than two-stage photography. At the intermediate stage after the photograph has been taken in the presence of the reflected and reference beams (the hologram) no direct information is visible; the holographic plate looks a mess and no image is apparent. However, once the hologram has been “demodulated” by the presence of the reference beam then the image is visible and as shown in Figure 4.174. Two possibilities exist: the virtual image which cannot be projected and the real image which can. The most popular view of holography is derived from the work done on the recording and reconstruction of the spatial
r1A1 Light scattered object
(4.215)
A1r1
Reference Fringes of hologram
(b) P θ
ct
Obje
Hologram
Hologram (c) Real object
Virtual image
FIGURE 4.174 Reconstruction of image.
P
Image
FIGURE 4.175 Difference between hologram and photograph.
355
Measurement Techniques
elements of an object. A great deal of work has gone into the quantification of depth information over comparatively large areas [181–186]. Probably the first attempt to measure roughness (as opposed to form) was due to Ribbens [187,188] using the system shown in Figure 4.176. In the figure u1 is the reference beam and u2 is the light from the surface. If the texture is small there is a high correlation between u1 and u2 and an interference pattern is formed. The point to make here is that the contrast ratio of the fringes is determined by the ratio of the two light intensities and the coherence between the light amplitude components u1 and u2. The latter is influenced by the spatial and temporal coherence of the incident light beam and by the roughness of the test surface. For a flat surface and a highly correlated source the coherence is determined very largely by the roughness. The roughness value in principle is found by measuring the fringe contrast ratio in the image plane. Ribbens showed that under certain conditions the contrast ratio Rc is given by ρ + exp (− k 2 Rq2 /2) Rc = ρ − exp (− k 2 Rq2 /2)
1/ 2
(4.217)
where ρ is the ratio of light amplitude of u1 to u2 as determined by the reference beam and the hologram diffraction efficiency, and k is determined by the reference beam intensity, Rq is the RMS surface finish. He subsequently extended the method to the use of two wavelengths which theoretically extends the range of measurement [188]. Equation 4.217 then becomes (1 + ρ)2 + (2πRq /λ eff ) 2 Rc = (1 − ρ)2 + (2πRq /λ eff )2
(4.218)
where λ eff =
λ1λ 2 . λ1 − λ 2
(4.219)
Attractive as these methods may seem, they require many assumptions. The latter technique is a way of extending the
Object
Laser
Hologram
u1
Mirror
Semi-reflecting mirror
u2
Mirror
FIGURE 4.176 Roughness measurement. (After Ribbens, W. B., Appl. Opt., 11, 4, 1972.)
range of measurement in theory without the need to use a laser having a very large λ and introducing all sorts of complications of detector sensitivity, lens transmission problems and so on, and yet still produce an effective wavelength sufficiently large to allow a small phase modulation approximation of the surface to the wavefront. Problems with these methods include the facts that a hologram has to be made, that the method is best for plane surfaces and, more importantly, that the system has to have a relatively large bandwidth. As in all small-signal phase modulation, the information lies in the sidebands (diffraction orders); it is necessary that the resolution of the hologram is sufficient to record higher diffraction orders from the surface. Ribbens suggests that as the method is meant for rough surfaces, the effective system bandwidth is relatively low, of the order of ~200 lines per millimeter—which means that most holographic plates can record enough diffraction orders to preserve the statistics of the surface at the image plane. He points out that the system will also work with exposures of the plate for liquids of two different refractive indices rather than two wavelengths: both are entirely rigorous methods of changing the ratio of the wavelength of light to the surface roughness asperities. 4.4.7.2 Computer Generated Holograms Holography plays an important role in the measurement of complex parts such as free form and aspherics. For example a hologram can be used as a reference wavefront and thereby enable a null test to be carried out between the test part and the ideal wavefront nerated by the hologram—assuming that this has been generated correctly in the computer. A computer generated hologram usually consists of a planar substrate covered with a diffractive microstructure. This has first to be calculated and then produced by lithography the phase of the reference wavefront is determined by the structure of the hologram: the accuracy is limited by the lithographic process. One example by sawyer is shown in Figure 4.177. Sawyer describes the use of a computer generated hologram to create a broad reference beam and a focused measurement beam both incident on the surface. The reflected beams return through the hologram and create an interferogram on a CCD. In the Figure 4.177 HDE is the holographic dispersive element, ZORB is the zero order beam and FORB is the first order beam. S is the surface, and OL is the objective lens. The authors use this as a profiler with a high-numerical aperture and managed to get noise levels down to sub-nanometer. An application of holographic form testing particularly suited to cylindrical parts with rough surfaces is grazing incidence interferometry mentioned earlier. Typical parts examined include fuel injectors, roller bearings, and cylindrical lenses. One configuration is shown in Figure 4.178 [190]. This operates in the following way. The collimated beam of a laser is split by a circularly symmetrical diffractive axicon. The un-diffracted beam is the reference beam of the interferometer and the first order diffracted beam is directed onto the workpiece at an angle of about 5°. After reflection
356
Handbook of Surface and Nanometrology, Second Edition
from the surface a second axicon recombines the beam s and interference results. In this system the beam is passed through a rotating glass disc having a frosted surface roughness before hitting the CCD camera. This set up can measure parts with diameters between 3 mm and 25 mm. The accuracy with this device is 50 nm. Workpieces with roughnesses of up to 0.8 μ Ra can be measured without violating diffraction limits due to the fact that the effective wavelength is extended by the obliquity factor of the first order ray hitting the workpiece. Burova and Burov [191] have built what they refer to as a “pseudo-holographic” system for measuring rough surfaces. In their system, the surface is decomposed into its Fourier components by means of what appears to be a diffractive
element. They use a photographic array to measure the amplitudes of the diffracted orders with both polarizations and reconstruct the surface to get the roughness. They get reasonable agreement with ground surfaces having Rq of the order of 0.7 μm as measured with a stylus instrument. Other methods are sometimes called holography but the derivation is not always clear, one such is called “Conoscopic holography” [192].
Interferogram HDE
4.4.7.3â•…Conoscopic Holography In this method a point of light is focused on the rough surface at P. PR polarizes the light through an angle of 45°. The two orthogonal components traverse the bi-refringent crystal BM at different angles and experience different phase shifts. The analyzer AR shifts the rays back by 45° allowing interference to take place (Figure 4.179). The differential phase ∆ϕ i =
OL
S
r λLk i d
2
(4.220)
BM
FORB
ZORB
2π
Σ
P
S
AR
PR
A holographic system
FIGURE 4.177â•… Holographic system. (After Sawyer, N. B. E., See, C. W., Clark, M., Somekh, M. G., and Goh, J. Y. L., Appl. Opt., 37(28), 6716–20, 1998. With permission.)
FIGURE 4.179â•… Conoscopic surface measurement. (After Lonardo, P. M., Lucca, D. A., and De Chiffre, L., CIRP Annals, 51/2, 701–23, 2002. With permission.) Aperture stop
Tilted mirror
Tilted mirror Rotating frosted glass panel
Achromatic lens Beam combining axicon Workpiece Beam splitting axicon Aperture stop Fiber coupler
Measuring beam
Zoom lens
Reference beam Sample stage (Fixture) Collimator
CCD-camera HeNe-laser
Glass fiber
FIGURE 4.178â•… Grazing interference holographic system. (After Zeiss, Tropel, Product information No 60-20-801-d, 1998.)
357
Measurement Techniques
“d” is the distance from P to the screen, λ is the wavelength ri is the radius of the ith fringe, and k is a constant dependent on the principal refractive indices of the bi-refringent material. The pattern is very sensitive to the distance d. Examining the pattern as the point P scans can give a profile. A lateral resolution of 5 μm has been claimed. This method has been found useful in measuring surfaces produced by selective laser sintering: a technique used often in rapid prototyping. In all the methods involving the reflection of light from surfaces some idea of the theory of light scattering is essential. This will be considered later. 4.4.7.4â•…Holographic Interferometry Whereas the holographic methods outlined above have been suggested as methods for measuring the amplitude of roughness, other possible uses exist. These are concerned with surface classification possibilities. Holograms have been used as cross-correlation filters for the automatic recognition of various spatial parameters [194,195]. It may be that this approach could be used to recognize various machine-fault conditions. However, there would be enormous practical difficulty in implementing such a technique. The most important application of holography is not essentially in surface metrology as such, but in strain and vibration measurement. The reason for this is profound. Therefore some explanation will be given here in simple terms. Conventional interferometry involves fringe formation between optical wavefronts that have been divided either by wavefront or by amplitude. Both methods involve spatial separation of the beams and their subsequent recombination. Interferometry using holograms can be different. In ordinary interferometers the two beams are spatially separated at some point before recombination and one of the beams is modulated either by a movement of a surface or a refractive index change. In holographic interferometry the two beams can be separated in time. The modulation is caused by a change of a single-spatial path with time. Hence, holography allows the possibility of interferometry in time. In Figure 4.180 temporal path 2 could be different
(a)
Spatial path
Normal interference pattern
Laser
Semi-silvered mirror
Hologram
Original position Position under vibration
Hologram
Laser
Fringes showing strain
FIGURE 4.181â•… Holography for vibration for strain.
from path 1 because of a spacing change between the object and the mirror. The technique for strain measurement or vibration measurement using holographic interference (Figure 4.181) is sometimes called double-pulse holography because the two exposures of the interferogram are taken with pulses a short time apart. There is an extensive literature on this subject, so it will not be pursued here. However, some very interesting variants have emerged, such as that developed by Abramson [196], who uses double photographic plates for each exposure. With this extra plate a degree of freedom is made available which would not be otherwise. This is the effective directioning of the reference beam after the hologram has been made. But this “sandwich” holography has problems with exposure times. Holography has also been used as a means of making lenses and various other optical devices such as space frames in 3D measurement. Basically holography has not been used effectively as a general purpose tool in surface measurement, but there are other possibilities open with coherent light, such as the use of the speckle phenomenon. Holographic methods are therefore not specifically useful for surface roughness evaluation. They are useful in the same sense that interferometry is useful but no more. No instrument intended specifically for roughness measurement using holograms has been made commercially, nor is it likely to be. Surface texture is more important for its property of degrading holograms.
4.4.8â•…Speckle Methods Spatial path 2 (b)
Temporal path
Time 1 Holographic interference Time 2 pattern
FIGURE 4.180â•… Conventional (spatial) interference (a) and temporal interference (b).
One of the problems encountered with holography is the granular effect seen from diffuse objects when illuminated by coherent light. This granularity, called speckle, is now a useful metrological tool rather than the nuisance it was originally branded as. It has been point out by Erf [197] that speckle phenomena were known long before the laser was invented but it has only been recently that applications have been found for it. There are two basic methods; direct laser speckle photography and speckle interferometry.
358
Handbook of Surface and Nanometrology, Second Edition
In principle there are three functions that control the nature of the observed speckle:

1. Coherence of the source
2. Nature of the scattering medium
3. Aperture of the imaging system

Speckle has two very important properties: contrast and the number of spots per unit area. The contrast can be shown to have a relationship to the correlation length of the surface, whereas the density is more concerned with the resolving capability of the imaging system. Surface information has been obtained by using the contrast of speckle patterns produced in the first instance near to the image plane and, second, near to the diffraction or defocus plane. Polychromatic speckle patterns have also been used, as have the correlation properties of two speckle patterns. From these methods roughness values from 0.01 to 25 µm have been estimated.

Simply put, the theory is as follows. The way in which speckle is viewed is shown in Figure 4.182. The wavefronts reflected from the surface and picked up by the optical system interfere with each other to create randomly varied speckles of various sizes throughout the space covered by the reflected light and projected onto the screen. The pattern is unique to the surface texture of the object, the illumination, and the viewing direction. The size of the speckles depends only on the angular extent over which the scattered light is received. Thus, if an imaging system is used to record the speckles, the resulting speckle size will be inversely proportional to the resolution of the system. The speckle size Sp is

Sp = 1.2λ/(d/D)    (4.221)

for the surface far removed, and

Sp = 1.2λF    (4.222)

where F is

aperture ratio = (distance to image)/(aperture diameter)    (4.223)

= f × (1 + m)    (4.224)

where m is the magnification and f is the f-number of the lens. This is a gross simplification which demonstrates only the influence of the optical system. To get some idea of the influence of the surface it is necessary to consider some more complex ideas, following here the introduction given by Jones and Wykes [198]. Figure 4.183 shows in principle what happens. The path difference of many of the waves hitting P from the surface will vary by more than λ because the surface roughness may be larger than this. Also, because the source of the rays could be from anywhere in the region of the illumination, there will be a massive summation of rays at any one point in the viewing plane. Because of the randomness of the surface (usually implied), the way in which the summation shows itself as a resultant intensity will vary point by point on the viewing plane. Speckle is this variation in intensity when examining the intensity at P1 or P2 or anywhere else on the screen. Goodman [199] has shown that the probability density of the intensity at a point P lying between I and I + dI is

p(I)dI = (1/⟨I⟩) exp(−I/⟨I⟩) dI    (4.225)

where ⟨I⟩ is the intensity of the speckle pattern averaged over the received field—the variance of which depends on the number of scattering points within the illumination area—which depends critically on the correlation length of the surface. This obviously assumes a Poissonian probability within the interval. Using this breakdown, the mean square value of the intensity is 2⟨I⟩², so that the standard deviation of the intensity is

σ = (⟨I²⟩ − ⟨I⟩²)^1/2 = ⟨I⟩.

If the intensities of the scattered light at two points are compared, then if they are close the intensities will be correlated, but if far apart they will not. The mean speckle size is related to the autocorrelation function of the intensity distribution A(r1, r2).

FIGURE 4.182 Formation of speckle patterns.

FIGURE 4.183 Average speckle.
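The negative exponential density of Equation 4.225, and the consequence that the standard deviation of the intensity equals its mean (unit contrast), can be checked directly by simulating the random summation of rays described above. The following Python sketch is purely illustrative—the numbers of scatterers and trials are arbitrary choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_intensity(n_scatterers=200, n_trials=20_000):
    """Sum unit-amplitude phasors with uniformly random phases,
    one sum per trial, and return the resulting intensities."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
    return np.abs(field) ** 2

I = speckle_intensity()
# For fully developed speckle p(I) = (1/<I>) exp(-I/<I>), so sigma -> <I>.
print(f"<I> = {I.mean():.3f}  sigma = {I.std():.3f}  contrast = {I.std()/I.mean():.3f}")
```

The printed contrast comes out close to unity, as Equation 4.225 demands for a fully developed pattern.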
The correlation function of intensity pattern points separated by Δx and Δy in the observation plane is

A(Δx, Δy) = ⟨I⟩²[1 + sinc²(LΔx/λz) sinc²(LΔy/λz)]    (4.226)

where z is the distance between the object and viewing plane and L × L is the size of the illuminated object (assumed to be uniform). The average speckle size is therefore the Δx (or Δy) for which the sinc function first becomes zero, yielding

Δx = λz/L.    (4.227)

Physically this corresponds to the explanation given by Reference [198] (Figure 4.183). The path length is

S = P1Q − P2Q ≈ xL/z + L²/2z    (4.228)

and the difference to Q′ is approximately

xL/z + L²/2z + ΔxL/z.    (4.229)

The change from Q to Q′ is therefore ΔxL/z, from which Equation 4.227 results. So, from Equation 4.227, as the area of illumination increases the "size" of the speckle decreases. This type of speckle is termed "objective" because its scale depends only on the position in which it is viewed and not on the imaging system used to view it. As a screen is moved away from the surface, the speckle pattern will continually change. Some mention of this will be made when fractal behavior is being considered.

Image plane speckle is shown in Figure 4.184. Here

QQ′ = 1.22λv/a.    (4.230)

The size of the speckle can be taken to be twice this (2.4λv/a). The distance P1P2, which is the radius r of the element of the object illuminating Q, is

r = 1.22λu/a    (4.231)

where u is the object distance. Goodman [199] has discussed an expression for the autocorrelation function of the image plane speckle:

C(r) = ⟨I⟩²[1 + |2J1(πar/λv)/(πar/λv)|²]    (4.232)

from which the speckle size is taken when the separation between the two first minima of the Bessel function is measured:

dspeckle = 2.4λv/a.    (4.233)

FIGURE 4.184 Image plane speckle.

The maximum spatial frequency is determined by the size of the lens aperture a and by v, the lens-image distance. This is called subjective speckle because it depends on the viewing apparatus. In principle, some idea of the correlation length of the surface can be found by varying the aperture size and measuring the standard deviation of the intensity. If the aperture is small, there are fewer independent points on the surface which contribute to the speckle.

Following Asakura [200], the basic theory is as follows. The speckle pattern formed at the image and diffraction planes is shown in Figure 4.185. The object at P1 is transmitted via the imaging system by a lens system having a spread function (the inverse transform of the optical transfer function (OTF)) given by S(P1P2). The speckle intensity is given by

Isp(P2) = |A(P2)|² = |∫ S(P1P2) A(P1) dP1|²    (4.234)

where A(P1) is the complex amplitude reflection at P1 and A(P2) the complex amplitude at P2. In general

A(P1) = A exp(jφ)    (4.235)

so if the surface is smooth (and assuming that the amplitude factor is more or less constant with angle), the smoother the surface the smaller the variation in phase. This results in a reduction in the contrast V of the speckle, defined as

V = [⟨I²sp(P2)⟩ − ⟨Isp(P2)⟩²]^1/2 / ⟨Isp(P2)⟩    (4.236)

where ⟨ ⟩ indicates an average value over the ensemble of scattering points.

FIGURE 4.185 Speckle pattern at image and diffraction planes.
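As a numerical illustration of Equations 4.227 and 4.233, the objective and subjective speckle sizes can be evaluated directly. The wavelength, distances, and apertures below are arbitrary illustrative choices, not values from the text:

```python
lam = 0.6328e-6   # He-Ne wavelength (m), an assumed illustrative source
z = 0.5           # object-to-screen distance (m)
L = 5e-3          # side of the uniformly illuminated L x L patch (m)
v = 0.1           # lens-to-image distance (m)
a = 10e-3         # lens aperture diameter (m)

objective = lam * z / L          # Equation 4.227
subjective = 2.4 * lam * v / a   # Equation 4.233
print(f"objective speckle ~ {objective * 1e6:.1f} um")
print(f"subjective speckle ~ {subjective * 1e6:.1f} um")
```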
Consider surfaces which are relatively smooth:

A(P2) = a(P2) + c(P2)    (4.237)

where A(P2) is the amplitude of the speckle corresponding to a certain point P2 of the observation plane, a(P2) is the diffuse component of the speckle and c(P2) is the specular component. Assume that Gaussian statistics apply, which in turn presupposes that the correlation length of the surface is small compared with the illuminated area (failure to follow this is treated in Ref. [201]). The feature that determines the contrast variation in the speckle pattern is the number of independent surface features within the illumination area, in much the same way that the variance of a statistical parameter is dependent on the number of observations. Letting

Re(A(P2)) = ar + cr,  Im(A(P2)) = ai + ci    (4.238)

σr² = ⟨Ar²⟩ − ⟨Ar⟩² = ⟨ar²⟩,  σi² = ⟨Ai²⟩ − ⟨Ai⟩² = ⟨ai²⟩    (4.239)

the average diffuse component ID and specular component Is are

ID = ⟨|A(P2)|²⟩ − |⟨A(P2)⟩|² = σr² + σi²,  Is = |⟨A(P2)⟩|² = cr² + ci².    (4.240)

The probability density function of the speckle is obtained from the joint probability density of Ar and Ai and is assumed to be Gaussian:

p(Ar, Ai) = [1/(2πσrσi)] exp{−[(Ar − cr)²/2σr² + (Ai − ci)²/2σi²]}    (4.241)

assuming the cross-correlation between Ai and Ar is zero. The average contrast V of the speckle is given by

V = [2(σr⁴ + σi⁴) + 4(cr²σr² + ci²σi²)]^1/2 / (σr² + σi² + cr² + ci²).    (4.242)

Asakura [200] transforms Equation 4.241 using Ar = I^1/2 cos Ψ and Ai = I^1/2 sin Ψ, giving

p(I) = [1/(4πσrσi)] ∫₀^2π exp{−[(cos²Ψ/2σr² + sin²Ψ/2σi²)I − I^1/2(cr cosΨ/σr² + ci sinΨ/σi²) + (cr²/2σr² + ci²/2σi²)]} dΨ.    (4.243)
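Equations 4.241 and 4.242 lend themselves to a quick numerical check: the analytic contrast should agree with a Monte Carlo estimate obtained by sampling the Gaussian components directly. The sketch below uses arbitrary illustrative values for σr, σi, cr, and ci:

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(sr, si, cr, ci):
    """Speckle contrast V of Equation 4.242 for a partially developed
    pattern: diffuse variances sr^2, si^2, specular components cr, ci."""
    num = np.sqrt(2.0 * (sr**4 + si**4) + 4.0 * (cr**2 * sr**2 + ci**2 * si**2))
    return num / (sr**2 + si**2 + cr**2 + ci**2)

sr, si, cr, ci = 0.5, 0.4, 1.0, 0.2    # assumed example values
Ar = rng.normal(cr, sr, 1_000_000)     # real part drawn from Eq. 4.241
Ai = rng.normal(ci, si, 1_000_000)     # imaginary part drawn from Eq. 4.241
I = Ar**2 + Ai**2                      # intensity samples
print("analytic V:", contrast(sr, si, cr, ci))
print("sampled  V:", I.std() / I.mean())
```

The two estimates agree to within sampling error, which is a useful sanity check on the reconstruction of Equation 4.242.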
These equations represent those required for speckle investigation. If the forms for σr² and σi², and cr and ci, are known, the variations in speckle and the joint density can be found. As will be seen later, the p(I) term is useful in describing the nature of the surface spectral characteristics. The important question is whether or not it is possible to extract surface finish information from the speckle pattern. There are two basic features that influence the speckle variability: one is the likely phase variations resulting from the amplitude of the roughness heights, and the other is the likely number of scatterers that could contribute to the pattern at any point. Researchers have highlighted the critical role the imaging system plays in determining the actual number of independent surface features which make up the intensity at any one point in space [155–157]. It is obvious that the speckle intensity is considerably influenced by the number of scatterers contributing to the intensity: the amount of variation will be lower the larger the number of contributors. The surface roughness parameter which determines the unit scattering element is the correlation length of the surface, defined previously. Obviously this is relatively straightforward to define and measure for a random surface, but not easy for a periodic surface or, worse, for a mixture of both. The weakness of all the approaches has been the problem of attempting to estimate the roughness parameters without making global assumptions as to the nature of the statistics of the surface under investigation. Beckmann and Spizzichino [205] started by suggesting a Gaussian correlation function within the framework of Kirchhoff's laws. This is not a good idea, as will be shown in the section on optical function; probably the confusing influence of the short-wavelength filtering produced by the instrumental verification of the surface model prompted it. Far more comprehensive models should be used. However, this produces problems in mathematical evaluation [206]. Pedersen [203] used such complex models in a general form and showed that the speckle contrast is highly dependent on the surface height distribution, thereby tending to limit the usefulness of the method in practice. Partially coherent light has been used with some success, in particular by Sprague [202], who found that if the surface roughness height was comparable with the coherence length of the light—which implies quite a large bandwidth in fact—then the speckle pattern in the image plane near to the lens did bear some relationship to the roughness. Furthermore, a meaningful correlation was not confined to one process [204]. This work has been carried further with polychromatic light, resulting in
P1² = P1(K0)²/[1 + (2WRq)²]^1/2    (4.244)

and

P1(K0)² = 1 − {1 + N⁻¹[exp(Rq²K0²) − 1]}⁻²    (4.245)

Rq being the RMS roughness of the surface, W the bandwidth of the light, P1(K0) the contrast of the speckle pattern for monochromatic light at mid-band, and N the number of independent events (facets) within the illuminated area.
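Taking Equations 4.244 and 4.245 at face value as reconstructed above, the polychromatic speckle contrast can be evaluated as a function of Rq. In the sketch below the bandwidth W is treated as a wavenumber spread so that 2WRq is dimensionless; all numerical values are assumptions for illustration only:

```python
import numpy as np

def poly_contrast(Rq, W, K0, N):
    """Polychromatic speckle contrast from Equations 4.244 and 4.245.
    Rq: rms roughness (m), W: bandwidth (1/m), K0: mid-band
    wavenumber (1/m), N: number of independent facets."""
    P1K0_sq = 1.0 - (1.0 + (np.exp((Rq * K0) ** 2) - 1.0) / N) ** -2  # Eq. 4.245
    return np.sqrt(P1K0_sq / np.sqrt(1.0 + (2.0 * W * Rq) ** 2))      # Eq. 4.244

K0 = 2.0 * np.pi / 0.55e-6    # mid-band at 550 nm (assumed)
for Rq in (0.01e-6, 0.05e-6, 0.1e-6, 0.5e-6):
    print(f"Rq = {Rq*1e6:.2f} um -> contrast ~ {poly_contrast(Rq, 2e6, K0, 100):.3f}")
```

The contrast rises steeply with roughness and then saturates, which is the behavior sketched in Figure 4.186.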
Other methods based on Sprague's work [202] have been attempted but so far do not appear to be very successful. This is because they are basically not versatile enough to be generally applicable—and it is this that counts in instrumentation! The results by Sprague have been queried by suggesting that perhaps the method has not been completely verified. Fundamentally the technique depends on plotting the contrast ratio as a function of Ra (or Rq) or, alternatively, of the number of effective scatterers (Figures 4.186 and 4.187). The argument is that using these graphs the surface roughness (Rq or Ra) can be found. Figure 4.187 will be explained later, when very few scatterers are considered or when phase effects which are large compared with the wavelength of light are present. Leonhardt introduces a number of variants, one of which is the use of a certain amount of incoherent background light together with some degree of defocus. He calls the method white-light phase contrast detection and gives some results for different surfaces with varying degrees of added intensity. This is shown in Figure 4.188, in which intensity is plotted on a log scale and t is the amount of white light added. The basic idea behind this is that the ratio of constant light to scattered light is less favorable for a high degree of surface roughness, with the result that the measured contrast is smallest and the graph is steeper for larger values of t. The whole theory is complicated, but much effort has been expended in trying to explain the problems encountered in radar and similar transmissions through media which introduce both amplitude and phase variations. One of the best ways of understanding problem phenomena is to build a model for testing.
Many workers have done this using either real surfaces in reflection as a model or a phase screen having a rough surface to produce the phase changes: from the observed scattering some conclusion can be expected to be obtained about the scatterer—the so-called "inverse scatter problem." Goch [207, 208] attempted two other ways of getting roughness information from speckle patterns: polychromatic speckle autocorrelation and double-scattered speckle. In the first, a rough surface is illuminated by three laser diodes and the tri-chromatic light scattered from the surface is collected by a camera, as shown in Figure 4.189. This arrangement was subsequently modified to use a super-bright laser diode whose spectrum spanned the spectrum of the three laser diodes, as shown in Figure 4.190. The method is based on an observation made many years ago by Parry [209], who observed what he called a speckle elongation. The fact is that the roughness affects each of the wavelengths slightly differently, but similarly, which results in a slightly asymmetric scatter pattern whose elongation seems to be influenced significantly by the roughness. A large roughness gives three independent speckle patterns superimposed on each other, in which the asymmetry disappears. For the theory see Ref. [209]. According to Goch, one method of characterizing the roughness is to take the ratio of the mean speckle diameter in the x direction, DX, to that in the y direction, DY; these diameters can be estimated from the autocorrelation function of the speckle pattern in the orthogonal directions. Figure 4.191 gives some typical results. The first curve is for ground surfaces and the second for EDM. Both demonstrate a significant sensitivity to the value of roughness. The ordinate value shown is arbitrary: it depends on the roughness but also on the optics and the detector geometry, so it is not to be compared directly with other similar attempts unless all the apparatus and software variables have been matched properly. The apparatus shown in Figure 4.190 is in the classic "4f" configuration used for optical filtering, seen later in the section on diffraction. The light source this time is a super light-emitting diode (SLED), replacing the tri-chromatic set-up used earlier. Figure 4.192 shows the difference in the way a surface (Ra = 0.8 µm) reacts to the two methods of illumination.
FIGURE 4.186 Variation in speckle contrast with roughness.

FIGURE 4.187 Speckle contrast as a function of the number of reflectors.

FIGURE 4.188 Intensity dependence of white-light addition (log intensity versus log Ra for various amounts t of added white light).
FIGURE 4.189 Poly-chromatic (tri) speckle autocorrelation. (After Lehmann, P. and Goch, G., CIRP Annals, 49/1, 419–22, 2000. With permission.)
FIGURE 4.190 Bright diode speckle pattern apparatus. (After Goch, G., CIRP Annals, 57/1, 409–12, 2008. With permission.)
FIGURE 4.191 Results of optical roughness characterization by means of the trichromatic speckle (optical roughness parameter, arbitrary units, versus roughness Ra in µm): (a) ground, (b) EDM. (After Lehmann, P. and Goch, G., CIRP Annals, 49/1, 419–22, 2000. With permission.)
Both speckle patterns reveal a nearly circular speckle shape near the optic axis (region A). In region B the roughness-dependent elongation forms a fibrous structure, but there is a difference between the two: in the tri-chromatic case speckles of different wavelength separate from each other with increasing distance from the optic axis, with less elongation (region C), whereas the continuous spectrum shows, not unexpectedly, a continuous elongation but more resolution. It is possible to get a measure of the roughness by using just DX. A comparison is shown in Figure 4.192.
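The elongation measure DX/DY can be estimated from the autocorrelation of a recorded speckle image. The following Python sketch is a rough illustration of that idea on synthetic, deliberately anisotropic speckle; it is not the authors' published algorithm, and the bandwidths and threshold are arbitrary:

```python
import numpy as np

def acf_widths(img):
    """Half-width at half-maximum of the normalized intensity
    autocorrelation along x and y (Wiener-Khinchin via the FFT),
    as a crude proxy for the mean speckle diameters DX and DY."""
    I = img - img.mean()
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real)
    acf /= acf.max()
    cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
    def hwhm(profile):
        below = np.nonzero(profile < 0.5)[0]
        return int(below[0]) if below.size else len(profile)
    return hwhm(acf[cy, cx:]), hwhm(acf[cy:, cx])

# Synthetic anisotropic speckle: random phase filtered by an elliptical band.
rng = np.random.default_rng(3)
f = np.fft.fftfreq(256)
fx, fy = np.meshgrid(f, f)
mask = np.hypot(fx / 0.08, fy / 0.03) < 1.0
field = np.fft.ifft2(np.exp(2j * np.pi * rng.random((256, 256))) * mask)
dx, dy = acf_widths(np.abs(field) ** 2)
print(f"DX ~ {dx} px, DY ~ {dy} px, elongation DX/DY ~ {dx/dy:.2f}")
```

The wider spectral band along x gives finer speckle in that direction, so the ratio departs from unity—exactly the asymmetry that the method exploits.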
Figure 4.193 shows the comparison of the measured estimates over a range of roughnesses. It is clear that the bright laser of narrow bandwidth shows considerably more sensitivity than the tri-chromatic spectrum of the same nominal bandwidth. Another benefit is the fact that the apparatus is
much smaller and so lends itself more to in-process gauging. The remaining problem is that of light intensity, which is still relatively low despite the use of the super-bright LED. There is a clear advantage to using the continuous laser, but having to measure the degree of variation rather than a direct parameter value is a disadvantage. It would be useful in such an investigation to see whether there is any discrimination at all in either method between surfaces which have the same Ra value. The other approach, also used by Lehmann and Goch, is called the double scatter method, in which the test surface is illuminated by a speckle pattern generated on another surface, as seen in Figure 4.194; for details see Ref. [211]. This again uses the "4f" system, giving the correct matched phase relationship of Fourier diffraction at the object plane, and the second scattering is picked up in the Fresnel region by a CCD camera. The former part of the configuration makes in-process measurement possible. The basic idea is that when the speckle pattern generated at the diffuser is imaged onto the workpiece it modulates the speckle pattern formed by the interaction of the coherent light with the surface roughness. The degree of modulation depends on the roughness. Figure 4.195 shows some results. The ordinate unit in Figure 4.195 is based on the slopes of the areal correlation functions of the speckle intensity near to the origin.
FIGURE 4.192 Comparison between tri-chromatic and continuous (narrow band) speckle patterns. (After Lehmann, P., Patzaelt, S., and Schone, A., Appl. Opt., 36, 2188–97, 1997. With permission.)
FIGURE 4.193 Measurement results of ground surfaces using (a) SLED diode, and (b) tri-laser 659 nm, 675 nm, and 690 nm. (After Lehmann, P. and Goch, G., CIRP Annals, 49/1, 419–22, 2000; Goch, G., CIRP Annals, 57/1, 409–12, 2008. With permission.)
FIGURE 4.195 Double scatter speckle roughness characterization. (After Lehmann, P., Appl. Opt., 38, 1144–52, 1999. With permission.)
FIGURE 4.194 Double exposure speckle characterization of surfaces. (After Balabois, J., Caron, A., and Vienot, J., Appl. Opt. Tech., August, 1969; Lehmann, P., Appl. Opt., 38, 1144–52, 1999. With permission.)
These correlation functions appear to have been simulations rather than practical functions, which are difficult to obtain. This is a pity, because no amount of simulation makes up for practical experiments. However, these graphs are meant to demonstrate that there is some information concerning the surface roughness which could be used as a basis for surface characterization; whether they could be of general use remains to be seen. Optical methods are notorious for being sensitive—too sensitive for practical usage: they respond to unwanted stimuli. Notwithstanding these reservations, it seems that the double scatter method could be useful over a practical range of surface roughness.

If a third medium of refractive index n3 > n2 is brought sufficiently close to a totally internally reflecting boundary, the tunneling photons will couple, thus frustrating the total internal reflection. A linear change in the separation z causes an inverse and exponential change in the energy transfer, so that if the surface of medium 3 is a rough surface the light reflected at the boundary in medium 1 is modulated according to the roughness. Hence the microtopography is transformed into a greyscale image. This greyscale image has to be translated into height information by means of a calibration surface of known geometry, usually a hemisphere [296]. Figure 4.258 shows the usual configuration. Contact or near contact between the transducer plane and the specimen causes frustrated total internal reflection by photon tunneling; this shows up as black spots on the image. The method has been used to ascertain the contact between gears by means of perspex two-disc machines. While the vertical resolution of photon tunneling microscopy is about the same as that of the SEM, the shorter wavelength of the electron yields superior lateral resolution. The photon tunneling microscope does, however, have one advantage: dielectrics can be viewed directly without coating.
FIGURE 4.258 TIR mode of microscope: (a) configuration, (b) image.
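The exponential dependence of the energy transfer on the gap z can be made concrete with the standard evanescent-field penetration depth for total internal reflection. The text gives no explicit formula, so the expression and the numerical values below (glass/air boundary, He-Ne illumination, 60° incidence) are supplementary assumptions:

```python
import numpy as np

def penetration_depth(lam, n1, n2, theta_deg):
    """1/e intensity decay depth of the evanescent field beyond a
    totally internally reflecting boundary between media n1 and n2."""
    theta = np.radians(theta_deg)
    root = n1**2 * np.sin(theta)**2 - n2**2
    if root <= 0:
        raise ValueError("below the critical angle: no total internal reflection")
    return lam / (4.0 * np.pi * np.sqrt(root))

d = penetration_depth(632.8e-9, 1.51, 1.00, 60.0)
print(f"penetration depth ~ {d*1e9:.0f} nm")
for z in (0.0, 50e-9, 100e-9, 200e-9):
    print(f"gap z = {z*1e9:5.0f} nm -> relative coupling ~ {np.exp(-z/d):.3f}")
```

With a decay depth of a few tens of nanometers, gap changes far below the wavelength of light produce large, easily detected changes in coupling—which is why the greyscale image carries nanometer-scale height information.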
Vibrational spectroscopy is concerned with the spectra produced when photons of a given frequency impinge on a material, which might be one or many molecules, causing vibration which results in photons being inelastically scattered at a different frequency. This is not the same as fluorescence, in which the photons are absorbed and some time later a photon is re-emitted at a lower frequency. There are two types of vibration spectra: IR spectroscopy and Raman spectroscopy. Both are responses to photon irradiation but are due to different characteristics brought about by the ensuing vibration of the molecule or molecules. IR spectroscopy is sensitive to a change in dipole moment as a function of vibration, whereas Raman spectroscopy is sensitive to the degree of "polarizability" as the molecule undergoes vibration. Polarizability is taken to mean the ease with which a molecule placed in an electric field redistributes its charge. IR and Raman spectroscopy are often complementary. Their theory and use were developed mainly by physical chemists investigating the intricacies of the structure and dynamics of molecules and atoms. Features such as bond lengths and angles are examples of structural details of interest. The basic physics is as follows and is concerned with the interaction between a dipole moment and an electric field; in this case the dipole is induced by the field. Let the induced dipole be µind = αE, where α is the polarizability and E is the field; then, using Taylor's theorem,
α = α0 + α′x + α″x² + …    (4.322)

If the electric field is E = E0 cos 2πν0t, where ν0 is the excitation frequency, then, neglecting second-order terms, µind = (α0 + α′x)E0 cos 2πν0t. The time-dependent displacement is x = xm cos 2πνvt, where xm and νv are the vibration amplitude and vibration frequency, respectively, so that

µind = α0E0 cos 2πν0t + (E0α′xm/2)[cos 2π(ν0 − νv)t + cos 2π(ν0 + νv)t].    (4.323)
The first term is Rayleigh scattering, which is at the same frequency as the excitation frequency and is large in value. The second term is the Raman scattering and has frequencies which are the sum and difference of the excitation and vibration frequencies. The frequency bands below the excitation frequency are called the Stokes lines and those above it are called the anti-Stokes lines. The Raman scattering is much weaker than the Rayleigh scattering, and as a result the Rayleigh component usually has to be suppressed. It is usually the Stokes lines which are of most interest in Raman scattering. One problem which sometimes arises is confusion with fluorescence, because the two occupy roughly the same possible range of visible light, but there is a fundamental difference in their photon emission: Raman emits within femtoseconds whereas fluorescence emits in nanoseconds, which is slow for many applications. Raman spectroscopy has many applications in surface science. For example, one use is in the investigation of carbon surfaces, which has revealed that the Raman spectra correlate with electrochemical reactivity [297]. It is also proving to be invaluable for looking at the structural properties of carbon nanotubes and graphenes. Raman spectroscopy is used in non-destructive testing, for example checking ceramic coatings and interfaces of organic–aqueous materials in colloids, as well as metal–liquid-like films (mellfs). It is of especial use in the investigation of the water molecule and its chemical and physical behavior, for example in the intra-molecular vibration modes and in the properties of hydrogen-bond stretching and bending, all important in hydrophilic and hydrophobic behavior. It is also used in the quality control of cleaning fluid for semiconductors. Raman scattering was first reported in 1928 by C. V. Raman and K. S. Krishnan [298] and for some years was hardly used, but with the growth of molecular engineering and miniaturization its use has been expanding exponentially. One reason for it being taken up was the use of the Fourier transform [299]. In this a long-wavelength laser such as Nd:YAG at 1.064 µm wavelength is focused onto the specimen. The scattered light is then sent into a Michelson interferometer where an interferogram is produced, which is digitized and then processed to obtain a spectrum of high resolution covering a wide range of wavenumbers. This made it suitable for spectral subtraction so that background noise could be removed. It is being increasingly used because of the ease with which it can be incorporated into microscopes. It is more convenient than IR spectroscopy because of its shorter wavelength: it is better for backscatter use, whereas IR is more suitable for transmission. An example of how Raman can be incorporated into a confocal microscope is shown in Figure 4.259 [300].
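The sideband structure of Equation 4.323 translates directly into the Stokes and anti-Stokes wavelengths. A minimal sketch follows, with Raman shifts quoted in the conventional cm⁻¹; the 532 nm excitation and 1332 cm⁻¹ shift are assumed worked-example values, not taken from the text:

```python
def raman_lines(excitation_nm, shift_cm1):
    """Stokes and anti-Stokes wavelengths (nm) for a given excitation
    wavelength and Raman shift: the nu0 -/+ nu_v sidebands of Eq. 4.323."""
    nu0 = 1e7 / excitation_nm              # excitation wavenumber in cm^-1
    stokes = 1e7 / (nu0 - shift_cm1)       # shifted to lower frequency
    anti_stokes = 1e7 / (nu0 + shift_cm1)  # shifted to higher frequency
    return stokes, anti_stokes

s, a = raman_lines(532.0, 1332.0)
print(f"Stokes ~ {s:.1f} nm, anti-Stokes ~ {a:.1f} nm")
```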
4.7 COMPARISON OF TECHNIQUES—GENERAL SUMMARY

A basis for comparison of instruments used for research purposes is often taken to be the diagram proposed by Margaret Stedman [301].
FIGURE 4.259 Confocal Raman microscope. (After Turrell, G., and Corset, J. (Eds), Raman Spectroscopy: Developments and Applications, Academic, New York, 1996.)
This basic plot has the vertical amplitude as the ordinate and the wavelength as the abscissa—the AW plot. Various attributes of the different instrument classes can be conveniently displayed on this picture, which is useful in comparing some aspects of their performance. Jones and Leach [302] of the NPL have indicated some of these attributes in Figure 4.260a. In Figure 4.260b three of the most important instrument families are compared. It is assumed that these curves refer to commercial instruments rather than special cases of instruments developed in research or standards laboratories. Another Stedman-type figure has been prepared by Jeol (UK) Ltd and is reported here by Smith [285]; in it the SEM, TEM, and Auger spectrometry are also represented, but the stylus technique is omitted. The general trends identified in these figures look plausible even though there are some differences. Figure 4.260b shows some interesting but strange features, such as the slopes contained in the extreme left of the AFM and stylus graphs, which indicate that the amplitude resolution or sensitivity changes as the square or more of the wavelength limit. Is this meant to be due to stylus shape? Nevertheless, as such figures inevitably have some subjective element in them, as well as the compiler being dependent on sometimes optimistic, out-of-date, or revised specifications (as for the SEM in Figure 4.261 in terms of height fidelity), the graphs serve a useful purpose in putting some aspects of performance into perspective. One of the problems with such presentations is that there is no precise definition of terms for setting out the boundaries of instrument performance, which makes any comparison somewhat indefinite. What is glaringly obvious with any Stedman figure is the omission of any reference to time. One of the most critical factors in any practical situation is the measurement time, yet it is often completely neglected. Presumably this is because it is laboratory performance which is being reported rather than usage in the field. Another omission is the overall capability of the instrument rather than just the limitations of certain aspects of its use on sine waves, although this is an acceptable way of comparing the basics.
FIGURE 4.260 Stedman plot of instruments: (a) general possibilities, with typical constraints in the traditional AW space (length of side = criticality of component or parameter; equality of sides = balance of design; area = versatility); (b) the generic AW performance of three instrument categories—stylus, AFM, and optical. (After Jones W.J. and Leach, R.K., Meas. Sci. Technol. 19 055015 (7pp), 2008. With permission.)
A much more representative picture of the performance limitations of instruments would be obtained by using their response to random surfaces, rather than to sinusoids as in the current AW graph, and by having the abscissa as the average wavelength of a moving bandwidth. The difference in approach can be demonstrated by referring to a comparison of performance between optical and stylus instruments by Whitehouse (Figure 4.233, from the 1970s), repeated below as Figure 4.262, in which the criteria are response time and vertical range-to-resolution ratio. The former, the Whitehouse method, is basically a practical application guide for instruments, whereas the Stedman method is an instrument calibration specification guide. It could be argued that this figure is at fault because wavelength is omitted, but this is to some extent implicit in the measurement of response time. Considerable effort has been expended in this chapter on the dynamics of measurement covering different sensors as well as different types
of surface. It seems sensible therefore to attempt to fuse the two comparison methods, or something similar, together in a meaningful way. Fortunately metrologists have been aware of this discrepancy and are making attempts to develop a more integrated, comprehensive approach to instrument performance specification. Notable amongst these are Leach and Jones of the NPL [302], who have proposed the addition of a dynamic axis to the basic AW graph. One version is shown in Figure 4.263, in which a third axis, tracking velocity, has been incorporated into the Stedman amplitude and wavelength axes. Because their approach has been very comprehensive it is quite complicated to unravel. It remains to be seen how well this method of presentation and evaluation can be used in practice, but it is definitely a step in the right direction!
FIGURE 4.261 Comparison of specifications (vertical resolution versus lateral resolution) including the SEM and TEM but excluding the stylus method. STM/AFM: scanning tunneling/atomic force microscope; SEM: scanning electron microscope; TEM: transmission electron microscope; AES: Auger electron spectroscopy; XPS: X-ray photoelectron spectroscopy; OM: optical microscope. (Courtesy of Jeol UK; Smith, G. T., Industrial Metrology, Springer, Berlin, London, 2002.)
FIGURE 4.262 Trends in performance between stylus and optical methods: in-process measurement response (Hz) versus range/resolution (integrated measurement) for scatter, diffraction, scanner, mechanical probe, and optical probe techniques.
4.8 SOME DESIGN CONSIDERATIONS

4.8.1 Design Criteria for Instrumentation

Some of the major points of design are given here. For a fuller exposition see Smith and Chetwynd [303] and Moore [304]. The basic questions that need to be addressed in any instrument or machine tool design are as follows:
1. Is it correct according to kinematic theory, that is, does it have the required number of constraints to ensure that the degrees of freedom conform to the movements required?
2. Where are the metrology and force loops? Do they interact? Are they shared and, if so, how?
3. Where are the heat and vibration sources and, if present, how can they influence performance?
4. Is the design symmetrical and are the alignment principles obeyed?
5. What other sources of error are present and can they be reduced by means of compensation or nulling?
These points have been put forward to enable checks to be made on a design. It is not the intention here to investigate them fully. Some basic principles can be noted:
1. Any unconstrained body has six degrees of freedom: three translational (x, y, z) and three rotational (α, β, γ).
2. The number of contact points between any two perfectly rigid bodies is equal to the number of constraints.
3. Any rigid link between two bodies will remove a rotational degree of freedom.
4. The number of linear constraints cannot be less than the maximum number of contacts on an individual sphere. Using the rules for linking balls and counting contacts it is possible to devise a mechanism which has the requisite degrees of freedom, as in the sketch below.
5. It should be noted that a movement is that which is allowed in both the positive and negative direction. Similarly, a constraint is such that movement in one direction or both is inhibited. For example, it is well known that a kinematic design can fall to pieces if held upside down.
6. A specified relative freedom between two subsystems cannot be maintained if there is underconstraint.
7. Overconstraint does not necessarily preclude a motion but invariably causes interaction and internal stresses if used and should therefore be avoided.
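Principles 1 and 2 lend themselves to simple contact counting. The sketch below applies the rule that each point contact removes one of the six degrees of freedom; the examples correspond to couplings described later in this section:

```python
def degrees_of_freedom(contacts_per_element):
    """Principle 2 above: each point contact between perfectly rigid
    bodies removes one of the six degrees of freedom."""
    return 6 - sum(contacts_per_element)

# Kelvin clamp (Figure 4.268): trihedral hole (3) + vee groove (2) + flat (1).
print(degrees_of_freedom([3, 2, 1]))   # -> 0, fully constrained
# Shaft in two vees resting on a flat (Figure 4.267b).
print(degrees_of_freedom([2, 2, 1]))   # -> 1, a single rotation remains
```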
FIGURE 4.263 Jones and Leach instrument performance specification incorporating a dynamic element: the constraint volume plotted in parameter space (front/over and rear/under views), with axes p = log₁₀A, q = log₁₀λ, and r = log₁₀v. Each face represents a constraint—minimum and maximum resolvable amplitude, wavelength, and velocity, probe tip geometry, maximum force, and latency—and not all constraints form part of the final constraint polygon; for the parameter set shown (Amin = 10⁻⁹ m, Amax = 10⁻⁴ m, λmin = 10⁻⁶ m, λmax = 10⁻² m, vmin = 10⁻⁶ m s⁻¹, vmax = 10 m s⁻¹) the maximum velocity and minimum force constraint planes only come close to the polygon. (After Jones W.J. and Leach, R.K., Meas. Sci. Technol. 19 055015 (7pp), 2008. With permission.)
4.8.2 Kinematics

Kinematics in instrument design is an essential means to achieve accurate movements and results without recourse to high-cost manufacture. Figure 4.264 shows simple examples of all degrees of freedom. In surface metrology instruments kinematics are most often used in generating straight-line movements or accurate rotations. In both cases there have to be five constraints, the former leaving a translational degree of freedom and the latter a rotational one. By far the largest class of mechanical couplings is for
one degree of freedom. Couplings with more than one degree of freedom are seldom required, the obvious exception being the screw, which requires one translation and one rotation. Despite its apparent simplicity, two translations cannot be achieved kinematically: x, y slides are merely two systems of one degree of freedom connected together. A typical translation system of one degree of freedom is shown in Figure 4.265. This has five pads on the carriage resting on the slide; the pads are usually of polymer. Another simple system has a tube, which needs to move linearly, resting on four balls (Figure 4.266). The tube has
a slot in it into which a pin (connected to the frame of the instrument) is inserted to prevent rotation. For one degree of rotational freedom the drive shaft has a ball or hemisphere at one end. This connects against three flats suitably arranged to give three points of contact; the other two are provided by a vee made from two rods connected together (Figure 4.267a). Another version (Figure 4.267b) is a shaft in two vees and resting on a flat. As mentioned earlier, these would have to be encouraged
to make the contacts either by gravity or by a non-intrusive spring. There are numerous possibilities for such designs, which really began to see application in the 1920s [1,2]. The other main use of kinematics in surface metrology is in relocation. In this use it is necessary to position a specimen in exactly the same position after it has been removed. Such a requirement means that the holder of the specimen (or the specimen itself) should have no degrees of freedom—it is fully constrained. The most usual arrangement to achieve this is the Kelvin clamp (Figure 4.268) [305]. The six degrees of freedom are constrained by three points of contact in hole C which, ideally, is a tetrahedron, two points of contact in a vee groove at A, and one on a flat at B. Another version is that of two non-parallel vees, a flat and an edge; yet another just has three vees. The relocation requirement usually arises when experiments are being carried out to see how the surface changes as a result of wear or some similar phenomenon. Couplings with two degrees of freedom require four constraints: a bearing and its journal with end motion, Hooke's joint, the so-called "fixed" knife edges, and four-filar suspensions are common examples of this class. One coupling with three degrees of freedom is the ball-and-socket joint; others are a tripod resting on a plane surface, a plane surface resting on another plane, and three filar suspensions. Alternatively, the tripod could rest on a sphere, in which case only rotations are possible.
FIGURE 4.264 Kinematic examples: (a) ball on flat—one constraint (z), five freedoms (α, β, γ, x, y); (b) ball on vee—two constraints (x, z), four freedoms (α, β, γ, y); (c) ball in trihedral hole—three constraints (x, y, z), three freedoms (α, β, γ); (d) ball link in two vees—four constraints, two freedoms; (e) shaft in trihedral hole against a vee—five constraints, one freedom (α).

FIGURE 4.265 Linear movement carriage showing support pads.

FIGURE 4.266 Linear movement tube.

FIGURE 4.267 System showing single rotational degree of freedom.
FIGURE 4.268 Kinematic clamp—six constraints: (a) tetrahedron hole, vee groove, and flat; (b) flat, edge, and vee grooves.

FIGURE 4.269 Specimen modified to allow relocation.
Relocation configurations are sometimes a little complicated by the fact that the specimen has to relocate exactly in more than one piece of apparatus. An example could be where the surface of a specimen has to be measured on a surface roughness measuring instrument and also has to be located in a wear machine. In these cases the specimen holder has to cater for two systems. Under these circumstances the specimen itself may have to be modified in shape or added to, as shown in Figure 4.269. In this case the specimen is cylindrical. Attached to it is a collar which has a stud fixed to it. This collar allows a constraint to be applied to the specimen rotationally, which would not be possible with the simple specimen. A collar or fitting such as this makes multiple relocation possible. Couplings with four degrees of freedom require two constraints; a knife edge contacting a plane bearing is a simple example of this class. Couplings with five degrees of freedom include a ball on a flat. What has been said above about kinematics is very elementary but necessary, and is the basis from which designs should originate. Often the strict kinematic discipline has to be relaxed in order to meet other constraints such as robustness or ease of manufacture, for example by using a conical hole instead of a trihedral hole for the location of spherical feet. Nevertheless the principles still hold true.
FIGURE 4.270 Nut and bolt kinematic constraints (shared contact not shown).
Above all, the hallmark of a kinematic design is its simplicity and ease of adjustment and, not insignificantly, the perfection of motion, which produces a minimum of wear. In general, kinematic instruments are easy to take apart and reassemble. Kinematic designs, because of the formal point constraints, are only suitable for light loads—as in most instruments. For heavy loads pseudo-kinematics are used in preference, because larger contact areas are allowed between the bearing surfaces and it is assumed that the local elastic or plastic deflections will cause the load to be spread over them. The centroids of such contacts are still placed in positions following kinematic designs. This is generally called pseudo- or semi-kinematic design. An example of a case where there is some ambiguity is that of a nut and bolt, shown in Figure 4.270. Obviously it could be considered to have five constraints because it can rotate and move axially, but not freely in either movement. There is not a full constraint for either movement unless, on the one hand, the pitch is infinitely small, in which case there would be one constraint axially and no axial movement, or the pitch is infinitely long, in which case there would be one constraint rotationally and no rotation. So it seems plausible to apportion the extra constraint as a function of the pitch angle θ. A simplistic way to see this is to allocate the constraints in x, y, z, α, β, γ as 1, 1, cos²θ, 1, 1, sin²θ. In other words, two movements can share a constraint.
4.8.3 Pseudo-Kinematic Design

This is sometimes known as elastic or plastic design. Practical contacts have a real area of contact, not the infinitesimal area present in the ideal case in kinematics. As an example, a four-legged stool often has all four legs in contact, yet according to kinematics three would be sufficient. This is because the structure flexes enough to allow the fourth leg to contact. This is a case of overconstraint for a good reason, namely safety in the event of an eccentric load; it is an example of elastic design or elastic averaging. Because, in practice, elastic or eventually plastic deformation can take place in order to support a load, semi-kinematic or pseudo-kinematic design allows the centroid of an area of contact to be considered as the position of an
ideal point contact. From this the rules of kinematics can be applied. In general, large forces are not a desirable feature of ultraprecision designs. Loads have to be supported and clamps maintained. Using kinematic design the generation of secondary forces is minimized. These can be produced by the locking of a rigid structure and when present they can produce creep and other undesirable effects. For kinematics the classic book by Pollard [305] should be consulted. In general another rule can be recognized, which is that divergence from a pure kinematic design usually results in increased manufacturing cost.
FIGURE 4.271 Body with zero degrees of freedom.
FIGURE 4.272 Single pivoted link.
4.8.4 Mobility

The mobility of a mechanism is the total number of degrees of freedom provided by its members and joints. In general a mechanism will have n members, of which (n − 1) provide degrees of freedom, and j joints, the ith of which provides Ci = (6 − fi) constraints. This leads to the Kutzbach criterion:

M = 6(n − 1) − Σᵢ₌₁ʲ Ci = 6(n − j − 1) + Σf    (4.324)

where Σf is the total freedom of the joints. Mobility is a useful concept in that it can give a warning of bad design. It does not, however, indicate a good design simply because the mobility worked out agrees with that which is expected. The Kutzbach criterion can be modified in the case of planar motion, that is, when all elements and movements are in one plane only. In this case it becomes Grubler's equation:
M = 3(n − 1) − 2j₁    (4.325)
where n is the number of links including the base or earth and j₁ is the number of joints having one degree of freedom. This does not mean that only one joint can be at a certain point; more than one can be on the same shaft—in fact two are allowed without the mobility sum being violated. In cases where the mobility is being worked out it should be remembered that no two dimensions can be made to exactly the same size; that is, two links can never be exactly the same length. It may be that, if two or three links are nearly the same, then some movement is possible, but this is only the result of an overconstraint (see Figure 4.271). This figure is constrained to be considered in the plane of the paper. Here n = 5 and j₁ = 6, so M = 3(4) − 12 = 0. It seems, therefore, that the mobility is zero—which is absolutely true because the lengths of the links cannot be equal, meaning that each upper pin describes a different arc. In practice, however, some movement would be possible. In cases where M ≤ 0 common sense should be used; such cases are rarely practical.
FIGURE 4.273 Peaucellier linkage—one degree of freedom.
The case for linear and arcuate movement using leaf springs and their elastic bending capability is important. In particular, they have been used for the two movements fundamental to surface metrology: translation and rotation. Both can be provided using pivoted links, although these are not often used in practice. Obviously the simplest is that of rotation, in which a simple pivoted link is used (Figure 4.272). There are quite complicated mechanisms, devised in the late nineteenth century, for describing linear as well as arcuate motion in a plane. One such, due to Peaucellier, is shown in Figure 4.273. In principle, this provides a linear movement. Application of Grubler's mobility test to this gives a mobility of M = 3(7) − 2(10) = 1, there being eight links and ten joints. The difficulty of manufacture and tolerancing really excludes such designs from consideration in precision applications, but they may soon re-emerge in importance in micromechanics and microdynamics where such mechanisms could be readily fabricated.
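The mobility formulae are easy to mechanize. The following sketch evaluates the Kutzbach criterion (Equation 4.324) and Grubler's equation (Equation 4.325) and reproduces the two worked examples above:

```python
def kutzbach(n, joint_freedoms):
    """Spatial mobility, Equation 4.324: M = 6(n - j - 1) + sum of f."""
    j = len(joint_freedoms)
    return 6 * (n - j - 1) + sum(joint_freedoms)

def grubler(n, j1):
    """Planar mobility, Equation 4.325: M = 3(n - 1) - 2*j1."""
    return 3 * (n - 1) - 2 * j1

print(grubler(5, 6))    # Figure 4.271: M = 0, no motion in theory
print(grubler(8, 10))   # Peaucellier linkage: M = 1, one degree of freedom
```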
4.8.5 Linear Hinge Mechanisms

In instruments measuring in the nanometer regime there is one type of motion that can be used which avoids sliding mechanisms and their associated problems of assembly and wear. This is in the field of small translations or small arcuate movements. In these the bending of elastic elements in flexure mechanisms can be exploited (Figure 4.274). Much of the early work on flexures was carried out by R. V. Jones (see, e.g., Ref. [306]). Figure 4.274a shows a simple example of such a flexible system. Although the table moves in a slight arc, by careful symmetry of the legs the movement can
be made to be parallel to the base. The curved path is reduced in (b), the compound leaf spring, and further reduced in (c), the symmetrical double-compound rectilinear spring. Manufacturing errors will reduce the performance of such schemes when compared with the theoretical expectation. Parasitic movements are also caused by component parts having different tolerances, although use of monolith devices minimizes this effect. Such a case is shown in Figure 4.276.

FIGURE 4.274 Leaf spring elements with nominally one degree of freedom: (a) simple leaf spring movement, (b) compound leaf spring, (c) symmetrical double-compound leaf spring.

The problem with the leaf spring approach is that it has a low stiffness out of the desired plane of movement. This can be reduced somewhat by splitting the spring (Figure 4.275). The compound spring has half the stiffness of the simple spring in the drive direction and therefore twice the deflection for a given maximum stress.

FIGURE 4.275 Split leaf spring hinge.

The monolith approximation to the leaf spring shown in Figure 4.276 has a number of advantages. First, it is easy to manufacture, its stiffness being determined by drilling holes, a task well suited to CNC machines. Second, because it is made from a monolith it must already be assembled, so there can be no strain due to overconstraints. The notch hinge rotation as a function of the applied bending moment is

θ/M = 24KR/Ebt³    (4.326)

where K = 0.565t/R + 0.166. It can be seen that the effective stiffness of the notch is very much dependent on the thickness t, so that great care is needed in the drilling (Figure 4.276).

FIGURE 4.276 Notch hinge approximation to leaf spring.

FIGURE 4.277 Notch hinge calculation: θ/M = 24KR/Ebt³.

In the case of the leaf spring the movement is only a fraction of the length of the leaf. The amount is governed by the geometry and the maximum allowable tensile stress σmax of the leaf material. For the simple spring mechanism of Figure 4.274 the load drive is split between two springs, so the maximum moment in each will be Fl/4 if the load is applied in the line of the mid-point of the springs. Given that the stiffness λ is

λ = 12EI/l³ where I = bh³/12    (4.327)

the permissible displacement δ is

δ = σmaxL²/3Eh.    (4.328)

σmax and E are material properties (σmax can usually be taken as 0.3 of the yield stress for the metal); L and h are the geometrical parameters of interest (Figure 4.277). This movement is usually a small fraction of L (~0.1) and, as can be seen from Equation 4.328, scales down rather badly with size. Consequently this type of motion, although having some advantages, does result in rather large base sizes for small movements. Two-axis linear systems are also possible using the leaf spring approach. The system in Figure 4.278 does have two degrees of "limited" freedom. However, the drive axis is not stationary and so this can cause problems. Another possibility is the parallel kinematic mechanism (PKM).
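Equations 4.326 through 4.328 can be explored numerically to see how strongly the notch thickness t and the leaf geometry control the design. The material constants and dimensions below are arbitrary illustrative values, not recommendations from the text:

```python
def notch_compliance(E, b, t, R):
    """Angular compliance theta/M of a notch hinge, Equation 4.326,
    with K = 0.565*t/R + 0.166 (result in rad per N m)."""
    K = 0.565 * t / R + 0.166
    return 24.0 * K * R / (E * b * t**3)

def leaf_spring(E, b, h, L, sigma_max):
    """Stiffness (Equation 4.327) and permissible deflection
    (Equation 4.328) of a single leaf of length L, width b, thickness h."""
    I = b * h**3 / 12.0
    return 12.0 * E * I / L**3, sigma_max * L**2 / (3.0 * E * h)

E = 70e9                                        # aluminium alloy (assumed)
print(f"theta/M = {notch_compliance(E, 10e-3, 0.5e-3, 2.5e-3):.3f} rad/(N m)")
k, d = leaf_spring(E, b=10e-3, h=0.5e-3, L=50e-3, sigma_max=0.3 * 250e6)
print(f"leaf stiffness {k:.0f} N/m, permissible deflection {d*1e3:.2f} mm")
```

Halving t raises the notch compliance roughly eightfold through the t³ term, which is why the drilling tolerance dominates the design.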
FIGURE 4.278 Two-dimensional ligament hinge movement.
FIGURE 4.280 Angular motion flexible hinge: θmax = 2σmaxL/Et.
FIGURE 4.279 Two-dimensional notch-type hinges.
Such flexures used for two axes have the following stiffnesses [25]:

λ = Et^7/2/(5L²R^1/2) for the simple spring    (4.329)

and

λ = Et^7/2/(10L²R^1/2) for the compound spring.

FIGURE 4.281 Angular crossed hinges: (a) crossed ligament hinge and (b) fabricated rotary hinge.
Joints of two degrees of freedom based on the notch are shown in Figure 4.279.
4.8.6 Angular Motion Flexures

For angles less than about 5° elastic designs may be used. The simplest is the short cantilever. This has an angular stiffness

λθ = √(PEI) cot(L√(P/EI)) compressive,
λθ = √(PEI) coth(L√(P/EI)) tensile.    (4.330)

The maximum angle is

θmax = 2σmaxL/Et.    (4.331)
Simple cantilevers have limited use as pivots because their center of rotation moves (Figure 4.280). This is not nearly as pronounced in the notch hinge, which is why it is used in preference.
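Equation 4.330 is written above with the square roots inferred from dimensional consistency: as P → 0 the stiffness reduces to the familiar cantilever value EI/L, and the compressive form falls to zero at the Euler buckling load. With that reconstruction understood, a short numerical sketch (the strip material and dimensions are assumed values, not from the text):

```python
import numpy as np

def theta_max(sigma_max, L, E, t):
    """Maximum elastic rotation of the hinge, Equation 4.331."""
    return 2.0 * sigma_max * L / (E * t)

def angular_stiffness(P, E, I, L, tensile=False):
    """Angular stiffness under axial load P, Equation 4.330 as
    reconstructed above: sqrt(P*E*I) * cot(h)(L * sqrt(P/(E*I)))."""
    k = np.sqrt(P / (E * I))
    return np.sqrt(P * E * I) * (1.0 / np.tanh(k * L) if tensile
                                 else 1.0 / np.tan(k * L))

E, t, b, L = 200e9, 0.2e-3, 10e-3, 20e-3   # steel strip, assumed dimensions
I = b * t**3 / 12.0
print(f"theta_max ~ {np.degrees(theta_max(0.3 * 400e6, L, E, t)):.1f} deg")
print(f"stiffness, P = 1 N tensile: {angular_stiffness(1.0, E, I, L, True):.4f} N m/rad")
print(f"stiffness, P -> 0 limit EI/L: {E * I / L:.4f} N m/rad")
```

Note that tension stiffens the pivot slightly above the unloaded value while compression softens it, which matters when the flexure also carries the measuring force.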
FIGURE 4.282 Cruciform angular hinge.
A common angular hinge is the crossed-strip hinge and its monolith equivalent (Figure 4.281). The other sort of angle system, conceived by Jones, is the cruciform angle hinge shown in Figure 4.282. Summarizing the characteristics of flexure hinges:

Advantages:
1. They do not wear—the only possibility is fatigue
2. Displacements are smooth and continuous
3. Displacements can be predicted
4. No hysteresis

Disadvantages:

1. Small displacements require large mechanisms
2. They cannot tolerate high loads in compression, otherwise buckling occurs
3. The stiffness in the line of action of the applied force is high and the cross-axial stiffnesses out of the plane of movement are relatively low
4. They can have hysteresis losses
FIGURE 4.283 Caliper principle.
Nevertheless they have considerable use in instrument design.
4.8.7 Force and Measurement Loops

The notion of measurement loops and force loops is fundamental in the design of instruments for surface metrology. The force loop is a direct consequence of applying Newton's third law successively to different parts of a load-carrying static structure. For equilibrium, balanced forces must follow a continuous closed path, often involving the ground (mechanical earth). Also, metrology loops come as a direct consequence of linking the object to be measured via a detector to the reference with which it is being compared (Figure 4.283). The caliper principle applies equally to the deflection of the cantilever beam in SPM systems, i.e., the AFM, as it does to the standard stylus system for conventional surface metrology: the angular deviation can be at the end of the cantilever as well as at a pivot in a transducer. Because the force loop is necessarily under load it is therefore strained, so the dimensional stability is improved if it is small. Similarly the metrology loop is better if smaller, because it is less likely to have a heat source or vibration source within it. Also, the smaller the loop the more likely it is that external heat sources, vibration sources, or loads act across the whole of the loop; it is only when these act on part of the loop that instability and other errors occur. The rules concerning loops are therefore:
1. Any changes that occur to a part or parts within a measurement loop, but not all of it, will result in errors that are not distinguishable from the measurement.
2. Force loops should be kept as far as possible away from measurement loops and, if possible, the force loop should not be allowed to cross into or out of a measurement loop.
So a primary consideration in any design is to determine where the metrology and force loops interact and, if possible, to separate them. Usually the best that can be achieved is that they are coincident in just one branch of the measurement loop. This makes sense because any strain in a metrology loop will cause a distortion which is an error, even if small.
FIGURE 4.284 Metrology loop.
Figure 4.284 shows factors affecting the typical metrology loop.

4.8.7.1 Metrology Loop

4.8.7.1.1 Introduction

There are a number of major problems to be overcome:
1. Design problems
2. Improving accuracy
3. Set-up problems and usage
4. Calibration and traceability
The basic questions that need to be addressed in any instrument or machine tool design are as follows, with particular reference to the metrology and force loops.
1. Is it correct according to kinematic theory, that is, does it have the required number of constraints to ensure that the degrees of freedom conform to the movements required?
2. Where are the metrology and force loops? Do they interact? Are they shared and, if so, how?
3. Where are the heat and vibration sources and, if present, how can they influence performance?
4. Is the design symmetrical and are the alignment principles obeyed?
5. What other sources of error are present and can they be reduced by means of compensation or nulling?
4.8.7.1.2 Metrology Loop Properties

The measurement loop must be kept independent of the specimen size if possible. In Figure 4.285 it is not; however, Figure 4.286 shows how it can be done. In Figure 4.285a and b the measurement loop is determined in size by the length of the specimen. Clearly it is beneficial from the point of view of stability to have the loop small. This can sometimes curtail the ease of measurement, but the problem can be resolved in a completely different way, as shown in Figure 4.286.

4.8.7.1.3 Solution

In the above solution the measurement loop is independent of the size of the specimen. The measurement loop, and also the force loop which moves the carriage etc., should be kept small. This is easy to see if the effective stiffness of a cantilever, representing the opened-up metrology loop, is considered. The small loop is effectively stiffer: the same force F produces a much smaller deflection (Figure 4.287); e.g., consider a cantilever with a force on the end and the same force in the middle. Small loops are usually beneficial (Figure 4.288), but sometimes make the operation of the instrument more difficult. Thin elements in the loop should be short, and the cross-sectional shape should be hollow if possible, like a tube.
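The stiffness argument can be put in numbers with the standard cantilever end-deflection formula δ = FL³/3EI, treating the opened-up metrology loop of Figure 4.287 as a cantilever of length L. The section and force below are arbitrary illustrative values:

```python
def tip_deflection(F, L, E, I):
    """End deflection of a cantilever: delta = F * L**3 / (3 * E * I)."""
    return F * L**3 / (3.0 * E * I)

E = 200e9                       # steel (assumed)
I = 0.02 * 0.02**3 / 12.0       # 20 mm x 20 mm solid section, I = b*h^3/12
for L in (0.05, 0.10, 0.20):    # opened-up loop lengths in meters
    d = tip_deflection(1.0, L, E, I)
    print(f"L = {L:4.2f} m -> deflection {d*1e9:7.1f} nm per newton")
```

The cubic dependence on L is the whole point: doubling the loop length costs nearly an order of magnitude in stiffness.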
FIGURE 4.285 Measurement loop with respect to workpiece: (a) big measurement loop, exposed to draughts, temperature, differential deflection, and vibration; (b) very small measurement loop.

FIGURE 4.286 Inverted instrument: the table supports the upside-down specimen, giving a small measurement loop.

FIGURE 4.287 Linear movement tube: the loop opened up behaves as a bent cantilever; the same force F produces a large deflection on the large loop and a small deflection on the small loop.

FIGURE 4.288 Small metrology loop advantages: small and large loops compared for the same deflection.
Another point is that, if more than one transducer is in the metrology loop or touching it, then the transducers should, if possible, be the same. Such a situation arises if auxiliary transducers are used to monitor the change of shape of the main metrology loop, e.g., in scanning capacitative microscopes.
4.8.7.1.4 Other Effects
The larger the loop is, the more likely it is to be influenced by effects within the loop, e.g., differentially by heat, whereas with a small loop all of the loop heats up, so the shape does not change; similarly with vibration (Figure 4.289).

Important Note: Surface metrology has one big advantage over dimensional metrology. This is that it does not matter if the size of the metrology loop increases as long as the shape is preserved (i.e., as long as the two arms of the caliper are in the same proportion). Consider the skid stylus system (Figure 4.290).

4.8.7.1.5 Likelihood of Source in Loop
The smaller the loop, the less likely it is that heat or noise sources will be within it.

4.8.7.1.6 Symmetry
If there is a source within the loop it is not serious, providing that it is symmetrically placed relative to the two arms of the “caliper.” The point is that any asymmetry has to be avoided.

4.8.7.1.7 Coordinate System in Instrument
The obvious way to configure a coordinate system for an instrument working in the nanometer region is to use Cartesian coordinates. This is probably always true if the probe is stationary and the workpiece is moved underneath it. However, some of the best results have been achieved with the probe moving and the workpiece stationary. This can be seen in Figure 4.291. The beauty of the arcuate design used for Talystep is that at every point in the arc of movement the accuracy is determined by one point. This point is the integrated position of line 00′. In any translation system the error in movement at any point is with respect to a corresponding point on the reference—no integration is involved. Figure 4.291 shows a closer look at the design. No detailed explanation is necessary.

4.8.8 Instrument Capability Improvement
It is always the case that more is demanded of equipment, people, and procedures than can reasonably be asked. One question often asked is whether, say, an instrument's performance can be improved retrospectively, i.e., after purchase. The quick answer is usually no, because the instrument has been designed to meet the specification and no more. It could be argued that over-design is bad design. However, requests for improvement will continue to be made. One general rule of metrology is that it is not usually possible to make a bad instrument into a good instrument; good design improves the capacity to improve performance. Figure 4.292 shows the obvious fact that all instruments will drop in performance with time. The limiting performance is determined by the precision or repeatability of the system. The stability of the instrument determines the repeatability of measurement. The stability also determines how often the instrument needs to be calibrated (Figure 4.293).
FIGURE 4.289 Attributes of loop size: a large loop has big inertia and a low resonant frequency; a small loop has small inertia and a high resonant frequency.

FIGURE 4.290 Surface metrology loop size change: the shape is the same, so the angle change δθ is the same for both systems even when the loop is twice as big.

FIGURE 4.291 Arcuate movement: (a) plan view showing the arcuate movement of the moving probe over the workpiece about the line 00′; (b) fixed probe with the workpiece on a moving table and base, using a long and stiff ligament hinge.
FIGURE 4.292 Increase in capability still depends on the quality of the original design: from normal precision no capability improvement is possible; from high precision, improvement is possible with good design (precision plotted against lifetime).

FIGURE 4.293 Calibration period: precision falls from the ideal (1.0) over the lifetime in years, and re-calibration is required when it drops from the acceptable region to the re-calibration level.

Example of design philosophy: Figure 4.294 shows what is, at first sight, a good design for a rotary bearing. The load of the quill is taken by the male and female hemispheres, A and B. The resultant reaction acts through the point 0. Figure 4.295 shows a better alternative. The position of the hemispheres has been altered (i.e., B and A reversed), and the dynamic stability of the bearing has been improved.

Having established that it is only possible to improve accuracy on an already good instrument, the question is how to determine the systematic or repeatable error of the instrument. Once found, it can be removed from all subsequent measurements. The general philosophy is shown in Figure 4.296. Methods of achieving the estimation of the errors will be given shortly.
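As a minimal sketch of this philosophy (illustrative numbers only; point-by-point averaging of repeated traces is one common way to estimate the repeatable error, in the spirit of Figure 4.296):

    import statistics

    # Repeated traces of the same part: each trace = systematic (repeatable)
    # error + random noise. Averaging many traces estimates the systematic
    # part, which can then be subtracted from later measurements.
    traces = [
        [0.52, 0.31, -0.18, -0.41, 0.11],
        [0.49, 0.33, -0.21, -0.39, 0.08],
        [0.51, 0.29, -0.19, -0.42, 0.10],
    ]

    # Point-by-point mean over the traces -> estimate of the systematic error
    systematic = [statistics.mean(col) for col in zip(*traces)]

    # A new measurement is corrected by subtracting the stored systematic
    # error; what remains is limited by the precision (spread) of the system.
    new_trace = [0.55, 0.30, -0.20, -0.40, 0.12]
    corrected = [m - s for m, s in zip(new_trace, systematic)]
    print(corrected)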
4.8.9 Alignment Errors
If a measurement of distance is required, it is fundamental that the measuring system is parallel to the axis along which the displacement is required, and preferably collinear with it. There is one basic concept that can be applied to loops of force or measurement and this is symmetry (or balance, or matching). If opposite sides of a loop can be made symmetrical in material and in size then the chances are that it will be a good design. If a heat source or vibration source has to be in a measurement loop, then arranging that the paths to the transducer are symmetrical will ensure that no differences are seen. Hence, heat sources or vibration sources should be placed on an axis of symmetry if possible. This would suggest that particularly good results will be obtained if the Abbé axis lies on an axis of symmetry of the stress distribution (e.g., the balance). This is illustrated in Figure 4.297. Thus what is measured is d′ not d:
d′ − d = d′(1 − cos θ). (4.332)
This is called the cosine error and it is concerned with “angular mismatching” of the object to the measuring system.
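To fix orders of magnitude, a small-angle expansion of Equation 4.332 is useful (a standard approximation with illustrative numbers):

    % For small misalignment angles, 1 - \cos\theta \approx \theta^{2}/2, so
    d' - d = d'(1 - \cos\theta) \approx \tfrac{1}{2}\, d'\,\theta^{2}.
    % Example: d' = 10 mm and \theta = 1 mrad give
    % d' - d \approx 0.5 \times 10^{-2}\,\mathrm{m} \times 10^{-6} = 5\,\mathrm{nm},
    % i.e., the cosine error is second order in the misalignment angle.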
FIGURE 4.294 Bad design: plain bearing and quill, spindle with thrust bearing at 0, hemispheres A and B, workpiece on the table; the length of stability a1 and shear force moment b1 at the pick-up give a leverage of b1/a1 against the top bearing.
FIGURE 4.295 Good design: the hemisphere positions are reversed (B above A at 0) with a reversed thrust bearing; the length of stability a1 and shear force moment b1 at the pick-up again give a leverage of b1/a1 against the top bearing, but the dynamic stability is improved.
FIGURE 4.296 Replacing the accuracy value of the spindle by its precision: the departure from true roundness is the old accuracy and the spread in the traces is the precision; once the trace of systematic error is removed, the new accuracy ≡ the old precision. Typical improvement is between 5:1 and 10:1.
FIGURE 4.297 Cosine error: a micrometer gauge inclined at angle θ to the displacement axis.

4.8.10 Abbé Errors
This is concerned not so much with angular mismatching but with displacement of the object relative to the measuring system. Thus the rule is: when measuring the displacement of a specific point it is not sufficient to have the axis of the probe parallel to the direction of motion; the axis should also be aligned with the point, that is, it should pass through it. In Figure 4.298, if there is a slight bow in the measuring instrument and its axis is misaligned by l, then it will read an error of 2lθ. If l is zero then the same bow in the instrument would cause at most second-order or cosine errors. This sort of error due to the offset is called Abbé offset.
FIGURE 4.298 (a) Abbé alignment error: a length gauge on a precision spindle offset by l at angle θ from the reference point; (b) Abbé error in roundness: sensing points A and B at angle ϕ to the sensitive direction; (c) Abbé error in a Vernier gauge: the sensitive direction through the sensing point.
In its original form, as set out by Abbé in 1890, the alignment error referred to one dimension. However, it is clear that this oversimplifies the situation. There is a need for accuracy of alignment in two and three dimensions, particularly for specific cases such as roundness and form measurement. A number of people have investigated these principles and their attempts can be summarized by Zhang [307] who, working on the modification of the principle to straightness measurement made by Bryan [308], suggested the following definition: “The line connecting the reference point and the sensing point should be in the sensitive direction.” This definition applies to all cases of dimensional measurement, including 1D, 2D, and 3D measurements, straightness and roundness, and even run-out measurements. The form of the error calculation is the same, namely δ = Δl sin ψ, where Δl is the distance between the reference point and the sensing point and ψ is the Abbé angle. Although this new formulation is correct it still has its problems. These are concerned with the identification of the reference point, sensing point, and sensitive direction in complicated systems. Note that the basic idea of the caliper measurement system is maintained. Some cases are shown in Figures 4.298b and c.
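A short numerical comparison makes the first-order nature of the Abbé error, as against the second-order cosine error, explicit (illustrative values only):

    import math

    theta = 1e-3   # rad, angular misalignment (1 mrad, illustrative)
    d = 10e-3      # m, nominal length being measured
    offset = 5e-3  # m, Abbé offset (distance reference point to sensing point)

    # Cosine error (Equation 4.332): second order in the angle
    cosine_error = d * (1 - math.cos(theta))   # ~5e-9 m  (5 nm)

    # Abbé error (delta = offset * sin(psi)): first order in the angle
    abbe_error = offset * math.sin(theta)      # ~5e-6 m  (5 um)

    # For the same 1 mrad misalignment the Abbé term is ~1000x larger,
    # which is why the alignment principle dominates instrument layout.
    print(f"cosine error: {cosine_error:.2e} m, Abbé error: {abbe_error:.2e} m")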
4.8.11 Other Mechanical Considerations
Other design principles for instruments obviously exist but can be difficult to formulate. One is the idea of matching. If the instrument (or machine tool) provides a linear movement then there are three forces involved: the drive force, the frictional force, and the inertial force. Ideally, these should balance along the same axis, otherwise it is inevitable that unwanted moments or torques will result and hence create twist in the system. It is therefore essential to ensure that the centers of inertia and friction are coincident with the drive axis. The center of inertia can be found by calculation when the drive axis is known. What is more difficult to find is the center of frictional forces. This is determined by the position and load carried by the support pads for the specimen or the pick-up (or tool) [304]; a sketch of the calculation is given after this passage.

For example, Figure 4.299 shows two alternative slideway configurations, a and b. It is necessary to ensure that the drive system and the slideway axis are collinear. The link between the drive motion and the slideway containing the probe or specimen should be non-influencing, by having free or self-aligning nuts connecting the two members; then non-collinearity will not impose unwanted forces on the traversing mechanism. Obviously the center of inertia will depend on the size and shape of the specimen, so some provision should be made to ensure that the center of inertia always remains on the same axis, for example by the provision of weights to the carriage.

FIGURE 4.299 Balance of inertial, drive, and friction forces: (a) and (b) alternative slideway configurations.
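Assuming, for illustration, pads that share the carriage weight and a common friction coefficient, the center of friction is simply the load-weighted mean of the pad positions; the pad layout and numbers below are hypothetical:

    # Hypothetical slideway: support pads at positions (x, y) in mm carrying
    # loads W in newtons. With a common friction coefficient, the friction
    # force at each pad is proportional to its load, so the center of
    # friction is the load-weighted centroid of the pad positions.
    pads = [
        # (x_mm, y_mm, load_N)
        (0.0,   0.0, 40.0),
        (100.0, 0.0, 40.0),
        (50.0, 80.0, 20.0),   # third pad lightly loaded by the specimen
    ]

    total = sum(w for _, _, w in pads)
    x_f = sum(x * w for x, _, w in pads) / total
    y_f = sum(y * w for _, y, w in pads) / total

    # The drive axis should pass through (x_f, y_f); any offset produces a
    # moment = friction force x offset, twisting the carriage in traverse.
    print(f"center of friction at ({x_f:.1f} mm, {y_f:.1f} mm)")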
4.8.12 Systematic Errors and Non-Linearities
These always occur in any system and they conflict with the achievement of high accuracy. However, there are strategies which can be used to overcome them. Briefly, two of the most important are nulling and compensation. Nulling is a method well known in the weight balance, for example. A working range which would carry the measurement operation into non-linear regions of the input–output characteristic of the system is avoided by always ensuring that only one working point is used.
Essentially, the reference in the “caliper” system is continuously altered so that it always equals the test value. This is best embodied in the closed-loop “follower” servo system. This is often used in optical scanners where the image of the surface is always kept in focus—the distance between the objective lens and the surface is a constant. Any error of focus is used to move the objective lens back into focus, and the amount of movement is taken as the output signal. The point to note here is that it does not require a calibrated relationship between the out-of-focus signal and the position of the lens, because the out-of-focus displacement characteristic is never needed, except at one functional position.

Very often high accuracy is not achieved because of systematic errors in movements such as slideways. There are techniques that can be used to evaluate such errors and cancel their effect by compensation. Invariably the errors are evaluated by making more than one measurement and in some way changing the sense of the measurement of the test piece relative to the reference between measurements. In its simplest form this is known as “reversal” (see Figure 4.300). Take the case of a test cylinder having unknown straightness errors t(x), being measured with reference to a reference cylinder with unknown errors R(x). On the first traverse the gap between them, as seen by the invariable caliper, is S1(x), where

S1(x) = R(x) + t(x). (4.333)

The test bar is then rotated through 180° and the second measurement taken. Thus

S2(x) = R(x) − t(x).

From these, R(x) and t(x) are found. Subsequently the error of the reference is stored and subtracted from any further measurement of test parts. The compensation method obviously holds if the errors are stable over long periods, but is not much use if they change considerably with time, in which case the assessment of the reference error will have to be made for each measuring process. Other alternatives to the reversal method exist, for example the error indexing method used by Whitehouse [309]. The compensation of clocks by using metals with different thermal coefficients of expansion is well known.

FIGURE 4.300 Error inversion (reversal) method: (a) first traverse, giving S1(x) of the test piece against the reference cylinder; (b) test piece reversed, giving S2(x).

Another way to deal with errors is averaging. As an example, if the result from an experiment is subject to random unknown errors then simply averaging many readings will reduce the uncertainty. This is taken to a limit in the design of bearings. Sometimes the journal, for example, need not be especially round if it is separated from the shaft by a full oil film. The separation between the journal and shaft is an integration of the out-of-roundness over an arc, which is always much better than point-to-point.
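The algebra of the reversal method above separates the two unknowns directly: adding and subtracting the traverses gives R(x) = (S1 + S2)/2 and t(x) = (S1 − S2)/2. A minimal sketch with made-up profile data:

    # Reversal: S1(x) = R(x) + t(x), S2(x) = R(x) - t(x)  (Equation 4.333).
    # Adding/subtracting the two traverses separates reference and test
    # errors. The profiles below are illustrative, not measured data.
    S1 = [0.8, 0.5, -0.1, -0.4, 0.2]   # first traverse
    S2 = [0.2, 0.3, -0.3, -0.2, 0.4]   # traverse after 180 degree reversal

    R = [(a + b) / 2 for a, b in zip(S1, S2)]   # reference error R(x)
    t = [(a - b) / 2 for a, b in zip(S1, S2)]   # test cylinder error t(x)

    # R(x) can now be stored and subtracted from subsequent measurements.
    print("R(x):", R)   # [0.5, 0.4, -0.2, -0.3, 0.3]
    print("t(x):", t)   # [0.3, 0.1, 0.1, -0.1, -0.1]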
4.8.13 Material Selection
This is a very wide-ranging topic and is vitally important in the design of instruments. The criteria for instrument design are not necessarily the same as for other uses. For instance, some of the most used criteria, such as strength-to-weight ratio, are not particularly important in instrument design. In most cases material cost is not critical because the actual amount of specialized material used in any instrument design is relatively low. What do tend to be important are characteristics such as stiffness and thermal properties. There have been many attempts to provide selection rules for materials [310–313] which use property groupings and charts to form a selection methodology. The properties of most importance are taken to be mechanical and thermal. Among mechanical properties a number of possibilities arise depending on the application. One of special note is the ratio E/ρ, because self-weight deflection is minimized by taking a high value of E/ρ. Also, the resonant frequency of a beam of fixed geometry is proportional to (E/ρ)^1/2, and it is usual to aim for high resonant frequencies in an instrument system; here ρ is the density and E is the elastic modulus. There are other ratios, such as y/E for minimum strain or y/ρ for any application involving high accelerations (y is the yield strength). On balance, if one “grouping” is to be chosen then E/ρ seems best. Thermal property groups again are numerous but, for those occasions where expansion should be kept at a minimum, Chetwynd suggests that the grouping α/K should be low if the element is minimally constrained, or αE/K if rigidly clamped. Should there be thermal shock problems it makes sense to consider the behavior of the thermal diffusion equation
∂θ/∂t = (K/Cρ) ∂²θ/∂x². (4.334)
The temporal behavior of the diffusion of heat in the body is obviously governed by the grouping K/Cρ (the diffusivity); for rapid response a high value is preferable. When these three groups, specific stiffness E/ρ, expansion/conduction α/K, and diffusivity K/Cρ, are considered, they represent some attempt to rationalize the material properties for instrument design.
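As a rough guide to what the diffusivity means in practice, the characteristic time for heat to diffuse a distance L follows from Equation 4.334 (an order-of-magnitude estimate; the aluminum values are those of Table 4.6):

    % Characteristic thermal diffusion time over a length L:
    \tau \approx \frac{L^{2}}{D}, \qquad D = \frac{K}{C\rho}.
    % For aluminum (Table 4.6: K = 200 W m^-1 K^-1, C = 913 J kg^-1 K^-1,
    % \rho = 2710 kg m^-3), D \approx 8\times 10^{-5} m^2 s^{-1}, so a
    % 10 mm element equilibrates in
    % \tau \approx (0.01)^{2} / 8\times10^{-5} \approx 1.3 s.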
In such groupings it is usual to plot them on a logarithmic scale, with the values or reciprocals arranged so that high (or low) values are good (bad). Three graphs from which a good idea of material selection is possible are shown in Figure 4.301b, from [311,312]. Since they are plotted as low value/high merit, the best materials lie near the origins of the graphs. These three graphs show that silicon is particularly good as a material, as is silicon carbide. An unexpectedly good one is beryllium—a pity that it is a hazard to machine. More comprehensive groupings have been used in the form of a material profile in which up to 11 properties or groupings are presented in a line (Figure 4.301a). The idea is that a profile which indicates good behavior of the material will lie consistently above the reference line (usually arbitrarily picked). Relative merits of materials are easily seen at a glance from such profiles. But even with 11 groupings the list is by no means exhaustive.
Some typical properties are given in Table 4.6 for some well-known materials; this covers the materials in Figure 4.301.
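The property groupings are trivial to evaluate once the raw properties are tabulated; the sketch below recomputes the three merit groups for the aluminum entry of Table 4.6 (the other materials follow the same pattern):

    # Merit groupings from the text: specific stiffness E/rho (high is
    # good), expansion/conduction alpha/K (low is good), and diffusivity
    # K/(C*rho) (high is good). Aluminum values taken from Table 4.6.
    E     = 71e9     # Pa, modulus of elasticity
    rho   = 2710.0   # kg/m^3, density
    alpha = 23e-6    # 1/K, expansion coefficient
    K     = 200.0    # W/(m K), thermal conductivity
    C     = 913.0    # J/(kg K), specific heat

    specific_stiffness = E / rho          # ~26e6 m^2/s^2, matches E/rho column
    expansion_per_conduction = alpha / K  # ~1.2e-7 m/W, want this low
    diffusivity = K / (C * rho)           # ~80e-6 m^2/s, matches D column

    print(f"E/rho   = {specific_stiffness:.3g} m^2/s^2")
    print(f"a/K     = {expansion_per_conduction:.3g} m/W")
    print(f"K/C.rho = {diffusivity:.3g} m^2/s")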
4.8.14 Noise
This subject has been discussed in great detail by Jones [314,315]. The Brownian movement of internal particles will cause fluctuations in the dimensions of any mechanical object. This effect is very small indeed: Jones has estimated that the RMS length change in a 10 mm brass block of 1 mm² area is likely to be of the order of 10⁻⁶ nm when averaged over a 1 second interval. Other inherent mechanical instabilities are material creep, in which the defects of the structural lattice slowly rearrange under stress to produce dimensional changes.
FIGURE 4.301 (a) Material feature profile: up to 11 property groupings P1, P2, …, presented against a reference line with good, medium, and bad regions. (b) Material property guide: (i) material property map of expansion/conductivity α/K versus inverse diffusivity Cρ/K, (ii) material property map of inverse specific stiffness ρ/E versus inverse diffusivity Cρ/K, and (iii) material property map of expansion/conductivity α/K versus inverse specific stiffness ρ/E. Key: Al, aluminum (duralumin is similar); Al2O3, alumina; Be, beryllium; Bra, 70/30 brass; Bro, 90/10 bronze; CI, cast iron; FQ, fused quartz; Inv, Invar; MS, mild steel; Si, single-crystal silicon; SiC, silicon carbide (reaction bonded); SiN, silicon nitride (reaction bonded); StSt, 18/8 stainless steel; ULE, ultra-low expansion glass (Zerodur); W, tungsten. (Reprinted from Chetwynd, D. G., Prec. Eng., 11, 203–9, 1989. © 1989, with permission from Elsevier.)
TABLE 4.6 Material Properties for Design. Columns: E, modulus of elasticity (GPa); ρ, density (kg m⁻³); α, expansion coefficient (×10⁶ K⁻¹); K, thermal conductivity (W m⁻¹ K⁻¹); β, thermoelastic coefficient (×10⁵); C, specific heat (J kg⁻¹ K⁻¹); D = K/ρC, diffusivity (m² s⁻¹ ×10⁶); E/ρ (m² s⁻² ×10⁻⁶); K/α (W m⁻¹ ×10⁻⁶); and comments. Materials covered: aluminum (for example: E = 71 GPa, ρ = 2710 kg m⁻³, α = 23 ×10⁻⁶ K⁻¹, K = 200 W m⁻¹ K⁻¹, C = 913 J kg⁻¹ K⁻¹, D = 80 ×10⁻⁶ m² s⁻¹, E/ρ = 26 ×10⁶ m² s⁻², K/α = 8.7 ×10⁶ W m⁻¹; cheap, easy to machine), alumina, beryllium copper, brass 70/30, bronze 90/10, copper, diamond, Elinvar/Ni, SpanC(CrFc), fused quartz, fused silica, graphite, Invar (36% Ni–Fe), magnesium, molybdenum, mild steel, spring steel, SiC, SiN, steatite ceramic, Syalon Si ceramic, tungsten, titanium, ULE titanium silica, Zerodur glass, carbon fiber, silicon (pure), beryllium, and wood (pine). The comments note, among other things, that the copper alloys and mild steel are cheap and easy to machine, the ceramics (alumina, SiC, SiN, Syalon) are hard and brittle, beryllium is toxic in powder form, and silicon is hard, brittle, and very pure.
Also, in dynamic applications mechanical hysteresis can give differing strain values for a given stress. Joints are also important and can be a source of instability: on clamped joints, creep and hysteresis have been observed to be significant. Bolted joints appear to be satisfactory for nanometer resolutions but hardly for subnanometer work.

The most significant mechanical effect is thermal expansion [316,317]. Absolute size changes with temperature are in the range of 0.1–20 parts in 10⁶ per kelvin depending on the material. Differential temperature effects may also lead to errors as they produce geometric shape changes in the framework of an instrument. This can be very important in instruments not properly sited. Temperature changes need not be gradual: the effects of the sun sporadically falling onto an instrument can be devastating to its operation, and discontinuous air-conditioning can cause a problem. Even the presence of the human body in the vicinity of an instrument can disturb it, especially at high accuracy. Compensation with electric light bulbs has been used to good effect in the past. A good design aims to reduce thermal effects by the combination of materials of different thermal coefficient arranged in re-entrant configurations or used to provide counter stress. Baird [318] used steel wire wound around an Invar rod: a rise in temperature decreases the circumferential stress provided by the steel on the Invar which, in turn, invokes Poisson's ratio and decreases the length of the Invar (obviously only within a relatively small range).

Another factor is barometric pressure. The magnitude of the effect obviously depends on the elastic modulus and Poisson's ratio for the material. It is of the order of 1 part in 10⁸ for a typical diurnal pressure variation. Constant-pressure environments can if necessary avoid this problem, usually by providing a vacuum.

4.8.14.1 Noise Position
If a noise source such as a gearbox vibration is located, it should be removed or absorbed as close to the source as possible, otherwise the noise can dissipate into the system via many routes, as shown in Figure 4.303.

4.8.14.2 Probe System Possibilities
The size of the mechanical loop between the transducer and the specimen can be critical (Figure 4.302). Using Rayleigh's criterion for the cantilever is a crude rule [319], but the vibrational effects from sources such as motors, gearboxes, drives, etc., can be critical. Presented simply, the situation is as shown in Figures 4.303 and 4.304 (see the section on design earlier). The shorter the distance around the mechanical loop, the more stable the situation. The cantilever approximation is an upper bound in the sense that it overemphasizes the possible departure from the geometry shown in Figure 4.304 as a simple diametral representation. By carefully picking the mechanical configuration the mechanical loop can be closed on the workpiece itself rather than in the instrument.
FIGURE 4.302 Mechanical loop with extraneous noise: the probe and transducer, workpiece, and specimen support all lie within the loop, with mechanical noise entering at the supports.

FIGURE 4.303 Path of noise and vibration: noise should be removed close to its source, before it can reach the instrument output by many routes.

FIGURE 4.304 Reduction of noise by a double-probe system: single- and double-probe systems measuring a stepped part, with the corresponding output profiles; the double probe closes the mechanical loop on the workpiece.
Such an example has been given before in the multi-probe methods of looking at roundness, straightness, etc. Another example is the accurate measurement of step height and surface texture using two probes.

4.8.14.3 Instrument Electrical and Electronic Noise
As an example take a typical instrument for measuring fine surfaces (Talystep) and consider some of its electrical noise characteristics, especially the electrical limitations. This is only meant to be concerned with typical values; better performance in individual cases can always be expected.

4.8.14.3.1 General
The electrical noise is generated in the transducer and the input stage of the electrical amplifier. The transducer is typically of the balanced inductive bridge type.
At the center tap of the transducer the sensitivity is about 1 mV RMS per micrometer, and thus the current sensitivity for supplying current to the input transimpedance is about 0.5 × 10⁻⁶ A RMS µm⁻¹. Inherent noise due to the sine wave excitation can be ignored. Also, any noise sidebands that might cross over from the sine wave driver only have a modulating effect to the extent that the transducer bridge is out of balance.

4.8.14.3.2 Transducer Circuit Shot Noise
The current fluctuation given by the shot effect (Brownian motion) is
In² = 2IaveB (A²), (4.335)
where B is the bandwidth in hertz, assumed here to be 25, e is the electronic charge (1.6 × 10⁻¹⁹ C), and Iav is 0.5 × 10⁻³ A (the carrier current). Only a limited fraction (say, 10%) of the noise in Equation 4.335 gets to the amplifier because of the high mismatching of the pick-off. Also, the bandwidth has to be multiplied by √2 because the transducer system is one of amplitude modulation of a carrier, so Equation 4.335 becomes
In² = (2√2/10)IaveB (A²). (4.336)
Hence (In)RMS = 2.4 × 10⁻¹¹ A.

4.8.14.3.3 Johnson Noise of Transducer Circuit
Because of the relatively low value of the transducer bridge output impedance, the main Johnson noise contribution comes from the coupling resistor and results in

In² = (4kT/R)B (A²), (4.337)

where k is Boltzmann's constant (1.4 × 10⁻²³ J K⁻¹), T is the temperature (300 K), B is the bandwidth allowing for the carrier system, and R is 2.5 kΩ, from which (In)RMS is 1.5 × 10⁻¹¹ A.

4.8.14.3.4 Amplifier Noise
This originates mostly at the input and is expressed as low-frequency reciprocal (“popcorn”) noise, which is mostly removed by a suitable choice of carrier, and broadband noise, which is virtually related to shot noise. It is often quoted as a voltage of magnitude Vn volts RMS per √Hz bandwidth across the input port. The equivalent amplifier noise output for a typical system is

Vn√B Rf/R, (4.338)

where R is the coupling resistance (2.5 kΩ), Rf is the amplifier feedback resistor, and Vn is typically 10 nV Hz⁻¹/². The amplifier noise current as an RMS value is therefore about 2.4 × 10⁻¹¹ A.

4.8.14.3.5 Total Electrical Noise
The total noise input current is the square root of the sum of the noise powers and yields about 3.7 × 10⁻¹¹ A. However, as the input current for a 1 µm displacement of the sensor is 0.5 × 10⁻⁶ A, the resolution limit in equivalent displacement terms is about 0.2 nm, which is entirely consistent with measured values as shown by Franks who, in fact, claims even lower results.

4.8.14.3.6 Possible Improvements
An obvious improvement is to increase the transducer excitation. This doubles the sensitivity (yet increases the shot noise by √2; see Equation 4.336) but may well increase the heating effect within the transducer. A trade-off between resolution and bandwidth is possible but also doubtful, as the resolution improves only as Hz⁻¹/². It seems therefore that only marginal improvements are likely on the limit quoted above; thus a calibration system that can resolve 0.02 nm should be the aim. The use of capacitative transducers is also a possibility but they suffer more from stray fields and there is less energy change per unit displacement than in inductive devices.
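Putting the three contributions together, the root-sum-of-squares budget can be checked numerically with the figures quoted above (a sketch reproducing the text's arithmetic, not a general noise model):

    import math

    B = 25 * math.sqrt(2)   # Hz, bandwidth including the sqrt(2) carrier factor
    e = 1.6e-19             # C, electronic charge
    I_av = 0.5e-3           # A, carrier current
    R = 2.5e3               # ohm, coupling resistance
    k = 1.4e-23             # J/K, Boltzmann's constant
    T = 300.0               # K, temperature
    Vn = 10e-9              # V/sqrt(Hz), amplifier input noise voltage

    shot = math.sqrt(2 * I_av * e * B / 10)   # Equation 4.336 -> ~2.4e-11 A
    johnson = math.sqrt(4 * k * T * B / R)    # Equation 4.337 -> ~1.5e-11 A
    amplifier = Vn * math.sqrt(B) / R         # from Equation 4.338 -> ~2.4e-11 A

    # Root sum of squares of the noise powers, as in Section 4.8.14.3.5
    total = math.sqrt(shot**2 + johnson**2 + amplifier**2)
    print(f"total noise current = {total:.2e} A RMS")   # ~3.7e-11 A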
4.8.15 Replication
There are situations in which it is difficult to get to the surface in order to measure it. The part may be too large to bring even a portable instrument to it, or the part may be needed for use. In other circumstances the component may change (i.e., due to wear) and some record of the original surface is needed. In such cases a replica is used [320,321] which gives a “negative impression” of the surface geometry. Surface replica materials fall into two categories: one is a chemical which cures when mixed with another, and the other is a thin acetate-type film which is used with a solvent. The requirements for a replica material are:
1. High fidelity between the roughness on the replica and that of the surface being recorded.
2. Replication does not damage the recorded surface.
3. Debris is not left on the recorded surface.
4. The replication process is quick, permanent, and safe.
Consider first the compound method. The first step is to put some kind of release agent on the surface to allow easy removal. This should be minimal so that valleys are not filled. Preferably the release agent coverage should be molecularly thin and should not react with either the replica material or the surface under test.
FIGURE 4.305 Replication method: a shaped replica wall is built around the surface to be measured on the workpiece; the direction of the blow frees the cured replica. (After George, A. F., Proc. Conf. on Metrology and Properties of Surfaces, Leicester, Elsevier, Lausanne, 1979.)

FIGURE 4.306 Fidelity of replica methods: fidelity (up to 1.0) as a function of wavelength for Araldite-type and film-type replicas.
The next step is to build a ring barrier around the area of interest with plasticine. The chemicals, which are typically some kind of Araldite compound, are mixed with the curing agent and then poured within the barrier area and left to cure, sometimes helped by gentle heating. The length of time taken to cure depends on the chemicals and can vary from a few minutes to a few days. To ensure a good, quick release the barrier should be shaped as shown in Figure 4.305; this very simple stratagem, suggested by George [320], saves much time and frustration. For general use he recommends Acrulite, which has a quick curing time of 20 minutes. For cases where form as well as texture is required, Araldite CY219 with a mixture of 40% aluminum powder is recommended, presumably to facilitate heat movement in the exothermic reaction. The problem here is the long curing time, which can extend into a few days. Generally, in practice it has been found that this sort of replication method preserves the longer wavelengths on the surface despite some evidence of bowing when the curing takes place. Some materials that have been used here include Acrulite Microtech A, which is a polymethylmethacrylate resin, Araldite CY219, a casting epoxy resin, glass-fiber resins, etc.

The alternative method, using various thicknesses of film, has certain advantages, one of which is that the films can be made transparent. This makes the evaluation of the roughness possible using a liquid gate and diffraction methods. Typical films are Cellon or acetyl cellulose. These are used in submillimeter thicknesses of about 0.05 mm, 0.1 mm, and 0.2 mm. The solvent, usually acetone or methyl acetate, is smeared over the specimen, and then the film is lightly pressed onto the surface, making sure that any trapped air is removed. Using this technique and applying cross-correlation techniques it is possible to get high fidelity, especially using Cellon; medium-thickness film also performs better than the very thin (0.05 mm). Some problems can occur with elongation when the film is peeled off. In summary, both methods are useful: the former for higher fidelity and the latter for ease of use and for getting a transparent replica. It seems that despite all the high technology of today there is still a need for such techniques (Figure 4.306).
REFERENCES
1. Schmalz G. 1929. Z. VDI 73 144–61. 2. Berndt G. 1924. Die Oberflachenbeschaffen heiger bei verschiedenen Bearbeitungs methoden Loewe. Notizen 9 26. 3. Andrews P. 1928. The contouring of smooth surfaces. J. Sci. Instrum. V 209. 4. Harrison R E W. 1931. A survey of surface quality standards and tolerance costs based on 1929–30 precision grinding practice. Annual Meeting ASME 1930, Trans. ASME 53 25. 5. Abbott J and Firestone A. 1933. A new profilograph measures roughness of finely finished and ground surfaces. Autom. Ind. 204. 6. Clayton D. 1935. An apparatus for the measurement of roughness. Mech. Eng. 27 321. 7. Zeiss C. 1934. Methods and instruments to illuminate surfaces. Jena Mach. 78 502. 8. Linnik W. 1930. Ein Apparat zur Messung von Verschiebungen in der Sehrichtung. Z. Instrum. 50 192. 9. Perthen J. 1936. Ein neues Verfahren zum Messen der Oberflachengute durch die Kapazitat eines Kondensators. Maschinenbau Betr. 15 669. 10. Nicolau P. 1939. Quelque recent progrès de la microgeometrie des surfaces usinées et de l'integration pneumatique des rugosités superficielles. Mecanique 23 152. 11. Von Weingraber H. 1942. Pneumatische Mess und Prufgerate. Maschinenbau 21 505. 12. Schlesinger G. 1940. Surface finish. Machinery 55 721. 13. Schlesinger G. 1942. Research on surface finish. J. Inst. Prod. Eng. 29 343. 14. Schlesinger G. 1942. Surface finish. Rep. Res. Inst. Prod. Eng. (London). 15. Guild A. 1940. An optical smoothness meter for evaluating the finish of metals. J. Sci. Instrum. 17 178. 16. Tuplin V. 1942. Autocollimator test for flatness. Machinery 61 729. 17. Tolansky S. 1947. Application of new precision interference methods to the study of topography of crystal and metal surfaces. Vertrag Tag. Dsche Phys. Ges. 5–7, Gottingen. 18. Jost P. 1944. A critical survey of surface finish parameters. Machinery 66 690. 19. Timms C and Schole S. 1951. Surface finish measurement. Metal Treatment 18 450. 20. Timms C. 1945. Measurement of surface waviness. Proc. IMechE 153 337. 21. Ransome M. 1931. Standards for surface finish. Am. Mach. 74 581. 22. Peale J. 1931. Standardisation of machined finishes. Mech. Eng. 53 723.
23. Bodart E. 1937. Les etats de surfaces standards HI, 1–18. 24. Broadston J A. 1944. Standards for surface quality and machine finish description. Prod. Eng. 627. 25. Agullo J B and Pages-Fita. 1970. Performance analysis of the stylus technique of surface roughness assessment—a random field approach. Proc. 15th MTDR Conf. 349. 26. Whitehouse D J. 1979. A theoretical investigation of stylus integration. Ann. CIRP 23 181. 27. Radhakrishnan V. 1970. Effect of stylus radius on the roughness values measured with a stylus instrument. Wear 16 325–35. 28. Walton J. 1961. Gramophone record deformation. Wireless World 7 353. 29. Church E. 1978. Proc. Opt. Soc. Am. (San Francisco). 30. Scarr A J and Spence A R. 1977. Microtechnic XXII 130. 31. Whitehouse D J. 2000. Stylus damage prevention index. Proc. Inst. Mech. Engrs. 214 975–80. 32. Frampton R C. 1974. A theoretical study of the dynamics of pick ups for the measurement of surface finish and roundness. Tech. Rep. T55, Rank Organisation. 33. Morrison E. 1996. The development of a prototype high speed stylus profilometer and its application to rapid 3D surface measurement. Nanotechnology 37–42. 34. Whitehouse D J. 1988. A revised philosophy of surface measuring systems. Proc. IMechE. J. Mech. Eng. Sci. 202 169. 35. Liu X and Gao F. 2004. A novel multi-function tribological probe microscope for mapping surface properties. Meas. Sci. Technol. 15 91–102. 36. Hunt P and Whitehouse D J. 1972. Talystep for multiphysical measurement of plasticity index. Rank Taylor Hobson Technical report no 12. 37. Whitehouse D J. 1994. Handbook of Surface Metrology (Bristol: Inst. of Physics). 38. Taylor Hobson EU pat 0586454B1. 39. Peters J, Vanherck P and Sastrodsnoto M. 1978. Analysis of the kinematics of stylus motion. Annals CIRP Aacis Reference doc R29 Sept. 40. Reason R E, Hopkins and Garrott. 1944. Rep. Rank Organisation. 41. Radhakrishnan V and Shunmugam M S. 1974. Computation of 3D envelope for roundness. Int. J. Mech. Tool. Des. Res. 14 211–6. 42. Hunter A G M and Smith E A. 1980. Measurement of surface roughness. Wear 59 383–6. 43. Whitehouse D J. 1982. Assessment errors of finishing processes caused by skid distortion. J. Phys. E: Sci. Instrum. 15 1337. 44. Von Weingraber H. 1956. Zur Definition der oberflachen Rauheit. Werkstattstech. Maschonenbau. 45. Rubert. 1967/1968. Proc. IMechE 182 350. 46. Fugelso M and Wu S M. 1977. Digital oscillating stylus profile measuring device. Int. J. Mach. Tool. Des. Res. 17 191. 47. Deutschke S I, Wu S M and Stralkowski C M. 1973. A new irregular surface measuring system. Int. J. Mach. Tool. Des. Res. 13 29–42. 48. Williamson J B P. 1967/1968. The microtopography of solid surfaces. Proc. IMechE 182. 49. Sayles R C and Thomas T R. 1976. Mapping a small area of a surface. J. Phys. E: Sci. Instrum. 9 855. 50. Tsukada T and Sasajima K. 1981. 3D measuring technique for surface asperities. Wear 71 1–14. 51. Idrus N. 1981. Integrated digital system for 3D surface scanning. Precis. Eng. 37. 52. Whitehouse D J and Phillips M J. 1985. Sampling in a 2D plane. J. Phys. A: Math. Gen. 18 2465–77.
53. Li M, Phillips M J and Whitehouse D J. 1989. Extension of two dimensional sampling theory. J. Phys. A: Math. Gen. 22 5053–63. 54. Baker L R and Singh J. 1985. Comparison of visibility of standard scratches. Proc. SPIE 525 64–8. 55. Synge E H. 1928. Phil. Mag. 6 356. 56. Synge E H. 1932. Phil. Mag. 13 297. 57. Wickramasinhe H K. 1992. Scanned probe microscopy. In Proc. AIP Conference, 241, Ed. Wickramasinghe, Santa Barbara, CA. 58. O'Keefe Report, US national survey office. 1953. 59. Ash E A and Nicholls G. 1972. Nature 237 510. 60. Binnig G, Quate C F, Gerber Ch and Weibel. 1982. Surface studies by scanning tunnelling microscopy. Phys. Rev. Lett. 49 57–61. 61. Binnig G, Quate C F and Gerber Ch. 1986. Atomic force microscope. Phys. Rev. Lett. 56 930–3. 62. Tomlinson P. 1919. NPL Research Report (UK: National Physical Laboratory). 63. Ling F. Private communication. 1965. 64. Young R, Ward J and Scire F. 1972. The topographiner: An instrument for measuring surface topography. Rev. Sci. Inst. 43 999–1011. 65. Talystep. Instrument specification 1972. (Leicester, UK: Taylor Hobson). 66. Dupuy O. 1967/1968. High precision optical profilometer for the study of micro-geometrical surface defects. Proc. Inst. Mech. Engrs. 180(Pt 3K) 255–9. 67. Vorburger T V, Dagata J A, Wilkening G and Lizuka K. 1997. Industrial uses of STM and AFM. CIRP Annals 46(2) 597–620. 68. Sullivan N T. 1995. Current and future applications of in-line AFM for semiconductor wafer processing. 2nd Workshop on ASPM, Gaithersburg, MD. NISTIR 5752 NIST. 69. Windt D L, Waskiewilz W K and Griffithe J E. 1994. Surface finish requirements for soft x-ray mirrors. Appl. Optics 33 2025. 70. Jungles J and Whitehouse D J. 1970. An investigation into the shape and dimensions of some diamond styli. Inst. Phys. E. Sci. Instr. 3 437–40. 71. Wang W L and Whitehouse D J. 1995. Application of neural networks to the reconstruction of SPM images by finite tips. Nanotechnology 6 45–51. 72. Musselman I H, Peterson P A and Russell P E. 1990. Fabrication of tips with controlled geometry for scanning tunneling microscopy. Prec. Eng. 12 3–6. 73. Stevens R M D, Frederick N A, Smith B L, Morse D E, Stucky G D and Hansma P K. 2000. Nanotechnology 11 1–5. 74. Nguyen C V et al. 2001. Carbon nanotube tip probes, stability and lateral resolution in scanning probe microscopy and application to surface science in semi conductors. Nanotechnology 12 363–7. 75. Tie L et al. 1998. Science 280 1253–6. 76. Haffner J H, Cheung C L and Leiber C M. 1999. Nature 398 701. 77. Wong E W et al. 1997. Science 277 1971–5. 78. Walter D A et al. 1999. Appl. Phys. Lett. 74 3803–5. 79. Kislov V, Kolesov I, Taranov J and Saskovets A. 1997. Mechanical features of the SPM microprobe and nanoscale mass detector. Nanotechnology 8 126–31. 80. Oh Tsu M. 1998. Near Field/Atom Optics and Technology (Tokyo: Springer). 81. Pilevar S, Edinger F, Atia W, Smolyaninov I and Davis C. Appl. Phys. Lett. 72 3133–5.
82. Lambelet P, Sayah A, Pfeffer M, Phillipona C and MarquisWeible F. 1998. Chemically etched fibre tips for near field optical microscopy, a process for smoother tips. Appl. Optics 37 7289–92. 83. Peterson C A et al. 1998. Nanotechnology 9 331–8. 84. Butt H J and Jaschke M. 1995. Calculation of thermal noise in atomic force microscopy. Nanotechnology 6 1–7. 85. Marti O et al. 1990. Topography and friction measurement on mica. Nanotechnology 2 141. 86. Burnham N A. et al. 1997. How does a tip tap? Nanotechnology 8 67–75. 87. Gibson C T, Watson G S and Myhra S 1996. Determination of H spring constant of probes for force microscopy/spectroscopy. Nanotechnology 7 259–2. 88. Xu Y and Smith S T. 1995. Determination of squeeze film damping in capacetance based cantiveler force probes. Prec. Engrng 17 94–100. 89. Whitehouse D J. 1990. Dynamic aspects of scanning surface instruments and microscopes. Nanotechnology 1 93–102. 90. Xu Y, Smith S T and Atherton P D. 1995. A metrological scanning force microscope’s Prec. Eng. 19 46–55. 91. Stevens R M D. 2000. Carbon nanotubes as probes for atomic force microscopes. Nanotechnology 11 1–7. 92. Thomson W T. 1988. Theory of Vibrations with Applications (London: Unwin Hyman). 93. Tersoff J and Hamann D R. 1983. Theory and applications for the scanning tunnelling microscope. Phys. Rev. Lett. 50 p 1998. 94. Heyde M et al. 2006. Frequency modulation atomic force microscopy on MgO(111) thin films. Interpretation of atomic image resolution and distance dependence of tip-sample interaction. Nanotechnology 17 S101–6. 95. Sader J E and Jarvis S P. 2004. Accurate formulas for interaction force and energy in frequency modulation force spectroscopy. Appl. Phys. Lett. 84 1801–3. 96. Giessibl J. 1997. Force and frequency shifts in atomic resolution dynamic force microscopy. Phys. Rev. B 56 16010–7. 97. Isra[elachvili J N. 1992. Intermolecular and Surface Forces, 2nd ed (London: Academic Press). 98. Gugglsberg M, Bammerlin M, Loppacher C et al. 2000. Phys. Rev. Lett. B 61 11151. 99. Barth C, Pakarinen O H, Foster A Sand Henry C R. 2006. Nanotechnology 17 s128–36. 100. Meli F. 2005. Lateral and vertical diameter measurement on polymer particles with a metrological AFM. In Nano-scale Calibration Standards and Methods, Ed. G. Wilkenen and L. Koenders, 361–74. (German: Wiley-VCH Weinheim). 101. Sark R W. 2004. Spectroscopy of higher harmonics in dynamic atomic force microscopy. Nanotechnology 15 347–51. 102. Derjaguin B V, Muller V M and Toporov Y P. 1975. Effect of contact deformation on the adhesion of particles. J. Colloid Interface Sci. 53 31–26. 103. Trevethan T, Kantorovich L, Polesel-Maris J and Gauthiers. 2007. Is atomic scale dissipation in NC-AFM real?: InvestiÂ� gation using virtual atomic force microscopy. Nanotechnology 18 084017 (7pp). 104. Gauthier M and Tsukada M. 1999. Phys. Rev. B 60 111716. 105. Trvethan T and Kantorvich L. 2006. Nanotechnology 17 S205. 106. Duig U. 1999. Surface Interface Anal 274 67–73. 107. Pfeiffer O, Nony L, Bennewitz R, Baratoff A and Meyer E. 2004. Distance dependence of force and dissipation in non contact atomic force microscopy on Cu(100) and Al (111). Nanotechnology 15 S101–7.
108. Giessibl F I et al. 2006. Stability considerations and implementations of cantilevers allowing dynamic force microscopy with optimal solutions: The q plus sensor. Nanotechnology 15 S79–86. 109. Giessibl F I. 2000. Physical interpretation of frequency-modulation atomic force microscopy. Appl. Phys. Lett. 76 1470–2. 110. Tabor D and Winterton R S H. 1969. The direct measurement of normal and retarded van der Waals forces. Proc. Roy. Soc. A312 935. 111. Yacoob A, Konders L and Wolff H. 2007. An atomic force microscope for the study of the effects of tip-sample interactions on dimensional metrology. Nanotechnology 18 350–9. 112. Young R D, Vorburger T V and Teague E C. 1980. In process and on line measurement of surface finish. Ann. CIRP 29 435 113. Young R D. 1973. Light techniques for the optical measurement of surface roughness. NBS IR 73–219 (Gaithersburg, MD: National Bureau of Standards). 114. Church E. 1978. Proc. Opt. Soc. Am. (San Francisco). 115. Whitehouse D J. 1996. Selected Papers on Optical Methods in Surface Metrology. SPIE Milestone Series No 129. Many reviews have been made of the various optical techniques [114, 115]. 116. Dupuy. 1967/1968. Proc IMechE 180(pt 3k). 117. Granger E M. 1983. Wavefront measurement from a knife edge test. Proc SPIE 429 174. 118. Whitehouse D J. 1983. Optical coding of focus positioning—Hadamard method. VNIIMS Conf Surface Metrology (Moscow) paper E2. 119. Simon J. 1970. Appl. Optics 9 2337. 120. Sawatari T and Zipin R B. 1979. Optical profile transducer. Opt. Eng. 18(2) 222–22. 121. Tolansky S. 1948. Multiple Beam Interferometry of Surfaces and Films (Oxford: Oxford University Press). 122. Milano E and Rasello F. 1981. An optical method for on line evaluation of machined surface quality option. ActaW 111–23. 123. Bennett H E Bennet J M and Stanford J L. 1973. Surface corregulanties and scattering. Proc Infrared Symposium, Huntsville, Alabama, January 1973. 124. Agilent. 2004. www.agilent.com 125. Hamilton D K and Wilson T. 1982. 3D surface measurement using confocal scanning microscopes. J. Appl. Phys. B 27 211. 126. Bennett S D, Lindow J T and Smith I R. 1984. Integrated circuit metrology using confocal optical microscopy. R. Soc. Conf. on NDT, July. 127. Sheppard C J R and Wilson T. 1978. Opt. Lett. 3 115. 128. Young J Z and Roberts T. 1951. The flying spot microscope. Nature 167 231. 129. Anamalay R V, Kirk T B and Pahzera D. 1995. Numerical descriptions for the analysis of wear surfaces using laser scanning confocal microscopy. Wear 181–3 771–6. 130. Zhang G X et al. 2004. A confocal probe based on time difference measurement. Ann. CIRP 53/1 417–20. 131. Tiftikci K A. 2005. Development and verification of a micromirror based high accuracy confocal microscope. Ph.D. Thesis, Eindhoven University of Technology. 132. Pohl D W, Deuk W and Lanz M. 1984. Optical stethoscopy: Image recording with resolution/20. Appl. Phys. Lett. 44 651. 133. Pohl D W and Courjon D. 1993. Near Field Optics (Amsterdam: Kluwer). 134. Mayes et al. 2004. Phase images in the near field. Ann. CIRP 53/1 483–6.
135. Hudlet S et al. 2004. Opt. Comm. 230 245–51. 136. Huang F M, Culfaz F, Festy F and Richards D. 2007. The effect of the surface water layer on the optical signal in apertureless scanning near field optical microscopy. Nanotechnology 18 015501 (5pp). 137. Esteban R, Vogelgesang R and Klein K. 2006. Simulation of optical near field and far fields of dielectric apertureless scanning probes. Nanotechnology 17 475–82. 138. Yin C, Lin D, Lui Z and Jiang X. 2005. New advances in confocal microscopy. Meas. Sci. Technol. 17 596. 139. Mitsuik Sato. 1978. H frequency characteristics of the cutting process. Am. CIRP 27 67. 140. Bottomley S C. 1967. Hilger J. XI(1). 141. Remplir device—Opto electronics, Stenkullen, Sweden. 142. Michelson A A. 1890. Philos. Mag. 5 30–1. 143. Gehreke G. 1906. Anwendung der Interferenzen. p 120. 144. Jenkins F A and White H E. 1951. Fundamentals of Optics (New York: McGraw-Hill), p 328. 145. de Groot P. 2000. Diffractive grazing incidence interferometer. App. Opt. 39(10) 1527–30. 146. Sommargren G E. 1981. Appl. Opt. 20 610. 147. Sommargren G E. 1981. Precis. Engl. 131. 148. Huang C C. 1983. Proc. SPIE 429 65. 149. Wade H. 1967. Oblique incident micro interferometer. Bull. Jpn Soc. Precis. Eng. 4 234. 150. Chapman G D. 1974. Beam path length multiplication. Appl. Opt. 13 679. 151. Tanimura Y. 1983. CIRP 32 449. 152. Wyant J C, Koliopoulos C L, Bushan B and George O E. 1983. An optical profilometer for surface characterisation of magnetic media. 38th Meeting ASLE, print 83-AM 6A-1. 153. Wyant J C. 1975. Appl. Opt. 11 2622. 154. Breitmeier U. 1990. Optical follower optimized. UBM GmbH. 155. Fanyk, Struik K G, Mulders P C and Veizel C H F. 1997. Stitching interferometry for the measurement of aspheric surfaces. Ann. CIRP 46 459. 156. de Groot P and de Lega X C. 2008. Transparent film profiling and analysis by interference. In Interferometry XIV: Applications, Ed. E L Novak et al. Proc. SPIE 7064, 706401. 157. Mansfield D. 2006. The distorted helix: Thin film extraction from scanning white light interferometer. Proc. SPIE 6186 paper 23. 158. de Lega X C and de Groot P. 2009. Interferometer with multiple modes of operation for determining characteristics of an object space. US Pat. Appl. 7616323. 159. de Lega X C and de Groot P. 2008. Characterization of materials and film stacks for accurate surface topography measurement using a white light optical profilometer. Proc. SPIE 6995 paper number 1. 160. de Groot P, de Lega X C, Leisener J and Darwin M. 2008. Metrology of optically unresolved features using interferometric surface profiling and RCWA modelling. O.S.A. Optics Express 3970 16(6). 161. de Groot P and Deck L. 1995. Surface profiling by analysis of white light interferograms in the spatial frequency domain. J. Mod. Opt. 42 389–401. 162. Moharam M G, Grann E B and Pommet D A. 1995. Formulation for stable and efficient implementation of the rigorous coupled wave analysis of binary gratings. J. Opt. Soc. Am. A 12 1068–76.
163. Raymond C J. 2001. Scatterometry for semiconductor metrology. In Handbook of Silicon Semiconductor Metrology, Ed. A J Deibold (New York: Marcel Dekker). 164. Balasubramanian N. 1982. Optical system for surface topography measurement. US patent 4303061982. 165. de Groot P, de Lega X C, Kramer J and Turzhitsky M. 2002. Determination of fringe order in white light interference microscopy. Appl. Opt. 41(22) 4571–8. 166. de Groot P. 2001. Unusual techniques for absolute distance measurement. Opt. Eng. 40 28–32. 167. de Groot P. 1991. Three color laser diode interferometer. Appl. Opt. 30(25) 3612–6. 168. de Groot P and McGarvey J. 1996. US patent 5493394. 169. Hodgkinson I J. 1970. J. Phys. E: Sci. Instrum. 3 300. 170. Bennett H E and Bennet J M. 1967. Physics of Thin Films Vol 4 (New York: Academic), pp 1–96. 171. Church E L, Vorburger T V and Wyant J C. 1985. Proc. SPIE 508–13. 172. Williams E W. 1954. Applications of Interferometry (London: Methuen), p 61. 173. Klein M V. 1990. Optics (New York: John Wiley). 174. King R J, Downes M J, Clapham P B, Raine K W and Talm S P. 1972. J. Phys. E: Sci. Instrum. 5 449. 175. Weinheim V C H, Wagner E, Dandliker R and Spenner K. 1992. Sensors, Vol 6 (Weinheim: Springer). 176. Hofler H and Seib M. 1992. Sensors, Vol 6 (Weinheim: Springer), p 570. 177. Hofler H and Seib M. 1992. Sensors, Vol 6 (Weinheim: Springer), p 574. 178. Gabor D. 1948. A new microscope principle. Nature 161 40–98. 179. Gabor D. 1948. The Electron Microscope (Electronic Engineering Monograph). 180. Leith E N and Upatnieks J. 1963. J. Opt. Soc. Am. 53 1377. 181. Abramson N H and Bjelkhagen H. 1973. Appl. Opt. 12(12). 182. Brooks R E. 1917. Electronics May. 183. Fryer P A. 1970. Vibration analysis by holography. Rep. Prog. Phys. 3 489. 184. Henderson G. 1971. Applications of holography in industry. Electro. Opt. December. 185. Ennos A E and Archbold E. 1968. Laser Focus (Penwell Corp., Nashua, NH, US), October. 186. Kersch L A. 1971. Materials Evaluation, June. 187. Ribbens W B. 1972. Appl. Opt. 11 4. 188. Ribbens W B. 1974. Surface roughness measurement by two wavelength holographic interferometry. Appl. Opt. 13 10859. 189. Sawyer N B E, See C W, Clark M, Somekh M G and Goh J Y L. 1998. Ultra stable absolute phase common path optical probe based on computer generated holography. Appl. Opt. 37(28) 6716–20. 190. Zeiss, Tropel. 1998. Product information No 60-20-801-d. 191. Burova M and Burov J. 1993. Polarizationally non-sensitive pseudo-holographic computer reconstruction of random rough surface. J. Mod. Opt. 40(11) 2213–9. 192. Sirat G Y. 1992. Conoscopic holography basic principles and physical basis. J. Opt. Soc. Am. A 9(10) 70–83. 193. Lonardo P M, Lucca D A and De Chiffre L. 2002. Emerging trends in surface metrology. Ann. CIRP 51/2 701–23. 194. Balabois J, Caron A and Vienot J. 1969. Appl. Opt. Tech. August. 195. Vander Lugt A. 1968. Opt. Acta 15 1. 196. Abramson N H and Bjelkhagen H. 1973. Appl. Opt. 12(12). 197. Erf R K. 1978. Speckle Metrology (New York: Academic), p 2.
5 Standardization–Traceability–Uncertainty

5.1 INTRODUCTION

Traceability and standardization enable results to be compared nationally and internationally using common units. Failure to ensure that the results of a measurement or experiment conform to a standard is not in itself serious. It may be, for example, that a particular firm is investigating the function of a part it makes, so it does some test experiments. The measurement system may be needed to determine changes which occur during the experiment. It may not be necessary to communicate such results to the outside world, and often, in a proprietary situation, it is not even desirable. Under these conditions all that is required is that the system within the firm is consistent, not necessarily traceable to national or international standards. However, the day of the family company working in isolation is over. Consequently, results have to be compared and instruments have to be verified, often nationally and sometimes internationally, according to mutually acceptable rules. Easily the safest way to do this is to relate all measurements to the international system of units and procedures. Then there is no doubt and no ambiguity. The problem is that it can introduce extra costs because of the need for training and the need to buy in test equipment. There can, however, be no question that in the long term this is the most cost-effective way to proceed. Traceability is defined by the following statement agreed by the International Standards Organization ISO VIM 2004 [1]:

Traceability is the property of the result of a measurement whereby it can be related to stated references, usually national or international standards, through a documented unbroken chain of comparisons all having stated uncertainties.
This statement implies tracking through a chain of procedures, identifying and evaluating uncertainties at each element in the chain and calibrating and documenting right through from the national or international standard to the workshop. So, in this chapter uncertainty due to various types of error and statistical tools to deal with them, calibration of instruments and traceability from primary to workshop levels will be considered. Broadly, they will be in the order mentioned, but as they are to some extent interwoven reference will be made wherever necessary. Some general comments will be included at the beginning. The whole subject of metrology is concerned with identifying and controlling errors and with putting confidence
limits on measured values. In surface metrology there are two basic aspects to be considered: one concerns the errors in the parameters due to the measuring instrument, and the other concerns the intrinsic variability of the parameters due to the surface itself. Both will be discussed here. Because the errors are often random in nature some treatment of the basic statistical tests will be given. This should not be taken as the basis for a student course because the subject is not taken to any depth; it is included merely to bring to the forefront some of the statistical tools that are available to the surface metrologist. To start the chapter the nature of errors will be considered. In what follows, the way in which an instrument is calibrated will be considered, starting at the input and continuing through the system to the display. Obviously not every instrument can be considered, so as a starting point the stylus method will be used. The messages conveyed will be the same for all instruments; these will be considered when appropriate. Scanning probe microscopes will be included in this chapter. Part of the question of traceability is that of variability: not the fidelity of the average measurement (intended value) of a parameter but the variation that arises when measuring it (whatever it may be). Some investigators think that a systematic error in an instrument can be acceptable but that variability cannot. This is only true if the systematic error is known, or at least known about. The danger is that sometimes it is not.
5.2 NATURE OF ERRORS

This is a subject which is very extensively reported. Basically, errors can be split into two kinds: systematic and random errors. In practice the distinction is usually a question of timescale: over a long period the level of random errors may be predicted while, at the same time, systematic or repeatable errors may change their character. The essential difference is the time within which accurate or traceable readings need to be taken in order to guarantee consistent results.
5.2.1 Systematic Errors

Systematic errors are sometimes called bias and can be caused by imperfect devices or measurement procedures and also by the use of false standards. Not excluded from the definition are the effects of (unwittingly) biased people and the environment. As mentioned above, systematic errors can be measured (if suspected) and can be corrected.
5.2.2 Random Errors

Random errors produce the scatter of readings and are caused by non-constant sources of error either in time or in space. They cannot be eliminated because they have no known or deterministic value, but they can often be minimized by the use of averaging techniques. Other terms sometimes encountered include [2, 3] gross error, which is due to a mistake; additive errors, which are usually zero-point displacements; and multiplicative errors, which are characterized by their property of being multiplicatively superimposed on the measured value and which therefore depend on the numerical value of the measured quantity. These are based on the deviations of the measurement system from its desired value. Absolute measurement errors are defined as the difference between the actual measured value and the true value (which is not known). Relative errors are absolute errors divided by a reference quantity.

5.3 DETERMINISTIC OR SYSTEMATIC ERROR MODEL

The absolute measurement error Δx is the difference between the measured value x and the true value x_s. If the errors are time dependent,

Δx(t) = x(t) − x_s(t). (5.1)

In system theory the transfer property of a system is designated by its transfer function S(p). It is the quotient of the Laplace transform of the output quantity to that of the input. Thus the error-free transfer function of the measurement system is

S(p) = x_o(p)/x_s(p), (5.2)

where p is the Laplace operator, p = σ + jω. The actual output is obtained from

S_F(p) = x(p)/x_s(p). (5.3)

This leads to the concept of the measurement error transfer function, S_F(p):

S_F(p)/S(p) − 1 = Δx(p)/x_s(p). (5.4)

This concept is useful in thinking about the propagation of errors through a system.

5.3.1 Sensitivity

This is the ability of the measuring device to detect small differences in the quantity being measured.

5.3.2 Readability

This is the susceptibility of the measuring device to have its indications converted to a meaningful number.

5.3.3 Calibration

The term is derived from the word caliber, used to describe the size of gun bores. It refers to the disciplines necessary to control measuring systems to assure their functioning within prescribed accuracy objectives.

5.4 BASIC COMPONENTS OF ACCURACY EVALUATION

5.4.1 Factors Affecting the Calibration Standard

1. Traceability—interchangeability
2. Geometric compatibility
3. Thermal properties
4. Stability
5. Elastic properties
6. Position of use

5.4.2 Factors Affecting the Workpiece

1. Geometric truth
2. Related characteristics—surface texture
3. Elastic properties
4. Cleanliness, thermal effects, etc.
5. Definition of what is to be measured

Factors affecting the instrument:

1. Amplification check, inter-range switching
2. Filters
3. Effects of friction, backlash, etc.
4. Contact of workpiece adequate
5. Slideways adequate
6. Readability adequate

5.4.3 Factors Affecting the Person

1. Training
2. Skill
3. Sense of precision
4. Cost appreciation
5. Hygiene—rusty fingers

5.4.4 Factors Affecting the Environment

1. Thermal (20 °C)
2. Sunlight, draughts, cycles of temperature control
3. Manual handling
4. Cleanliness
5. Adequate lighting

Although surface roughness calibration standards are nothing like as formalized as length standards, it is useful to list the hierarchy of standards for any given country and, in particular, length standards.

5.4.5 Classification of Calibration Standards

1. International
2. National
3. National reference standards
4. Working standards and laboratory reference standards (interlaboratory standards)

5.4.6 Calibration Chain

1. Initial procedure
2. Determine and set product tolerance—go/no-go
3. Calibrate product measuring system—calibrate gauges

5.4.7 Time Checks to Carry Out Procedure

1. One to three weeks—calibrate the measuring system used to calibrate the product measuring system
2. One year—refer working standards to laboratory standards
3. One to five years—refer laboratory reference standards to national standards

5.5 BASIC ERROR THEORY FOR A SYSTEM

In general it is the task of the measurement to relate a number of output variables x_o to a number of input variables x_i, and this in the presence of a number of disturbances or noise terms x_r. This is the general format with which error signals will be determined, as shown in Figure 5.1. The measurement can be considered to be a mathematical operation which is not necessarily linear. There are two important parameters [4]:

1. The number of measurement values n that may be obtained in unit time at the output of the measuring instrument. This is related to the time t_e required by the measuring instrument to reach the correct value from a transient input:

n = 1/t_e. (5.5)

According to the Nyquist sampling theorem the following relation exists with the highest frequency f_g indicated by the measuring constraint:

f_g = 1/2t_e. (5.6)

2. The error ε, which is a measure of the difference between the correct undisturbed output variable x_o of an assumed ideal measuring system and the actual output x_o real.

Figure 5.1 System parameters in the presence of noise.

So, according to Equation 5.3, a sensible way to describe errors is to relate the difference of x_o and x_o real. This is shown in Figure 5.2. Thus

ε = x_o − x_o real, (5.7)

so

ε̄² = (1/m) Σ_{i=1}^{m} (x_{oi} − x_{oi real})², (5.8)

to take into account a number m of inputs and outputs. There is a purely geometrical interpretation of this, which has been used to describe errors in instrument systems. However, the purely Euclidean method will not be pursued here.

Figure 5.2 Error system.
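As a simple illustration of Equations 5.6 and 5.8, the following minimal Python sketch computes the mean-square error between an assumed ideal output and the actual output, together with the Nyquist-limited highest frequency for a given settling time. The function names and data values are illustrative assumptions, not part of the original text.

```python
def mean_square_error(x_o_ideal, x_o_real):
    """Mean-square error over m channels, as in Equation 5.8."""
    m = len(x_o_ideal)
    return sum((xi - xr) ** 2 for xi, xr in zip(x_o_ideal, x_o_real)) / m

def highest_frequency(t_e):
    """Nyquist-limited highest frequency f_g for settling time t_e (Equation 5.6)."""
    return 1.0 / (2.0 * t_e)

# Example: ideal versus disturbed transducer outputs (arbitrary units)
ideal = [1.00, 2.00, 3.00, 4.00]
real = [1.02, 1.97, 3.05, 3.96]
print(mean_square_error(ideal, real))  # epsilon-bar squared
print(highest_frequency(t_e=0.01))     # 50 Hz for a 10 ms settling time
```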
5.6 PROPAGATION OF ERRORS

Measuring surface roughness or roundness in themselves rarely involves a knowledge of the propagation of errors because they are in effect "stand-alone" measurements. But there are many parameters in surface metrology which are derived from a number of others, such as cylindricity, squareness, etc. For this reason an elementary guide to error manipulation will be given here.

5.6.1 Deterministic Errors

Consider the superposition of small errors. Take first the case of a single variable y = f(x), say; for a small change in x, δx, y will change by δy, so

δy = δx f′(x)

or

δy/y = δx f′(x)/f(x), (5.9)

where f′(x) is the differential coefficient of f(x). For, say, three variables, Equation 5.9 can be expanded to

δu = (∂f/∂x)δx + (∂f/∂y)δy + (∂f/∂z)δz, (5.10)

where δx, δy, and δz are small. Equation 5.10 is no more than the first term of Taylor's theorem. On the other hand, if the values of δx, δy, δz are not insignificant, Equation 5.10 is not sufficiently accurate and other terms have to be incorporated into the form for δu. Thus for two variables u = f(x, y),

δu = (∂f/∂x)δx + (∂f/∂y)δy
  + (1/2!)[(δx)²(∂²f/∂x²) + 2δxδy(∂²f/∂x∂y) + (δy)²(∂²f/∂y²)]
  + … + (1/m!)[(δx)^m(∂^m f/∂x^m) + mδx^{m−1}δy(∂^m f/∂x^{m−1}∂y) + … + (δy)^m(∂^m f/∂y^m)], (5.11)

written in symbolic notation as

δu = Σ_{i=1}^{∞} (1/i!)(δx ∂/∂x + δy ∂/∂y)^i f. (5.12)

Note that, for large errors, the constituent errors δx, δy, δz are not additive as in Equation 5.10 because of the presence of the cross-terms. It is rarely necessary to go past two terms in the series, so for large errors and three variables u = f(x, y, z),

δu = (∂f/∂x)δx + (∂f/∂y)δy + (∂f/∂z)δz + (1/2!)(δx ∂/∂x + δy ∂/∂y + δz ∂/∂z)² f. (5.13)

In the case of multiplied or divided variables, such as for the volume of a box,

P = abc. (5.14)

Taking logarithms of both sides of Equation 5.14 gives

log P = log a + log b + log c (5.15)

and for changes in P,

d(log P) = d(log a) + d(log b) + d(log c),

dP/P = δa/a + δb/b + δc/c. (5.16)

In the general case P = a^l b^m c^n,

dP/P = l(δa/a) + m(δb/b) + n(δc/c). (5.17)

This relationship is an example of differential logarithmic errors. The important point is that it shows that the fractional errors add up (or subtract); usually the worst case to consider is when the errors are taken to be additive.
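The additive worst-case rule of Equation 5.17 is easy to mechanize. The sketch below is a minimal illustration using the box-volume example of Equation 5.14; the function name and the dimensions and uncertainties are hypothetical values chosen only to show the arithmetic.

```python
def fractional_error(terms):
    """Worst-case fractional error dP/P for P = a**l * b**m * c**n ...
    per Equation 5.17, treating all contributions as additive.
    `terms` is a list of (exponent, value, absolute_error) tuples."""
    return sum(abs(k) * (da / a) for k, a, da in terms)

# Example: volume of a box P = a*b*c with 0.1 mm uncertainty on each side (mm)
sides = [(1, 100.0, 0.1), (1, 50.0, 0.1), (1, 20.0, 0.1)]
print(fractional_error(sides))  # 0.001 + 0.002 + 0.005 = 0.008, i.e., 0.8%
```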
5.6.2 Random Errors

The question arises as to how to deal with random errors. In fact they can be treated in a similar way. So, if the errors are small,

δu = (∂f/∂x)δx + (∂f/∂y)δy + …. (5.18)

If δx and δy are random variables they are represented in terms of a spread of values rather than by individual ones. The spread is usually given in terms of the standard deviation or variance. Thus for two variables

variance[δu] = E[((∂f/∂x)δx + (∂f/∂y)δy)²], (5.19)
where E is the expectation (mean value). Equation 5.19 is approximately

S_u² = (∂f/∂x)²S_x² + (∂f/∂y)²S_y² + 2ρS_xS_y(∂f/∂x)(∂f/∂y). (5.20)

Note that S_u, S_x, etc., are the standard deviations of u, x, etc.; ∂f/∂x and ∂f/∂y are evaluated at the mean values x̄, ȳ; and ρ is the correlation coefficient. For ρ = 0, x and y are independent and then Equation 5.20 becomes

S_u² = (∂f/∂x)²S_x² + (∂f/∂y)²S_y², (5.21)

so if u = x^a y^b,

S_u²/u² = a²(S_x²/x̄²) + b²(S_y²/ȳ²),

and if r = (x̄² + ȳ²)^{1/2},

S_r² = (x̄²S_x² + ȳ²S_y²)/(x̄² + ȳ²),

where the bar indicates an average value.

Leach [5] considered that as the measurement of surfaces could be considered a point-to-point length measurement it was relevant to use the recommendations for assessing uncertainty in length measurement laid down in "GUM" [6]. This, he suggests, is ideal for estimating the uncertainties of the large number of factors which may be involved in making an instrument for measuring surfaces, in particular the measurement of length using a laser. For this it is necessary to evaluate the "combined standard uncertainty" of all the factors. The basic formula for this is as follows:

U_C²(L) = Σ_{i=1}^{N} c_i²u²(x_i) + Σ_{i=1}^{N} Σ_{j=1}^{n} [(1/2)c_{ij}² + c_i c_{ij}] u²(x_i)u²(x_j), (5.22)

where L is the length being measured, the "u" values are the uncertainties of the factors x_i which influence the measurement, and the "c" values are the weighting factors to be assigned to the specific uncertainties. This formula is identifiable with those given earlier in terms of the deterministic expressions and the random ones, e.g., Equations 5.11, 5.12, and 5.19. The GUM equation is given for random errors. The u² terms represent the S² and the "c" terms are the partial derivatives (∂f/∂x, ∂²f/∂x∂y, ∂³f/∂x∂²y), etc., where x = x₁, y = x₂, and so on. The real difficulty is that of identifying the factors and evaluating their uncertainties, once identified. These will be considered shortly, following in brief Leach's example of a laser measurement of length as related to surface metrology. Usually, if there is an uncertainty in x, the independent variable, the uncertainty in the dependent variable f is assumed to be the sensitivity of f to the change in x multiplied by the actual uncertainty in x, i.e., δf = (∂f/∂x)δx. It can be more complicated, as indicated above, but in practice it is rarely so. The uncertainty variance (sometimes called the dispersion) can be expressed in the form

σ_C² = Σ_{i=1}^{N} (∂f/∂x_i)² S²(x_i). (5.23)

Here it is assumed that the "first order" model for the uncertainties is acceptable, that the various factors in the uncertainty are independent of each other, and that the important ones are all present. Second order terms, indicated in the second part of the right-hand side of Equation 5.22, sometimes have to be included, usually in some optical instrument operation. In general surface metrology the main sources of error are usually well known and their values relatively easy to estimate, given the overall limit of accuracy required for the measurement, for example in a wear or friction situation involving large mechanical elements. The same is not true, however, in nanometrology because the number and the amount of influence of the sources are not usually known.
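A first-order uncertainty budget of the kind in Equation 5.23 might be evaluated as in the following sketch. The listed factors and their magnitudes are purely illustrative assumptions loosely modeled on a laser length measurement; they are not Leach's actual budget.

```python
def combined_standard_uncertainty(contributions):
    """First-order combined standard uncertainty (Equation 5.23):
    sigma_C = sqrt(sum((c_i * u_i)**2)), where c_i is the sensitivity
    coefficient (partial derivative) and u_i the standard uncertainty."""
    return sum((c * u) ** 2 for c, u in contributions) ** 0.5

# Hypothetical budget for a laser length measurement (values illustrative only)
budget = [
    (1.0, 0.5e-9),  # laser wavelength stability, metres
    (1.0, 1.2e-9),  # interpolation error of the fringe counter
    (0.9, 2.0e-9),  # refractive-index (environment) correction
]
print(combined_standard_uncertainty(budget))  # approx 2.2e-9 m
```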
5.7 SOME USEFUL STATISTICAL TESTS FOR SURFACE METROLOGY

Neither the workpiece nor, for that matter, the instrument is perfect, because measured values are always subject to variations. The mean value of surface roughness or the spread of values is not realistically obtained with just one reading, despite the tendency to do so because of the need for speed. Hence it is necessary to qualify any value obtained within certain confidence limits. This is quite straightforward and the rules are easy to apply. In this section a few of the most useful tests and their possible applications are given. Furthermore, because the results are invariably obtained from a very limited set of readings, tests for small numbers should be employed. These can be found in any statistical textbook.
5.7.1 Confidence Intervals for Any Parameter

The ideal situation is shown below. Suppose that limits are being set on a parameter. It should be possible to express the limits in the form

prob[L_l ≤ statistical parameter < L_u] = P = 1 − α, (5.24)

where L_l is the lower limit, L_u is the upper limit, P is a given probability, say .95, and α is the significance level. The distribution of values of the statistical parameter would look like Figure 5.3. This is called a two-sided test because upper and lower limits are specified. If, for the sake of argument, only the upper limit had been specified the graph would be slightly changed (Figure 5.4).

Figure 5.3 Two-sided test.

Figure 5.4 One-sided test.

5.7.2 Tests for the Mean Value of a Surface—The Student t Test

Suppose that there are n independent readings with no restrictions on them. This is equivalent to the data set having n degrees of freedom v. If, for example, the values x₁, x₂, ..., xₙ are subject to the constraint

a₁x₁ + a₂x₂ + ... + aₙxₙ = 0,

there would be a constraint of one, so the number of degrees of freedom left is reduced by one, so v = n − 1. If there are m restrictions then v = n − m. Hence, suppose the standard deviation of n readings is wanted:

σ = [(1/n) Σ_{i=1}^{n} (x_i − x̄)²]^{1/2} (5.25)

has n degrees of freedom if x̄ is known previously, but if x̄ has to be worked out from the values of x_i obtained from the experiment this adds one constraint. Thus, Formula 5.25 becomes

s = [(1/(n − 1)) Σ_{i=1}^{n} (x_i − x̄)²]^{1/2}, (5.26)

which is more correct and is in fact unbiased (tends to the true value as n → ∞); s is the sample standard deviation. Reverting back to the confidence limits, it so happens that a "statistical parameter" containing means is given by

(μ − x̄)/(s/√n),

which follows a t distribution, where μ is the true mean, x̄ is the mean from the sample of n readings, and s is the sample standard deviation given by Equation 5.26. So it is possible to plot this function as if it were the "statistical parameter" in Equation 5.24:

prob[t_l ≤ (μ − x̄)/(s/√n) ≤ t_u] = P = 1 − α. (5.27)

Rearranging,

prob[x̄ − t_l s/√n ≤ μ ≤ x̄ + t_u s/√n] = P = 1 − α. (5.28)
Thus the true value of the mean lies within x̄ − t_l s/√n and x̄ + t_u s/√n to within a probability of P with a significance of α; x̄ − t_l s/√n is the lower confidence limit and x̄ + t_u s/√n is the upper limit. It just so happens that the distribution of t is symmetrical, so t_l = t_u except for the sign. As n becomes large the t distribution becomes closer to the Gaussian. Therefore, for P = .95 and n large the value of t becomes near to 2, which is the value for 95% confidence from the mean of a Gaussian distribution. When the value of t is needed, as in Equation 5.27, it is looked up in tables with arguments α/2 and v degrees of freedom. This test is often used to decide whether or not a sample or standard of roughness or roundness as received is the same as that sent off. This is not simply a question of bad postage: in many "round robins" of standards committees a lot of time has been spent measuring the wrong specimen. Under these circumstances the null hypothesis is used, which says in this case that the mean value of the received specimen x̄₁ minus that of the sent specimen x̄₂ should be zero. So if n₁ measurements have been made on one specimen and its mean and standard deviation evaluated as x̄₁ and s₁, and x̄₂, s₂ with n₂ measurements are the corresponding values for the questionable sample, the t value will be given by

t = [(x̄₁ − x̄₂)/s] [n₁n₂/(n₁ + n₂)]^{1/2}, (5.29)

where

s = {[Σ_{i=1}^{n₁}(x_{1i} − x̄₁)² + Σ_{i=1}^{n₂}(x_{2i} − x̄₂)²]/(n₁ + n₂ − 2)}^{1/2}. (5.30)
.
(5.30)
Equation 5.27 is really testing the difference between the means x1–x2 as a ratio of the standard error of the mean. This latter term is the standard deviation of the distribution which could be expected with samples having joint degrees of freedom of n1 + n2–2 (i.e., n1 + n2 readings). To work this out, if the measured t value obtained by inserting all the numerical data into the right-hand sides of Equations 5.29 and 5.30 is larger than the value of t looked up in the tables corresponding to a value of P and v, then the null hypothesis would be rejected—the specimens are different. If it is less then they are not different—to within the significance α (i.e., 1–P). This can also be used to test if an instrument is giving the same answers from a specimen under different conditions. An example might be to test whether an instrument designed to measure a specimen off-line gives the same answers (to a known significance) when it is being used in process. Note that the usual criterion for small n numbers is about 25. Anything less than this requires the application of the t and other tests. Gaussian or binomial approximations should only be used for large values of n, which rarely happens in surface metrology.
Another use of the χ2 test is to check out the type of distribution found in an experiment. This is usually called the “goodness of fit” test. Suppose that it is required to check whether a set of readings falls into a distribution pattern. It could be, for example, that someone needs to determine whether a surface has a Gaussian height distribution so that the behavior of light or friction or whatever can be checked against a theoretical model. The χ2 distribution can be used to test this. The method is to compute a statistic which in fact takes a χ2 value. The statistic χ2 is given by m
χ2 =
∑ (O −E E ) , i
i =1
i
2
(5.33)
i
where Oi is the observed frequency of observations falling into the ith class (or interval) and Ei is the expected value assuming that the distribution is correct. From Equation 5.33, if the distribution has been correctly chosen, the sum of squares Oi – Ei will be close to zero. If incorrect, the value of χ2 will be large, so this very simple test can be used. The degrees of freedom which have to be applied with this test are m–1 because it is important that the total number in the observed classes equal that in the expected classes that is ΣOi = ΣEi = m.
5.7.3 Tests for The Standard Deviation—The χ2 Test
5.7.5 Tests for Variance —The F Test
The quantity ns2 /σ2 is distributed as a chi-squared (χ2) distribution with v = n–1 degrees of freedom, where σ is the true standard deviation, s is the sample standard deviation, and n is the number of readings. The confidence limits of σ in terms of s can be derived from this χ2 distribution:
This is an alternative to the t test for checking the validity of data and is based more on a comparison of variances rather than a comparison of mean values. Thus if two samples are of size n1 and n2 and have variances s21 and s22, respectively, the variance ratio test F is given by
(n − 1)s 2 Prob χ12−α / 2,n−1 ≤ < χα2 / 2,n −1 = P = 1 − α, (5.31) σ2
from which
(n − 1)s 2 (n − 1)s 2 2 < ≤ σ , χα2 / 2,n−1 χ12−α / 2,n−1
(5.32)
to within P probability. So the spread of values to be expected on a surface σ can be estimated from the measured value of s. The value of the degrees of freedom v = n–1 is because s is worked out using the evaluated mean.
F~
σ12 s12 ~ , σ 22 s22
(5.34)
where values of F are for chosen levels of significance α. The degrees of freedom to be looked up are v1 = n–1 and v2 = n2 –1. The two samples are considered to differ considerably in variance if they yield a value of F greater than that given by the table at the chosen level of significance.
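Equation 5.33 can be applied as in the following minimal sketch; the histogram counts and the quoted table value are hypothetical, chosen only to show the mechanics.

```python
def chi_squared(observed, expected):
    """Goodness-of-fit statistic of Equation 5.33."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical height histogram of 100 profile ordinates versus the counts
# expected from a fitted Gaussian (same total in both)
observed = [4, 25, 42, 24, 5]
expected = [6, 24, 40, 24, 6]
stat = chi_squared(observed, expected)
# Compare with the tabulated chi-squared for m - 1 = 4 degrees of freedom
# (9.49 at the 5% significance level); larger values reject the Gaussian model.
print(stat)
```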
5.7.6 Tests of Measurements against Limits—16% Rule

Often it is necessary to ask whether a surface has achieved the value of the surface roughness set down. Obviously, just taking one measurement could give a false answer to the question, because by chance a freak position on the surface could have been measured which happened to have a value far too high (or low) relative to the acceptable limit set down for the surface. To ensure against this happening, a rule has been advocated called the 16% rule, which says that only if more than 16% of the measurements taken on the surface are bigger than the limit value set for the surface should it be rejected (if the limit is the upper level set). For a lower limit the surface fails if more than 16% of the measured values are less than the limit set. This is a somewhat arbitrary value which is obtained from the Gaussian distribution of errors. Thus the probability

P(z) = (1/2)[1 + erf(z/(√2 σ))] = 0.84, i.e., 1 − 0.16, when z = σ, (5.35)

and P(z) = .16 when z = −σ. Here erf(·) is the error function and σ is the standard deviation of the error curve for the process. The reason for this rather generous allowance of out-of-limit values is to accept the fact that often there are some small inhomogeneities across the surface which would not impair the performance, yet would fall foul of the quality screening procedure and hence be costly: a compromise of 16% seems plausible in non-critical applications, but only if agreed by all parties. There are occasions where a more stringent regime is in operation, called the "max" rule, in which the maximum value for the surface is not allowed to be exceeded. This type of rule is not recommended because the more readings that are taken the more likely it is that the maximum value will be exceeded, so an unconscious tendency could be to keep the number of readings down to an absolute minimum in order not to violate the limit—a dangerous possibility!
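A minimal sketch of the 16% rule as described above is given below, assuming a hypothetical set of Ra readings and an upper limit; the acceptance logic simply follows the rule literally.

```python
def passes_16_percent_rule(readings, upper_limit):
    """Accept the surface unless more than 16% of readings exceed the limit."""
    over = sum(1 for r in readings if r > upper_limit)
    return over / len(readings) <= 0.16

# Twelve Ra readings (um) against a specified upper limit of 0.8 um
ra = [0.74, 0.78, 0.79, 0.72, 0.76, 0.79, 0.75, 0.83, 0.77, 0.73, 0.76, 0.78]
print(passes_16_percent_rule(ra, 0.8))  # True: 1 of 12 (about 8%) exceeds it
```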
5.7.7 Measurement of Relevance—Factorial Design

5.7.7.1 The Design

In function and in manufacture the influence of surfaces is often not well understood. It is therefore very relevant, before using surface parameters, to find out the really significant variables. In practice this is not a very easy task because of their interdependence. Perhaps the most difficult problem is to get the first idea of importance: what is the dominant factor? One powerful way to find out is to use factorial design. This gives an indication of which variable is worth concentrating on. An example might be the frictional force on a shaft in which the variables are oil, speed, and texture, among others. It is cost effective to spend effort, money, and time on finding the most significant of the parameters. Factorial design is meant to give this vital clue. Many experimental situations require an examination of the effects of varying two or more factors such as oil, texture, and speed. A complete exploration of such a situation is not revealed by varying each factor one at a time; all combinations of the different factor levels must be examined in order to extract the effect of each factor and the possible ways in which each factor may be modified by the variation of the others. Two things need to be done:

1. Decide on the set of factors which are most likely to influence the outcome of the experiment.
2. Decide the number of levels each one will use [7].
Obviously it is best to restrict such pilot investigations to the minimum number of experiments possible, so it is often best to take two values of each factor, one high and one low. These are usually chosen such that the influence on the outcome in between the two levels is as linear as possible; non-linearity can spoil the sensitivity of the experiment. Two-level experiments are used most often and result in what is called a factorial 2 design of experiments. There are of course factorial 3 or higher levels that can be used. The idea is that once the dominant factor has been identified more refined experiments can then be carried out using many more than just two levels for this dominant factor. Take as an example a three-factor two-level experiment. The outcome might be the surface roughness of the finished component and the factors might be cutting speed, depth of cut, and feed. There are 2³ possible experiments, equal to eight in all. The following shows how the dominant factor is evaluated. The experiments are listed in Table 5.1.

TABLE 5.1
Three Factor, Two Level Factorial Experiment

Experiment  Algebraic Assignment  Factor A  Factor B  Factor C
1           (1)                   Low       Low       Low
2           a                     High      Low       Low
3           b                     Low       High      Low
4           ab                    High      High      Low
5           c                     Low       Low       High
6           ac                    High      Low       High
7           bc                    Low       High      High
8           abc                   High      High      High

In going from experiments 1–2, 3–4, 5–6, 7–8, only factor A is altered, so the total effect of A can be found by comparing (x₁ + x₃ + x₅ + x₇), in which factor A is low, with (x₂ + x₄ + x₆ + x₈), in which it is high. Similarly, the effect of factor B is found by comparing (x₁ + x₂ + x₅ + x₆) (low) with (x₃ + x₄ + x₇ + x₈) (high), and that of C by comparing (x₁ + x₂ + x₃ + x₄) (low) with (x₅ + x₆ + x₇ + x₈) (high), while comparing x₃ + x₇ with x₄ + x₈ gives the effect of A when B is high, etc. It is convenient to use Yates's symbolism to express these experiments more simply. Consider the effect of A. The effects of all experiments with a (A high) and not a (A low) are a, ab, ac, abc compared with (1), b, c, bc. (Here (1) does not mean unity—it means all factors are low.) The main effect of A is

A = (1/4)(a + ab + ac + abc) − (1/4)[(1) + b + c + bc], (5.36)

which can be written purely algebraically as

A = (1/4)(a − 1)(b + 1)(c + 1), (5.37)

where each bracket is not meant to stand alone. For example, in Equation 5.37, a − 1 does not mean the result of experiment 2 with the value of unity taken from it; Equation 5.37 only means something when the brackets are multiplied out. Similarly, the effects of B and C are

B = (1/4)(a + 1)(b − 1)(c + 1)
C = (1/4)(a + 1)(b + 1)(c − 1). (5.38)

Note that the factor "B", say, which is being investigated, has the bracket (b − 1) associated with it.

5.7.7.2 The Interactions

The interaction between A and B is usually designated by the symbol AB and is defined as the average of the difference between the effect of A when B is high and the effect of A when B is low. The effect of A with B high is

(1/2)(abc + ab) − (1/2)(bc + b) = (1/2)b(a − 1)(c + 1) (5.39)

and the effect of A with B low is

(1/2)(ac + a) − (1/2)(c + (1)) = (1/2)(a − 1)(c + 1). (5.40)

The difference between Equations 5.39 and 5.40 is defined as twice the value of the interaction, so

AB = (1/4)(a − 1)(b − 1)(c + 1). (5.41)

By symmetry,

AC = (1/4)(a − 1)(b + 1)(c − 1), BC = (1/4)(a + 1)(b − 1)(c − 1). (5.42)

The interaction ABC (half the difference between the interaction AB when C is high and when C is low) is given by symmetry as

ABC = (1/4)(a − 1)(b − 1)(c − 1). (5.43)

The reason for bringing the half into the interactions is to make all seven expressions similar. Again notice that there should be (2^N − 1) equations for a 2^N factorial set of experiments. The general 2^N experiments of factors ABC…Q give

A = (1/2)^{N−1}(a − 1)(b + 1)…(q + 1)
AB = (1/2)^{N−1}(a − 1)(b − 1)…(q + 1)
AB…Q = (1/2)^{N−1}(a − 1)(b − 1)…(q − 1).

Because this is so important, consider an example of surface roughness and grinding. Take the two factors as grain size g and depth of cut d. The four factorial experiments might produce the results shown in Table 5.2.

TABLE 5.2
Results

Experiment  Roughness
(1)         20.6
g           26.5
d           23.6
gd          32.5

The effects of g and d are

g = (1/2)(g − 1)(d + 1) = (1/2)(gd + g − d − (1)) = 7.4,
d = (1/2)(g + 1)(d − 1) = (1/2)(gd + d − g − (1)) = 4.5,

where 7.4 and 4.5 are just numbers of merit. Hence the grain size is about twice as important as the depth of cut in determining the surface texture value. The factorial design of experiments gives an indication of the important or dominant factors. Obviously if the relevant factor has been missed out then it is no help at all. Under these circumstances—and in any case—it is useful to do more than one set of experiments. If this is done then it is possible, using the t test, to test the significance of the results obtained. This process is called replication. The difference between each set of experiments is examined to see if it is significant. If it is, either the experiments have been badly performed or there is another factor which is present but has not been identified [3].
Handbook of Surface and Nanometrology, Second Edition
y
Regression of x on y Regression of y on x
y P(x–,y–)
yi y1
x x1
Figure 5.6â•… Lines of regression.
xi
Figure 5.5â•… Correlation coefficient between x and y.
allow correlations between parameters to be established as in the factorial method described above. Another test that is often carried out and has been referred to earlier concerns correlation. This is not the autocorrelation function but the simple correlation between two variables (Figure€5.5). The correlation coefficient between x and y is given by ρ where
ρ=
∑ (x − x )( y − y ) ∑ (x − x ) ∑ ( y − y) N i =1
i
2
i
2
.
Earlier in the book the idea and use of best-fit curves and lines has been employed extensively, but for those unfamiliar with the technique a few comments are given below. The best-fit line between y and x and vice versa depends on which direction the differences or residuals between the actual line and the points are measured. It has been indicated when looking at least-square lines for surface analysis that, strictly speaking, the residual distance should be measured normal to the line itself. This is rarely needed, but as can be seen from Figures 5.6 and 5.7 there are a number of possibilities. For y on x, the slope m is given by m=
∑ xy − ∑ x∑ y . N ∑ x − (∑ x )
N
2
2
(5.45)
For x on y the slope m′ is given by
m′ =
∑ xy − ∑ x∑ y . N ∑ y − (∑ y )
N
2
2
y on x
True deviation
(5.44)
5.7.8â•…Lines of Regression
P (x,y)
Figure 5.7â•… Deviation from best-fit line.
i
i
Best-fit line
x on y
(5.46)
The values of x and y here have not been altered to extract the mean values. It is straightforward to show that the two
lines are not the same and that they cross at ( x , y ) , the mean values of x and y, i.e., at the point p ( x , y ). It is conventional to plot the regression with the dependent variable on the independent variable (y on x). The value of such an analysis is important, for instance, if the actual dependence or relationship of one parameter with another is being tested.
5.7.9â•…Methods of Discrimination It is often required to determine which parameter best describes the functional performance of a workpiece surface. Quite often the theoretical predictions are unsatisfactory and lacking in some way. In these circumstances, it is possible to use a number of statistical tests to ascertain from practical results which parameter is best. One such method is called “discriminant analysis.” Suppose there are a number of surfaces which have been tested for performance and have been separated out into “good” and “bad” surfaces depending on whether or not they fulfilled the function. Discriminant analysis is a method whereby as many surface parameters as is deemed sensible are measured on all the surfaces. The idea is then to find which parameter best separates the good surfaces from the bad. If there are n parameters from m surfaces they can be considered to be m points in n-dimensional space. Weights are then assigned to each parameter in such a way that the two groups “good” and “bad” form different clusters. The weights are manipulated, in much the same way as in neural networks, until the ratio of betweencluster separation to within-cluster separation is maximized.
439
Standardization–Traceability–Uncertainty
If this is successful the weights form the coefficients of a “discriminant function,” which separates the function in one-dimensional space (see Ref. [8]). Discriminant analysis is another embodiment of the analysis of variance (ANOVA) technique. Dividing the between-cluster to within-cluster sum of squares is a measure of the discrimination. Thomas [8] uses the symbol Λ for this. Note this is not to be confused with the topothesy used in fractal analysis which has often been given the same symbol. The larger Λ is the better the discrimination. The method described above applies to just one parameter; it can be extended by means of linear discriminant functions. In this approach, the various parameters are combined as a weighted sum and the weighted sum and the weights are adjusted to maximize Λ. The techniques in effect define Λ· in terms of the coefficients ai of a linear expression. The ratio is then differentiated with respect to ai and solved for the values of ai which maximize the ratio. This is exactly equivalent to solving the normal equations in the least-squares routine. The discrimination used in the example of beta functions in Section 2.2.3.3 uses the discriminant function approach of combinations of surface parameters. In this case, however, the weightings are kept at unity. The linear discriminant function in effect reduces to finding the eigenvectors of a non-symmetric matrix. The eigenvector is composed of the number of parameters in the linear expansions and the associated eigenvalue is a measure of the effectiveness of the particular discriminant function. As in the use of the factorial experiments of Section 5.7.7, discriminant analysis or the ANOVA methods are only meant to be an aid to the investigation and not an end in themselves.
5.8 UNCERTAINTY IN INSTRUMENTS— CALIBRATION IN GENERAL The practical assessment of surface roughness or any of the other surface metrology parameters involves directly or by implication:
1. Recognition of the nature of the signal. 2. Acceptance of measurements based on various approximations to the true surface. 3. Acceptance of a sampling procedure. 4. An appreciation of the nature of the parameters employed and the methods and instruments for carrying them into effect. 5. The establishment of calibrating procedures and tests for accuracy.
In this section the sources of error in instruments will be considered. The obvious place to start is at the input. Because most instrumentation in surface metrology at present revolves around the stylus method this will be taken as the basis from which to compare the others. The stylus, then the transducer and, finally, the amplifier will be dealt with in order. Then some procedures of sampling, including some idea of national
variations, will be given. Obviously some information will be outdated because the improvements in materials and devices will continue to determine the best value of signal-to-noise ratio that can be achieved. The best way to develop the test procedure is to establish the uncertainty of the specimen rather than the instrument. A possible procedure might be as follows:
1. Visual inspection of the workpieces to determine where it is obvious that inspection by more precise methods is unnecessary, for example, because the roughness is clearly better or obviously worse than that specified or because a surface defect which substantially influences the function of the surface is present. 2. If the visual test does not allow a decision, tactile and visual comparison with roughness specimens should be carried out. 3. If a comparison does not allow a decision to be made, measurement should be performed as follows. The measurement should be carried out on that part of the surface on which the critical values can be expected, according to visual inspection. a. Where the indicated parameter symbol on the drawing does not have the index “max” attached to it. The surface will be accepted and the test procedure stopped if one of the following criteria is satisfied. Which one of the following to use depends on prior practice or agreement: −− The first measurement value does not exceed 70% of the specified value (indicated on the drawing). −− The first three measured values do not exceed the specified value. −− Not more than one of the first six measured values exceeds the specified value. −− Not more than two of the first 12 measured values exceed the specified value. Sometimes, for example before rejecting high-value workpieces, more than 12 measurements may be taken (e.g., 25 measurements), up to four of which may exceed the specified value. (Corresponding to the transition from small numbers to large numbers of samples.) This is a variant of the 16% rule described in Section 5.7.6 in this chapter. b. Where the indicated parameter symbol does contain the index “max” initially at least three measurements are usually taken from that part of the surface from which the highest values are expected (e.g., where a particularly deep groove is visible) or equally spaced if the surface looks homogeneous. The most reliable results of surface roughness inspection (as in roundness and other surface metrology parameters) are best achieved with the help of measuring instruments. Therefore, rules and procedures for inspection of the most
440
Handbook of Surface and Nanometrology, Second Edition
important details can be followed with the use of measuring instruments from the very beginning, although, of course, the procedure may be more expensive. But if the latter is to be considered then the instrument procedure has to be agreed. Before embarking on the actual calibration of the instrument and its associated software, it is useful to remember that many errors are straightforward and in fact can be avoided. These are instrument-related errors which in some cases are just misleading. A typical set of things which should be checked are as follows:
Calibration procedures must be suited to the features of the instrument being calibrated. The following features in various combinations will be considered here:
1. Motor drive: Check worn, loose or dirty parts, connections, switches, misaligned motor mounts, excessive bearing wear. 2. Link between drive and tracer: Make sure locking joints are solid, rotating joints free and that the linkage is connected to the drive correctly. 3. Tracer itself: Make sure the stylus is not loose or not badly damaged. Check any electrical connections— the most likely source of error. 4. Amplifier: Batteries if used should be charged. Any meter should not stick. No worn or defective power cable or switches should be allowed. 5. Workpiece: This should be firmly mounted preferably on some solid mechanical base. The workpiece should be as free from dirt as possible and the area chosen for examination should be as representative of the surface as possible. In the case of stylus instruments some preliminary check should be made to ensure that the stylus pressure (or that of the skid) will not damage the workpiece. See stylus damage prevention index. 6. Readings: If the output is on a non-digital scale of a meter remember that the eye always tends to read toward a division mark. If the output is multiscale select the relevant scale and if necessary, practice reading it.
These and other common-sense points can save a lot of unnecessary trouble later on. The next section will deal with conventional stylus instruments but it will be necessary to consider scanning probe microscopes from time to time as these have some features in common with ordinary stylus instruments: but they will be discussed in some detail in the section following.
5.9 CALIBRATION OF STYLUS INSTRUMENTS Most stylus instruments involving electronic amplification are provided with at least one adjustable control for the amplification. Often there are several while these may be set by the maker before dispatch, they must generally be checked and, if necessary, readjusted by the user after setting up the instrument, and as often thereafter as may be required to maintain the standard of performance. Such controls often have the form of screwdriver-operated potentiometers.
1. Pick-ups with displacement-sensitive transducers able to portray steps (e.g., modulated carrier systems) cooperating with an auxiliary datum device (commonly known as skidless pick-ups). 2. Pick-ups with displacement-sensitive transducers having a skid cooperating with the surface under test (commonly known as skid-type pick-ups). 3. Skid-type pick-ups with motion-sensitive transducers not able to portray steps (e.g., moving coil or piezzo). 4. Graphic recording means. 5. Digital recording means. 6. Meters giving Ra (and sometimes other parameters).
Difficulties may sometimes arise, for example, when there is too much floor vibration to use an instrument with a skidless pick-up, or when the skid normally fitted has too small a radius to give good results on a calibration specimen (i.e., it does not integrate properly). Some degree of special adaptation may then be required. Most instruments offer a range of magnification values for a recorder, and/or of full-scale values for a meter, these being selected by a switch. There may be an individual gain control for each switch position, or one for a group of positions, and often there will be separate controls for the recorder and the meter. When there is one gain control for a group of switch positions, switching errors may arise, and the adjustment may then be made for one preferred position, or averaged over the group. In all cases, the gain at each switch position must hold throughout a range of wavelengths to the extent required by the standard. Apart from the adjustment of gain, about which this comment is mainly concerned, and the residual inter-range switching errors, errors may result from the non-linearity of circuits, recorders, and meters. Such errors are dependent on the deflection of the pointer, if relevant, and will generally be least in the region between 65 and 80% of full scale, where calibration of gain is to be preferred. They are inherent in the apparatus and, apart from verifying that they are exceptionally small, generally there is nothing that can be done to correct them. Internal vibration from actuating motors and electrical fluctuations (both classed as “noise”), may give rise to further errors, especially at high magnifications. In addition, charts and meter scales are subject to reading errors. It will be evident that a complete calibration throughout the whole range of amplitudes and wavelengths for which the instrument is designed can involve a somewhat lengthy procedure, so much so that, at least in the workshop, it is usual to accept calibration on the basis of a limited number of spot checks, and for the rest to assume that when the equipment is right for these, it is right for its whole range of operation. While this approach has given generally acceptable control, the trend toward greater accuracy has been urging
Standardization–Traceability–Uncertainty
manufacturers and standardizing bodies to evolve a more comprehensive system of calibration. Two forms of calibration procedure are recognized. One is basic and complete but costly: it involves the direct evaluation of the magnification of recorded profiles, either analog or digital, followed by accurate assessment of their parameters from the records. With the aid of digital recording and computer techniques this form has now been raised to a very high level. The other procedure, suitable for the workshop, is based on the use of instrument calibration specimens which, in the context of surface metrology, can be likened to gauge blocks in the field of dimensional metrology. Unfortunately these specimens, in the range of values ideally required, are still in a state of evolution, and it is for this reason that the range of workshop testing has so far been somewhat restricted. To assist in the routine testing of instruments National Measurement Institutes (NMI) in coordination with the International Standards Organization (ISO) have developed a range of artifacts which can be used to check the calibration of stylus instruments [9]. The general groupings are: A—Vertical magnification check. B—Stylus dimension check. C—Combined height and spacing magnification check. D—Used to verify the overall performance of an instrument. E—Used to verify form capability or straightness of reference datum. These practical standards are meant to complement the more formal primary techniques reserved and implemented in the NMIs or approved laboratories. The artifacts are therefore less accurate but more available to the industry. In the next section, some methods including those using artifacts, of verifying instruments will be discussed.
5.9.1 Stylus Calibration 5.9.1.1 Cleaning Calibration should start by making sure that the stylus tip is in good order. Pending the evolution of instrument calibration specimens suitable for checking the sharpest tips, sufficient inspection to ensure that the tip is not damaged can generally be carried out with a microscope, but as will be seen this is often not sufficient in side view and plan view. The former requires an objective having an adequate working distance generally of at least 3 mm, which will be provided by most 16 mm objectives. A magnification of 100–300 × will generally suffice. Some fixturing may be required to mount the pick-up on the stage of a microscope. If the tip is seen to require cleaning, great care will have to be taken to avoid damaging the tip or pick-up. Brushing with a sable hair brush may suffice, but if not, the tip can be pressed lightly (with perhaps 10 mg force) into a soft material such as a stick of elder pith or even the end of a matchstick, taking care to avoid
441
destructive shear stresses such as could result from scraping the stick across the tip under such a load. This point about cleaning is valid for all methods of assessment. It may seem very crude but it is extremely effective. Referring to Chapter 4 in the section on tactile sensors, it is clear that the observed signal is very dependent on the geometry of the stylus. Changing from a 2 µm tip to a 10 µm tip can influence the result by almost 10% on a fine surface. The same is true for the resolution spot of an optical probe or the tip of a capacitor electrode. So methods of examining the stylus are absolutely crucial from the point of view of calibration and verification of instruments. 5.9.1.2 Use of Microscopes Consider the verification of the stylus. The obvious way is to examine the tip under a microscope. Methods of doing this are not quite as easy as may be supposed. The reason is that the typical tip has dimensions that are very close to the wavelength of light. This poses two problems. The first is that it needs a reasonably good microscope to view the dimension and the second is that the method is hardly practical for instruments that do not have a detachable stylus or pick-up unit. Methods are available for the investigation of the wear of a stylus during an experiment where the pick-up from a conventional instrument is separately mounted and can be swung around out of its normal position for measurement, but in general this technique is not available except in research work. The SEM has been used to measure styli but it is not easy. In principle it should give an accurate, highly resolved image of the tip profile. However, for small-radii tips the styluses should be coated with a conducting film to stop the surface charging up. Carbon has been used for this, as also has palladium–gold alloy which is coated to a depth of 100 angstroms. Also the magnification of the images must be calibrated since it can vary up to 20% from one pump-down to the next. The calibration is usually achieved by means of a calibrated line standard. The stylus and the calibrated line standard have to be mounted in the same rotary sample holder. During operation [9] the line scale is first rotated under the SEM beam to enable the calibration to be determined. The TEM can also be used as described in Section 4.6.4 on TEMs but this requires a double replica method which is a messy procedure but can be used as an effective check. Optical micrographs of the stylus are an alternative but the resolution is poor and any structure less than about 1 µm is likely to be indefinable. Using a numerical aperture of about 0.9 is desirable but the short field depth needed to tell where the shank starts and the diamond finishes precludes it for spherical tips. Flat tips are not so much of a problem as spherical tips because it is the flat dimension only that has to be determined and not the shape. For spherical tips the shape has to be determined in order to evaluate the radius. This involves the use of algorithms from which to get the radius from the profile of the stylus. The use of optical microscopes for the determination of the dimensions of spherical tips is
442
therefore restricted to 0.3 NA to get a suitably large depth of field. This gives a resolution of worse than 2 µm and so limits the technique to 10 µm tips. Also, if a high-resolution microscope is used there is a serious possibility of pushing the stylus into the objective lens. For optical methods, as in the SEM, the stylus has to be removed from the instrument, although not necessarily the pick-up, so it is more convenient but less accurate than the SEM and TEM. 5.9.1.3â•… Use of Artifacts One practical method is to run the stylus, whilst in the instrument, over a very sharp object such as a razor blade, and to record the profile on the chart of the instrument. Razors have a tip of about 0.1 µm and an angle of a few degrees and so are suited to measure styluses of about 1 µm. It has been reported [10] that there is very little deformation of the blade. In the same reference it is highlighted that traversing speeds have to be very low so that the frequency response of the chart recorder is not exceeded. Typical tracking speeds are 0.001–0.01 mm s–1. Also, it is advisable to keep the vertical and horizontal magnifications of the chart equal so that there is no ambiguity as to the true shape of the stylus. For accurate assessment using this method, the blade has to be supported very close to its edge (~0.5 mm) to avoid deflection. Once the profile of the stylus has been obtained there remains the problem of evaluating it. If the instrument has a digital output, then problems of algorithms and frequency responses have to be agreed. If the chart record is the only output it is convenient to try to fit a template to the graph inscribed with different radii [11]. Ideally, in industry, such complicated procedures should be avoided. There are a number of indirect methods which have been used in the past. The first one used a roughness standard for calibrating the meter to give an indication of stylus wear. This is perhaps the most important factor to consider from the point of view of a user. The form of the profile on these specimens is triangular and called the “Caliblock” originally devised by General Motors. The standard has two sections: one is used for a rough surface amplitude calibration (see later) and the other is of about 0.5 µm Ra for fine surfaces. It was the finer of the two parts which was and still is used to estimate wear of the diamond. The idea is that if the stylus is blunt it will not penetrate into the valleys of the triangle. This results in a reduction in the signal and hence the measured Ra value. It is straightforward to calculate to a first approximation what loss in Ra corresponds to the bluntness of the stylus. However, it is difficult to get quantitative information about the true shape of the tip. It could be argued that this is not needed and that all that is required is a go-no-go gauge. For many cases this is correct but there is a definite need for a more comprehensive, yet still simple, test. Furthermore this test should be performed on the instrument without any modification whatsoever. One such standard is shown in Figure€5.8. This standard comprises a number of slits etched or ruled into a block of glass or quartz. See standards type B in ISO 5436-1 [9].
Handbook of Surface and Nanometrology, Second Edition
W1
W2
Figure 5.8â•… Stylus calibration block
P1
P2
P3
Figure 5.9â•… Profile of stylus locus seen on chart.
Infinitely sharp 90° stylus
P
Shaped blunt stylus Blunt 90° stylus W
Figure 5.10â•… Graphical interpretation of chart readings.
Each groove has a different width and usually corresponds to one of the typical stylus dimensions (i.e., 10, 5, 2, 1 µm). Use of these standards enables the tip size and the shape and angle to be determined. When the stylus is tracked across the standard a picture like Figure€5.9 is produced in which the penetrations P1, P2, etc., are plotted against the dimension of the slot (Figure€5.10). A straight line indicates a pyramidal stylus, of slope m. The stylus slope measurement
w/2 = tan θ /2 P
(5.47)
P ~ w cot θ.
(5.48)
from which
The penetration of a spherical stylus is approximately given by the spherometer formula (Figure€5.11)
P = w 2 / 8 R.
So, for example, on the graph the spherical stylus has a �quadratic penetration and, incidentally, will always drop to
443
Standardization–Traceability–Uncertainty
P = (d/2) tan θ
P ∼ d2/8R
Figure 5.11â•… Penetration of styluses into grooves of width ‘d’.
α
(a)
(c)
r
(b)
A
B C
Carbon nanotube
Figure 5.12â•… Stylus considerations.
some extent into a slot no matter what its width. This is not true for the flat-type stylus. Problems with this type of standard include the fact that the grooves can fill up with debris and so cause the stylus to bottom prematurely and give a false impression of a blunt stylus. The opposite or negative of the standard has also been attempted giving a set of ridges corresponding to a set of razor blades. Although stiffer than a razor blade, the evaluation of the stylus shape is more difficult because the profile obtained is a sum of the stylus curvature and ridge curvature at every point. Extracting the stylus tip information from the profile is difficult, although debris is less of a problem than with grooves. One or two extra problems emerge with the use of scanning probe microscopes. The stylus in ordinary instruments follows a given shape such as a cone or pyramid (or tetrahedron as in the case of one of the Berkovich stylii). These styli have a defined tip dimension and an angle (i.e., r and α) as seen in Figure€5.12. However, the fact that these styli have a finite and usually substantial slope (i.e., 60º or 90º) means that accurately measuring the position of an edge or side of a protein is virtually impossible. In Figure€ 5.12c, for example, instead of B, the edge of the protein registering the contact on the shank C together with the large size of the stylus, indicates that the edge of the protein is at A and not B. In biological applications error produced in this way can be misleading. This has resulted in attempts to minimize r
and eliminate α by attaching a carbon nanotube (CNT) (by means of an adhesive) to the usual tungsten or silicon stylus used in AFM or STM (Figure€5.12b) [12]. These nanotubes, members of the self-assembling carbon fullerine family, have typical dimensions of about a nanometer diameter and micrometer length. They are tubes and not rods and can be single or multiwalled. It would appear that the multiwalled, say, three layer, are stiffer than the single wall type but no information is available on the internal damping characteristics—damping is the correct word to describe energy loss at this scale. There are other types of linear molecule that could conceivably be used. For example, simple fibers rather than tubes. Silicon or molybdenum selenide fibers having similar dimensions are possibilities, with titanium or tungsten-based probes also worthy of consideration [13]. To be of any practical use the tubes have to have the requisite lateral resolution and must be stable over a realistic lifespan. Nguyen et al. [14] carried out some exhaustive tests on ultra-thin films of conductive and insulator surfaces in the 5–2 nm region. They also tested resolution by using metal films (e.g., gold and iridium) having grain boundary separation between 10 and 2 nm. Measuring resolution is very difficult at this small scale; it is hardly feasible to find a test artifact which has a range of grooves or spacings similar to that used for the microscale shown in Figure€5.8. Very hard surfaces such as silicon nitride were repeatedly scanned for long time periods to test for wear and degradation. Fortunately they showed that the CNT probe on an AFM offers great potential for measurement down to the nanometer and subnanometer. Most important is the finding that these tips maintain lateral resolution, hence fidelity, over long periods of time. Even after 15 hours of continuous Â�scanning, wear is minimal. It should be pointed out here, however, that continuous scanning is not necessarily a valid operation. It is discontinuous operation which shows up weaknesses in bonding and other mechanical factors, not to mention operator error. 5.9.1.4â•… Stylus Force Measurement Also needed for a profilometer is the force on the surface. This is important from the point of view of potential Â�damage. There are a number of ways of achieving the force. It suffices to measure the static force. Multiplying this by two gives the traceability criterion. The classic way of doing this is by using a small balance and simply adding weights until a null position is reached. For a laboratory the balance method is acceptable but as typical forces are 0.7 mN (70 mg) and lower it is much too sensitive for workshop use. One simple practical level [15] actually uses the profilometer for its own force gauge. This method is shown in Figure€5.13a. Where the deflection y(x) is a function of x and cantilever constants b for width and d for thickness, and I is the second moment of area.
y( x ) = Fx 3 / 3EI ,
(5.49)
444
Handbook of Surface and Nanometrology, Second Edition
y(x) can be found from the chart (Figure€5.13b) or from digital output as the probe is tracked along the cantilever. In practice the free cantilever can be replaced by a beam fixed at both ends. The deflections near the center of the beam are small but stable. Using Equation 5.49 the force F can be found and from it the maximum force imposed on the surface (i.e., 2F in the valleys). If possible it would be useful to determine the damping by using the log decrement method (i.e., letting the stylus dangle in a vertical position). Watching the decrease in oscillations from the chart enables the damped natural frequency to be found (the natural frequency is usually known).
over 50 μm. There are problems, however, if small displacements are to be calibrated because of the calibration error of the gauge block itself, amounting to one or two microinches (25–50 nm), which is a large proportion of the step height. A convenient way to minimize the errors is to use an accurate mechanical lever. This can produce a reduction of 10 or even 20 times.
5.9.2╅Calibration of Vertical Amplification for Standard Instruments Magnification is defined as the ratio of the vertical displacement of the needle on a graph (or its digital equivalent) to that of the movement of the stylus (or equivalent tracer). 5.9.2.1╅ Gauge Block Method The basic requirement is to displace the stylus by accurately known amounts and observe the corresponding indication of the recorder or other output. The stylus can be displaced by means of gauge blocks wrung together. For best answers the blocks should be wrung onto an optical flat (Figure€5.14). In some cases it is best to fix them permanently so that the steps can be calibrated by an interferometer rather than just relying on the specified values. According to Reason [16], the technique can be used directly for the calibration of steps
5.9.2.2â•…Reason’s Lever Arm Reason’s lever [16] was designed so that the position of the pivot could be accurately determined (Figure€5.15). The reduction ratio is determined by adjusting the position of the pivot P relative to the stylus (which is not traversed) by means of the screw S and spacing gauge block G until equal deflections are obtained on both sides. 5.9.2.3â•… Sine Bar Method An alternative to the use of a lever arm and gauges is a precision wedge as shown in Figure€5.16 [16]. A thick glass optical flat with parallel faces is supported in a cradle on three sapphire balls of closely equal diameter (two at one end and one at the other). The balls rest on lapped surfaces and a precise geometry is obtained by using a heavy base plate with a flat-lapped surface onto which lapped blocks of gauge block quality are secured and on which the sapphire balls sit. The nominal wedge angle is 0.0068 rad. A shielded probe with a spherical electrode is rigidly mounted to the base place and forms a three-terminal capacitor with a conducting layer on the undersurface of the glass flat. The micrometer is arranged to move the cradle over 12 mm.
(b)
(a) Cantilever
Chart
Stylus
My(x)
y(x) x
Figure 5.13â•… Profilometer as force gauge.
S
Figure 5.14â•… Gauge block calibration.
A
G
P
B Polished
2D
C B d L
Figure 5.15â•… Calibration of amplifier with Reason lever arm.
d
445
Standardization–Traceability–Uncertainty
Stylus of instrument
Flat
Micrometer Return spring
Sapphire balls
62 mm Shielded capacitative probe
Evaporated conductive surface
Figure 5.16 Calibration of amplifier using sine bar method.
The displacement of the stylus can be worked out in two ways, one by knowing the horizontal shift and the wedge angle, and the other, better, method using the capacitance probe. 5.9.2.4 Accuracy Considerations The relationship between electrode spacing and capacitance is
y = a + bexp(λc),
(5.50)
where y is the gap, a, b, and λ are constants. A change in the gap is then related to a change in the capacitance by the differential of Equation 5.50 giving
dy = bλ exp(λc) = K exp(λc). dc
(5.51)
height and displacement steps are now being required. The lever method is limited to about 0.25 µm steps for normal use but 25 nm has been measured with 1 nm accuracy. Unfortunately, this is barely sufficient for today’s needs. Obviously the accuracy of any gauge block technique depends in the first place on how accurately the gauges have been measured. The same is true for the measurement of the step itself by interference means. In this case there are two basic errors: one is that of uncertainty in measuring the fringe displacement, and the other is the variation of step height measurement when the geometry of the edge is imperfect. Therefore, proper measurement of a step height must take into account:
To relate the change in c to y requires bλ to be known, so the probe has to be calibrated. This might sound like a snag but it turns out not to be so because it is possible to calibrate capacitance probes over relatively large displacements and then to interpolate down to small displacements with the same percentage accuracy. So the wedge is moved over its whole range yielding a height change of about 90±0.25 µm. The wedge angle is calibrated by using interferometry. The ultimate resolution of this technique has been claimed to be 2 nm. Obviously there are other ways in which a step or vertical movement can be calibrated. The lever and wedge are just two of them. Of these the lever is much the cheaper and simpler but not quite as accurate. How does the lever work? If L is the length of the long arm engaging the gauge blocks and 2d the displacement of the unit from one stylus operating position to the other, the lever reduction is d/L. As an example, L is 100±0.1 mm and 2d is measured accurately with a 25 mm or 10 mm gauge block G. The reported accuracy of such a technique is much better than 1%. But again the problem is that much smaller surface
1. The geometry on both sides of the step. 2. The effect of fringe dispersion by imperfect geometry. 3. The nature of the “method divergence” between optical interference and stylus methods. 4. The realization that the accuracy of the height value is ultimately limited by the surface roughness on both sides of the step.
5.9.2.5 Comparison between Stylus and Optical Step Height Measurement for Standard Instruments This last point limits the overall accuracy of transferring a step height calibrated by interferometry to a stylus measurement [17, 18]. Thus examination of the system of measurement for a stylus and an interferometer can be made using Figure 5.17. Comparison between stylus instruments and optical instruments has been carried out by the PTB in Braunschweig [19]. Two useful optical methods were included in the investigation: the phase shift Mireau-type interferometer and the Linnik interference microscope. Measurements were carried out on very smooth rectangular-shaped groove standards with depths from 3 µm down to 60 nm, just inside the nanometer range. The steps were obtained by evaporating
446
Handbook of Surface and Nanometrology, Second Edition
aluminium onto a zerodur plate using a rectangular aperture. Successively opening the aperture led to six steps forming a staircase of steps. Due to the limited lateral resolution of the plane wave interferometers, it was found to be impossible to use the same lengths of profile as the stylus and interference microscope. Also high waviness of the upper step levels resulted in differences between the interferometer and the interference microscope. It is not clear whether relocation was used in this exercise. Table 5.3 shows the results taken with the Nanostep stylus instrument, the Zeiss IM/Abbé, the Zeiss IM/UBM and the Wyko IM (interference microscope). The deviations expressed as a linear function of groove depth d came within the estimated instrument uncertainties. At the 2σ level these were: Nanostep 1.5 × 10 –3 d + 0.7 nm, Zeiss IM/Abbé 1.5 × 10 –3 d + 4 nm, Zeiss IM/UBM 1.5 × 10 –3 d + 1.5 nm, and the Wyko IM 1.5 × 10 –3 d + 4 nm. The constraint part is added to take into account the higher deviations at smaller grooves. Among these, the stylus method has smaller uncertainty probably because the stylus tends to integrate random effects. The experiments showed that the methods had gratifyingly close agreement as can be seen in the table. However, problems of surface contamination and phase jumps caused by the influence of varying extraneous film thicknesses, especially in the bottom of the grooves caused problems with all the optical methods as discussed in Chapter 4. Polarization problems were not addressed.
Here the height h of the step measured interferometrically as a function of the fringe spacing t and dispersion Δ is
Stylus path
∆ λ h = n + . t2
(5.52)
The value of the fringe n can be obtained by counting the fringes or by use of the method of “exact fractions” [20] using different wavelengths of light. The variability of h as a function of Δ and t can be obtained using the propagation formulae given in Section 5.6 Equation 5.21 as 2
2
∂h ∂h var(h) = σ t2 + σ 2∆ . ∂t ∂∆
(5.53)
If the mean t and mean Δ are t and ∆ , the mean percentage error is 1/2 2
2 2 δh δh 1 σ σ . = = h h 1 + nt ⁄ ∆ ∆ t
(5.54)
This is again obtained from the error formulae making the assumption that σΔ = σt = standard deviation σ of the measurement. Errors due to geometry rather than simple fringe measurement errors shown in Equation 5.52 can be incorporated by expanding the expression s = t/Δ into two terms of Taylor’s theorem, yielding
Fringes
∆
1/ 2
2 δh σ h d(δs) = 1 + , h h d (s )
t
(5.55)
h
where the first term represents Equation 5.54 and the addition is the term arising from geometry and lateral averaging problems. Errors in step height of 0.1λ/2 are not unknown. Generally the variation in height is strongly dependent on the particular geometry and its slope relative to that of the
Figure 5.17 Calibration relative to fringe pattern.
TABLE 5.3 Measurement with Different Instruments Groove Number 8 7 6 5 4 3 2 1
Nanostep, nm 2803.5 1838.8 1407.8 1001.9 485.9 240.5 95.4 61.2
Zeiss IM/ Abbé, nm
Zeiss IM/ UBM, nm
Wyko IM, nm
Mean Value, nm
s, nm
2806.9 1636.3 1407.4 1004.1 484.8 236.9 95.0 59.5
2805.6 1838.4 1407.1 1003.9 488.5 242.1 96.7 61.7
2799.1 1843.2 1402.9 1003.3 487.5 245.2 93.2 58.8
2803.8 1837.8 1406.3 1003.3 486.7 241.2 95.1 60.3
3.8 1.4 2.3 1.0 1.6 3.4 1.4 1.4
447
Standardization–Traceability–Uncertainty
reference surface. Geometrical errors in the lower part of the step and in the reference surface would also contribute to the total uncertainty of the height measurement. The effects of lateral averaging are particularly important if the interferometer magnification is at 10 × or less. Practical verification of the typical step error for stylus instruments has indicated [18] that the error is about equal to the Rq value of the surface texture. Remember that the need for a step to be calibrated is so that the magnification of the pick-up can be determined.
5.9.3 Some Practical Standards (Artifacts) and ISO Equivalents 5.9.3.1 Workshop Standards For convenience in workshops, there are portable standards made in much the same way as the stylus calibration standard. There are variants naturally, but one such standard for measuring the vertical magnification of a surface texture instrument is shown in Figure 5.18. This was the 1940s standard developed by the then Taylor Hobson and is probably the first practical standard The standard consists of a ruled or etched groove which is calibrated. The material used is often quartz but can be made from glass and chrome plated to make them suitable for use as workshop standards. There may be three “lands” but one preferred method is to use only the center height and the adjacent two valley bottoms. To minimize the possible noise values only a small region with each of the lands is used, such as R1, R2, R3 in Figure 5.18. The difference between standards used for optics and those used for stylus methods is that the former usually Coarse
(a)
involve the metalizing of the surface to ensure high and uniform reflection from both the upper and lower levels of the step or groove. Flat-bottomed grooves have the advantage that the calibration of magnification is not affected by the quality of the tip. Moreover, they tend to ensure that the surface quality of the two levels is identical, which is as desirable for the optical method as it is for the stylus. The important thing to remember about such standards is that the instrument is being calibrated as a system; from front end to recorder. This is vital to maintain the high fidelity needed and is so easily lost when dealing with workshop quantities. Automatic calibration can be achieved by having a fixed movement programed to cover the regions of R1, R2, and R3. These height values would have a best-fit line fitted through them to get the calibrated height value. Such standards are now incorporated into the international scene and a brief description condensed from an article by Leach [21, 22] and ISO [9] is as follows. 5.9.3.2 ISO Standards for Instrument Calibration Type A [9] are the successors of that shown in Figure 5.18 and the formal diagram is shown in Figure 5.19. Type A2 is shown it has rounded edges instead of sharp ones. It is not obvious why this is necessary except perhaps to prevent debris being trapped in the corners of the grooves! Type B standards are used to measure the stylus tip. There are various types namely B1 which has narrow grooves supposed to be sensitive to the stylus tip dimension, B2 two grids of equal Ra value, one sensitive to the tip dimension and the other not sensitive and type B3 which has a fine protruding edge where the radius and apex angle must be smaller than the radius and apex angle of the stylus being measured. It is Fine
Practical
Glass or quartz
(b)
R2
R1
Figure 5.18 Instrument height calibration artifact.
Calibrated height from regions R1, R2, and R3
R3
Etched or ruled marks
448
Handbook of Surface and Nanometrology, Second Edition
3w w 3
w 3
w
p B
A w 3
C
Figure 5.19 Type A calibration standard. (From ISO 5436-1. Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measures, 2000. With permission.) (a)
(b)
(c)
Figure 5.20 Standard waveforms, class C, for amplitude and spacing characterization of instruments: (a) sine wave standard, (b) sawtooth standard, (c) cusp-arcuate standard. See also Figure 5.31.
not clear whether those artifacts shown in Figures 5.8 and 5.9 which can be used to also estimate the shape of the stylus as well as the tip dimension are included in this B category [9]. Type C artifacts are used to measure the vertical magnification and horizontal characteristics of instruments. As seen in Figure 5.20, they are of the form of a series of repetitive grooves of similar shape having a low-harmonic content. There are four subgroups: type C1—sine waves, type C2—triangular waves, type C3—sine or triangular waves with truncated peaks and valleys (although there are some which have only the peaks are truncated), and type C4—arcuate waves. These artifacts are also used to help to verify the filter characteristics of the instrument. See Section 5.9.5. Although they are made with one period, they can cover a small range of frequencies by changing the angle at which the stylus traverses the artifact i.e., instead of the yaw angle being zero it can be changed to a real value to increase the apparent pitch. If the angle through which the standard has been turned is θ and the pitch d the effective pitch becomes d sec θ. Type D are used to verify the overall performance of an instrument. These have an irregular profile in the direction of traverse that repeats in the horizontal direction usually after five sampling lengths. These artifacts are often made by creep feed grinding. Type E artifacts are used to verify the form capability of an instrument or the straightness of a datum slideway. They are in two groups one, E1—a spherical dome shaped artifact
that is characterized by its radius and Pt, and type E2—a precision prism characterized by the angles between the surfaces and Pt on each surface. The aforementioned calibration artifacts are principally intended for use in calibrating and verifying the performance of stylus profile instruments. Although some of these can be adapted for use with optical probes there needs to be a lot more work to be done before equally satisfactory artifacts are available for use specifically for use with optical profilometers. See Wilkening and Koenders [22] for a review of the situation. It is in general more difficult to achieve optical artifacts than mechanical ones because of the much more complicated interaction between electromagnetic waves and the surface than it is between the solid stylus and the surface. Another type of surface artifact which is attracting recent attention is that for measuring over an area rather than just profiles. These are called aerial standards. Leach [23] has been developing various types of artifact to meet this need but his and others [22] still require practical acceptance. ISO/FDIS 25178-701 2007 goes someway to describing suitable standards [24]. These are in six groups: Type ER—These have two or more triangular grooves which are used to calibrate the horizontal and the vertical amplification. They are characterized by their depth, the angle α between the flanks, and the intersection line between the flanks. Type ER come in two forms: ER1—two parallel grooves as shown in Figure 5.21 having measurands as groove spacing l and d. ER2—rectangular grooves as shown in Figure 5.22 having measurands l1and l2. θ, the angle between the grooves. ER3—circular grooves as shown in Figure 5.23 having measurands Df and d. Type ES—sphere/plane artifacts for measuring vertical and horizontal amplifications and the xy perpendicularity, the response curve i.e., the spatial frequency response and the geometry of the stylus. The measurands are the largest distance of a point of the sphere to the plane P, d, the radius of the sphere Sr, and the diameter of the circle obtained by the intersection between the sphere and the plane P, Di given 2 by Di = 2 Sr 2 − ( Sr − d ) . See Figure 5.24. Type CS—contour measuring standard shown in Figure 5.25 are used for measuring overall calibration in one horizontal axis of the instrument. The measurands are the radius of the arcs of the circle, the distances l1…ln between the centers of the circles and/or the summits of the triangles with respect to the reference plane, and the heights h1…hn between the centers of the circles and/or the intersections of the flanks of the triangles. Type CG—X/Y cross grating standards which are characterized by the average pitches in the x and y axes, and the angle between the x and y axes. This type of artifact comes in two forms: Type CG1—X/Y cross grating shown in Figure 5.26 which are used to for measuring the vertical and the horizontal
449
Standardization–Traceability–Uncertainty
l
P
α
d
Rf Key: d l α P Rf
Depth of grooves Distance between grooves Angle of the groove flanks Reference plane Groove bottom radius
Figure 5.21 Type ER1 parallel groove standard (From ISO 5436-1. Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measures, 2000. With permission.)
1
L1
d
p 0 Rf
L1
2
Key: d L1, L2 1, 2 θ P Rf
Depth of grooves Groove spacings Symmetry lines of parallel grooves Angle between the grooves Reference plane Groove bottom radius
Figure 5.22 Type ER2-recangular groove standard (From ISO 5436-1. Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measures 2000. With permission.)
Df
d P
Rf Key: d Df P Rf
Depth of grooves Diameter of the groove Reference plane Groove bottom radius
Figure 5.23 Type ER3-circular groove standard. (From ISO 5436–1, Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measures, 2000. With permission.)
magnifications and the xy perpendicularity of the instrument. The measurands are the average pitches in the x and y axes lx and ly and the average angle between the x and y axes. Type CG2—X/Y/Z crossed gratings shown in Figure 5.27 are the measurands same as those for type CG1 with the addition of the average depth of the flat-bottomed pits. There are other possibilities for measuring and calibrating height displacement as well as other dimensional features, such as roundness, at one and the same time. One method is to use beads, but the technique is at present limited to optical microscope calibration. However, it may be that in the future harder materials will become available and the method extended. At present the arrangement is to manufacture small precise beads made out of styrene monomer and which swell around hydrocarbon seeds in water [25]. Spheres of 9.89±0.04µm have been made in space and represent another possible way of calibrating small sizes. The basic problem is that the spheres are soft and only suited to
450
Handbook of Surface and Nanometrology, Second Edition
viewing. Fortunately for surface metrologists it is easier to make small balls than large ones, so 1µm calibrated spheres may be available soon. Large quartz and glass balls of about 20 mm are now being used to calibrate the amplitude of roughness and form. The use of space frames, such as a tetrahedron ball frame or a rack of balls with cylinder ends [26] to calibrate the volumetric accuracy of coordinate-measuring machines will not
be considered here. Neither will the use of interferometers directly [27, 28]. These are well known and not especially relevant to surface metrology calibration.
P
S d
5.9.4 Calibration of Transmission Characteristics (Temporal Standards) This involves the transmission characteristics from the input, that is the stylus or probe, through to where the output is registered. Basically what is required is how the instrument sees waves of a given length and communicates them, whole or in part, to the storage device or to be acted on as part of an adaptive control loop in a bigger system. θ
LY Sr
Di
X Key:
Z
d S Sr Di
Distance from the top of the sphere to the plane P Part of a sphere Radius of the sphere Intersection diameter
P
Datum plane
P
Figure 5.26 Type crossed grating CG1. (From ISO 5436-1, Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measure, 2000. With permission.) H3
H2
R1
Lx Key: LX Pitch in the X axis LY Pitch in the Y axis Angle between the X and the Y axes θ
Figure 5.24 Type ES-sphere plane standard. (From ISO 5436-1, Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measure, 2000. With permission.) H1
Y
α H4 α
L1 Key: Ri α P L1,...,Ln
R2
L2
L3
Di
Rf
Radius of the arcs of circles Angle of the triangles Datum plane Distances between the different patterns
H1,...,Hn Heights between the different patterns with respect to the datum plane P
Figure 5.25 Type CS-contour standard. (From ISO 5436-1, Geometrical product specification—Surface texture: Profile method— Measurement standards—Part 1 Material measure, 2000. With permission.)
451
Standardization–Traceability–Uncertainty
LY
Filter
Amp
Stylus
Output
Vibrating table Monitor
d
Driver Key: LX LY θ d
LX Pitch in the X axis Pitch in the Y axis Angle between the X and the Y axes Depth of the pits
Crystal
Power
Figure 5.28â•… Vibrating-table method of instrument calibration. θ
Figure 5.27â•… Type crossed grating CG2. (From ISO 5436-1. Geometrical product specification—Surface texture: Profile method—Measurement standards—Part 1 Material measures, 2000. With permission.)
There are a number of ways of approaching this, but some involve the direct use of time and others the indirect use. Consider first the direct use. This was originally needed because the filters, transducers, etc., are analog, that is temporal in response. This situation is changing rapidly because now the filtering and metering are carried out digitally and so have to be dealt with differently. However, the basic idea is to evaluate the transfer function of the system. This concept assumes a linear system. Unfortunately there are occasions when the system is anything but linear. One such instance is when the stylus bounces off the workpiece because the traversing speed is too high. Another is because the stylus itself acts as a non-linear mechanical filter if it does not penetrate all the detail. Having said that it is still possible and necessary to have a clear idea of the metrology system response, not only for calibration purposes but also to enable it to be incorporated into the larger manufacturing process loop. There are two basic ways of checking the transmission characteristic. Both involve oscillating the stylus with a constant amplitude and waveform through a sufficient range of frequencies to check all those wavelengths which might be important on the surface. One way is to use a vibrating platform, and the other is to use a periodic calibration surface. The former will be considered first. The basic set-up is shown in Figures 5.28 and 5.29. A vibrating platform energized from a low-frequency oscillator has been used. The basic problem is not that of monitoring frequency or even waveform, but that of keeping the amplitude of vibration constant over the range of frequencies. Notice that this type of calibration has to take place with the skid removed or not touching the vibrating table, otherwise no signal is seen in the amplifier. A number of ways have been devised to monitor the amplitude of vibration. Originally a small slit was put on the table and this was illuminated by a light source. The slit or flag impedes the amount of light falling onto a photodetector. Depending on the output the voltage into the drive circuit of
Stylus Moving table Driving coils Mirror Reference Laser
Comp Detector
Figure 5.29â•… VNIIM calibrator using interferometer. (From Barash V. Y. and Uspensky Y. P., Measurement Techn., 2, 83, 1969 (in Russian); Barash V. Y. and Resnikov A. L., Measurement Tech., 3, 44, 1983.)
the vibrator is adjusted to keep the movement amplitude constant. This turns out to be less than satisfactory. The preferred way to monitor the displacement is to use an interferometer [27, 28] similar to Figures 5.28 and 5.29. In this the armature of a moving coil is used for two purposes. The upper face is used for the table and the lower face for the mirror of the interferometer. There are endless variations on this theme, but one is that the reference mirrors are slightly tilted so that fringes move across the face of the detector. Alternatively the detector could be a simple counter. Various ways have been used to enhance the sensitivity of such a device when the movement has been required to be less than λ/2, for example, the use of Bessel function zeros [28] when more complex optics are used. It seems that this technique will ultimately be used for the overall calibration of instruments for both amplitude and frequency. Laser interferometers have been used to measure both step and groove heights in this system [29] (Figure 5.30). Despite being a method which is easily controllable, the vibrating-table method has two serious disadvantages. The€first is that it is difficult to transport and therefore it cannot be used as a workshop method. The second is that it does not truly measure the system characteristics because the pick-up is not
452
Handbook of Surface and Nanometrology, Second Edition
Laser
Diffuser
Photomultiolier
Slit and mask
Displacement meter
Vibrating mirror
Amplifier
Specimen Reference mirror
Figure 5.30â•… Step and groove measurement. (From Hasing, J., Werkstattstechnic, 55, 380, 1965.)
traversing horizontally when the frequency response is being checked. The problem of dynamic calibration of scanning probe microscopes (SPM) is the same as for surface texture instruments only on a smaller scale. Very little has been reported. The following technique addresses these two problems.
(a)
(b)
5.9.5â•…Filter Calibration Standards The gradual emergence of substantially sinusoidal instrument calibration standards has been leading toward a simpler method. The idea of a portable standard is attractive and has been used to calibrate the meter for years. The question is whether a similar type of standard can be used to check the transmission characteristics. Basically a specimen such as a category C calibration artifact mentioned earlier of suitable spacing of say, 250 µm and 2.5 µm Ra is traversed preferably with a skidless pick-up at various angles giving effective spacings from 250 to 2500 µm (i.e., about 10:1 range). If the angle through which the standard has been turned is θ and the pitch d the effective pitch becomes d sec θ. The output, which has only to be expressed as a percentage of the maximum indications, should fall for each transmission value within the specified range of the standard. The method can be used with skid-type pick-ups providing that the skid has a long enough radius both along and across the direction of traverse to make the skid error negligible (> τ for practical cases. Here, to avoid mathematical problems AT(τ) must be zero for τ = > L. Therefore 2 var ( P ) ≈ L
L
∫ A ( τ ) d τ ≈ 2σ T
T
2
0
τC . L
(5.111)
This says what has already been said in words. The variance of a parameter for measuring roughness is inversely proportional to the number of degrees of freedom in the assessment length, that is L/τc where τc is the correlation length of the autocorrelation function AT (τ) and σ2T is the variance (= AT (0)). The point of importance here is that only τc is needed, not the shape of the autocorrelation of T(z(x)) [65]. As an example consider the Ra value of a roughness profile, the transformation is given by T ( z ( x )) = z ( x ) .
(5.112)
If the surface is Gaussian the value of σT is found from ARa(τ). Now ∞ ∞
ARa ( τ ) =
∫∫
∞ 0
z1z2 p ( z1z2 ) dz1dz 2 −
0 0
0 ∞
0 0
∫∫ ∫∫ ∫∫ −
0 −∞
+
−∞ 0
−∞ ∞
... (5.113)
L
1 P = E[P] = T (z (x ))dx , L
∫ 0
(5.107)
AR a (τ) =
2σ 2 {[1 − A(τ)]1/ 2 + A(τ) sin −1( A(τ)) − 1} (5.114) π
473
Standardization–Traceability–Uncertainty
giving π σ T2 = Ra2 − 1 = σ 2 1 − 2
2 . π
2 = Rq2 1 − . π
Hence
var ( Ra ) = ( π − 2) Ra2τc /L.
where A′ is normalized with respect to R2q of the profile and RaD is the discrete Ra. For the case when q L, A(τ) = 0. Also the one and only condition requires that grain center is within τ–L and L and that no other grain encompasses either the origin or τ positions. Taking the probability and constraints into account C p = E ( f ( x ). f ( x + τ))exp(− λ p ( L − τ)).λ p
(6.27)
( L − τ).exp− (λ pτ).exp(− λ pτ)
which is the familiar form of the Poisson distribution. For K = 0 Equation 6.27 becomes
C p ( τ) =
(6.28)
and for K = 1, p(1) = λτ exp (–λτ), and so on. Similarly if the distribution of even widths (rather than the positions at which they interact with the surface) is uniform,
= AG (τ).(λ p L ).λ p ( L − τ)exp(− λ pτ) = AG (τ).exp(− λ pτ).k.
(6.32)
where k is a constant dependent on λp and L, both of which are closely related to specified process parameters.
522
Handbook of Surface and Nanometrology, Second Edition
Hence the profile autocorrelation is the typical grain correlation modulated by the density of grains λp. Equation 6.32 encapsulates the Poisson, hence Markov feature, so the shape and size of the typical grain impression and the exponential envelope character of the correlation function explicitly demonstrate the dependence on actual process parameters. Furthermore if the typical correlation length is Ccp then the depth of cut d is approximately given by d = Acp2 /8 R where R is the grain size. For example, for an Acp of 20 µm and R of 100 µm, d~0.5 µm. A mathematical model enables investigations and simulations to be carried out on a number of processes. Thus C (τ) =
K′ exp ( −ζw0 τ ) 4ζw03 cos ( w τ ) + ζ w0 sgn τ sin ( w τ ) a a wa
C (τ) =
∫
L−τ
0
z1 ( x ) z2 ( x + τ)dx /(L − τ)
(6.33)
(6.34)
Taking a simple case of a square impression (i.e., square cut) to illustrate the point, this yields
C (τ) = σ (1− | τ| /L ). 2
(6.35)
where σ2 is the (RMS)2 of the cutting shape and L is the width. Letting τ/L = τ– and σ2 = 1, then
C ( τ ) = (1 − τ )
function associated with an independent grain impression and p(L) is the probability density of an impression of size L then a useful expression is C (τ) =
∫
Lmax
0
C ( τ, L ) p ( L ) dL where C ( τ, L ) = 0 for L < τ.
(6.37)
For machining processes in which the unit event may be considered to be additive or subtractive without complex effects occurring, such as material movement, the situation is different from that in which machining takes place. This might happen in spray deposition for the addition of material and as in electrodischarge machining (EDM) for removal of material. If the particles are of about the same order in size and the interactions linear then the criterion for separated events as such need not be a constraint. The simple convolution explains the process. Hence, the build-up of a surface layer or the erosion of it can be represented by the profile z′(x) given by
z ′( x ) = z ( x ) ∗ h( x )
(6.38)
where h(x) is the shape of the individual impression (which might be a composite shape as in Equation 6.38) and z(x) is the spatial impulse train at which the events hit the surface). Under these conditions [57]
(6.36)
If the impressions are not interacting with each other but represent a succession of indentations across the surface. They comprise of an impulse train, as in Figure 6.65, an indentation shape Figure 6.66, and the convolution of the two to give the generated surface shown in Figure 6.67. This exercise gives a clue as to what happens in practice when the impressions left by the grains interact as the process removes metal. When interactions between the unit machining events occur, the surface obviously becomes more complex but is not intractable! In general if C (τ L) is the autocorrelation
Figure 6.65 Impulse train.
Figure 6.67 Generation of abrasive surface—first surface square grit, second surface triangular grit.
Where the average frequency is wa. ζ is a measure of damping in the cutting process. Consider the next feature, the impression the grain has on the surface. It is informative to consider what the autocorrelation function of a single impression might be. Thus if C(τ) is the ACF of the single impression on the surface, it is given by
Figure 6.66 Grit impression.
C (τ) = λ2
∫
∞
h ( α ) h ( α + τ ) dα
−∞
(6.39)
where λ is the density of events per unit distance. Note that the integrand in Equation 6.39 represents the autocorrelation function of the unit event, for example the single sputtered particle or the individual crater. The interaction between them completely disappears! In other words, in this special case, shown in Figure 6.68, the autocorrelation function of the unit event is preserved throughout the whole process. At the same time, for this case the height distribution is classically Gaussian. The situation is not so straightforward in grinding because the interaction between furrows cut by the grain is complex. However, it is useful to demonstrate the principle above because, even for grinding and other abrasive processes, a remnant of the average unit cutting event is always present in the ACF, together with other features representing the aspect of the machine tool [58].
523
Surfaces and Manufacture
Height
z
Unit event –each of same nominal volume
f(y) f(x)
x
Figure 6.68 Build-up of surface by superposition of unit events.
An important message that emerges from this part of the section on grinding is that in very complicated processes (and in simple ones for that matter) simulation and theoretical methods can be very helpful in pointing out certain characteristics; features can be explored which are difficult to investigate practically. From the exercise here, for example, it has been possible to show how the ACF remembers the detail of the unit event of machining, even after many revolutions of the wheel. This illustrates the potential in the idea that the surface roughness remembers what is happening in the process. It can therefore realistically be used as a basis for process investigation (e.g., for wheel loading and consequent automatic dressing), but it has ultimately to be checked practically. There have been many attempts to link grinding parameters with the surface texture. They have usually ended up with products of non-dimensional factors. Very little correlation has been observed between roughness parameters and grinding parameters. Recent work [59] suggests that there may be some correlation between waviness parameters and grinding parameters especially the waviness height Wt, which appeared best for distinguishing process variables. A scale sensitive, fractal-based parameter smooth–rough crossover (SRC) seemed to be best for relating the wheel and workpiece topographies but, in general, a direct relationship seems impossible. It has been mentioned above that the statistics governing abrasive processes are basically Poisson/Markov rather than fractal or scale sensitive fractal. In fact, if the grains of a grinding wheel are shown on circles representing the path of the grains, the f(x) direction corresponds to Markov whereas, at any one position of y, Figure 6.69 Machine tool and process kinematics. The profile height is the envelope of all the high-cutting grains on the wheel. So f(y) at any time is the resultant effect of all high grains (in that x position on the wheel) which produce the surface. This is not Markov nor is it fractal. It is a “martingale,” which is a statistical “game” condition in which all events up to now (the present) produce just one value. It is analogous to Minkowski subtraction in morphology—see reference lines Chapter 2. This is the equivalent of the envelope, which is the profile. Plunge and creep feed grinding are good examples of this condition.
y x
Figure 6.69 Machine tool and process.
Hence in very general terms the value f(y) is a martingale type of statistic in which the present is decided by all that went before and the f(x) is the Markov process in which what occurs depends only on the present value. The two statistical statements for f(y) and f(x) have a certain symmetry even if they are difficult to deal with. Variations in f(y) represent the movement of the center of the wheel in the y direction i.e., the waviness perpendicular to the profile f(x) which is a machine tool characteristic. K variations in f(x) are concerned with the process.
6.4.9 Honing Honing is a finishing operation in which stones are used to provide a fine roughness of the order of 0.5 µin Ra often on internal cylindrical bores. The stones, which are usually made of aluminum oxide (in the form of what is called sticks), comprise grains of size 30–600 µm joined by a vitrified or resinoid bond. Honing is usually employed to produce an internal bearing surface for use in the cylinders of diesel or petrol engines. This involves the use of the hone as a finishing operation on the surface that has been previously ground. For internal honing the amount of material removed is small (~2 µm). Traditional honing methods employ three or more stones (six is typical) held in shoes mounted on a frame, which can be adjusted to conform to the correct bore size, the shoes being arranged to follow the general shape of the bore. In operation the cutting speed is about 1 m s–1 with a pressure of 0.5 Nm–2 In practice, the honing tool is allowed to dwell for a small amount of time at the end of its traverse before retraction. This causes all sorts of problems of interpretation of the surface roughness of cylinder bores because the lay changes direction rapidly. Before more critical evaluation of honing is undertaken it should be mentioned that another similar process is superfinishing. This is very similar to honing but the stones are given a very small axial vibration, and it is used on external cylindrical surfaces. In honing and superfinishing the process is often used to remove the effect of abusive grinding. This is the situation in which the machining—in this case
524
Handbook of Surface and Nanometrology, Second Edition
grinding—produces thermally induced stresses in the surface left by the main finishing process (grinding). A picture of a typical honed surface is shown in Figure€6.70. An areal picture of honing can be shown in another way using diffraction by a laser throwing light at an angle through the cylinder (Figure 6.71). Ra values ranging from 5 µm down to 0.3 µm can be generated by the honing method. The picture of honing as shown in Figure 6.70 is somewhat idealized. This picture is probably typical for the middle of a cylinder bore, for example. There are, however, big differences at the end of the stroke because of the change in direction of the hone. Usually the hone describes a figure of eight at the stroke end. This has the effect of destroying the continuity of the oil channel scratches and roughening the load-carrying lands. For this reason it does not always pay to allow the piston to utilize all the bore.
6.4.10â•… Polishing and Lapping
Figure 6.70â•… Plateau honing.
Honing angle
1.0
Fixed grain
Loose grain
Fixed grain
Fixed grain 4 (nm)
Removal µm/m
Figure 6.71â•… Diffraction method to resolve honing angle.
Polishing involves the use of free grains usually suspended in some liquid rather than bonded into a wheel as in grinding. It is invariably a process associated with generating nominally ideally smooth surfaces. This process is usually defined as one in which the microroughness is commensurate with the diameter of the molecules, or of the same scale as the lattice spacing. One could argue that the smoothest surface is the surface of a liquid in an absolutely undisturbed state. A surface approaching this ideal is that of glass when congealed in a state of rest. The faces of monocrystals grown from salt solutions present another example of a surface with a fine structure. Of all the methods of treating a surface already mentioned, such as turning and grinding, the smoothing of the surface is at the expense of plastic deformation. The essential mechanism of fracture in plastic materials is the removal of shaving and, together with this main phenomenon, the plastic deformation of adjacent particles. However, the main phenomenon in the machining of friable materials is the development of cracks within the mass of material, which penetrate to some depth below the surface and intersect, thereby producing a mechanically weakened layer easily fractured by the repeated action of the abrasive. This is the microfracture mode mentioned earlier. To a large extent the laws operating in the case of plastic materials, usually metals, are different to those operating during the treatment of glass and ceramics, certain rock crystals, and to a lesser degree metals such as cast iron, germanium, etc. In polishing there is a problem of determining the true mechanism of material removal and a number of theories have been advanced. It is not the intention here to go into great detail. However, it is informative to see how one outcome can result from a variety of wholly different mechanisms. The earliest polishing theories apply to that of glass polishing. French [60] suggested that an abrasive particle produces a pressure on the surface of glass which has a very small resistance to fracture in comparison with its resistance to compacting, and that this causes cracks to develop along planes of maximum shear (which are inclined to the axis at an angle of about 45°) (Figure 6.72). Cleavage commences along the planes of maximum shear, but due to the
Figure 6.72 Removal rate and surface finish. (From Touge, M. and Matsuo, T., CIRP Annals, 45, 307, 1996.)
fact that the particles act like a wedge, the shear plane is diverted upwards, which leads to a conchoidal fracture. Preston [61] considered the action quite differently. In his opinion the grain rolls between the tool and the glass. Owing to the irregular shape of the grains they produce a series of impacts one after the other, as a result of which conical cracks are generated. These cracks he called "chatter" cracks. They intersect each other and, with repeated action by the abrasive grains, pieces are pulled out. He therefore concluded that there would be an outer layer showing relief and under it a cracked layer. The direction of the cracks and the resulting stresses in crystals differ for differently orientated faces, and differ markedly from glass in, for example, lithium fluoride.

The pronounced difference in the smoothness of ground and polished glass was responsible for the theory that these two processes have different mechanisms, and for the development of different theories of polishing. The real question is whether a polished surface is qualitatively different from a finely ground surface, or whether it is possible to develop conditions—as in ductile grinding—under which a transition from a ground surface to a polished surface with a gradually decreasing value of roughness can be detected. It is now more or less held that mechanical removal plays an important role in polishing, as does thermal surface flow (as discussed by Bielby [62]) and the formation of a silica-gel surface (in glass) by hydrolysis [63]. A polished surface is characterized by the absence of cavities and projections having dimensions in excess of the wavelength of light. In the case of polished surfaces the projections are not greater than one or two molecular layers [65].

It is clear that while splintering, as described by Preston, plays a large part in the grinding and polishing of glass (and ceramics), plowing or ductile movement of material generally plays only a small part because of the very hard and brittle nature of the material, so that in glass grinding, for example, even at light loads and with stiff machines there must be a considerable danger of splintering. These remarks do not apply to ductile grinding.

The issue basically reduces to the question of to what extent the surface of polished material is different from the bulk. It can be asserted, in accordance with French [60], that this is definitely a factor to be considered. One pointer is the fact that smooth-sided scratches called sleeks do exist on glass surfaces and are quite different from the deeper scratches which exhibit the Preston-type fractures. It has been pointed out that, apart from the property of reflecting light, polished surfaces exhibit other characteristics that differ from those of the bulk [64]. Thus polished surfaces sometimes exhibit higher strengths than the bulk (although this depends on how strength is defined) and increased mechanical resistance. Polished surfaces often have different thermal and electrical surface properties, reinforcing the belief that polishing is more than a simple mechanical process.
Surface melting was shown by Bowden and Hughes in 1937 to contribute to the polishing process in producing the very smooth surface. The essential factor here is not necessarily the relative hardness of the lap and the material; it is just as likely to be the difference in melting point. So far, however, there still does not appear to be a universally accepted mechanism to explain the smoothness of a few angstroms that is often obtained.

The terms polishing and, sometimes, smoothing are nominally the same as lapping. They are all basically the same process—the essential features being the presence of free grains in a slurry and the absence of heavy loads and high speeds. The mechanical movement needed is traditionally provided either by a machine or by hand, as in the case of polishing small telescope mirrors. It is to be hoped that the more controlled ductile grinding will replace this method. More will be said later on the philosophy of the machining method used to optimize a particular function. The question is whether the measurement methodology should follow the functional requirements of the surface or whether it should reflect the manufacturing process. Or should it do both?

One of the problems with complex processes such as lapping is that sometimes there are effects which have little to do with the actual process. For example, some work carried out in 1996 [65] attempts to relate removal rate and surface finish to the lapping process, yet the paper finds a very significant effect of the facing of the lapping plate on removal rate. The facing or truing of the plate acts as a turning operation in which the feed to the facing tool is an important factor. This is hardly a typical lapping parameter! Perhaps the most significant finding is the different effect of the fixed grains with respect to the loose grains. It is clear that loose grains dramatically change the removal rate but have little effect on surface finish. It is conceivable that the loose grains present sharp facets to the workpiece at all times, thereby enhancing removal, whereas the fixed grains, having high asperities, soon get smoothed and therefore have a limited effect on material removal. Quite why the surface roughness is constant is a mystery—it should decrease as the lapping sets increase! It may be that these results relate only to the materials involved (i.e., an Mn ferrite and diamond grains), but it is more likely to be symptomatic of the fine-finish lapping process.
6.5 UNCONVENTIONAL PROCESSES

6.5.1 General
So far the main processes have been discussed—the conventional machining methods. It should be emphasized that this chapter is not meant to be an exposition of the processes themselves, but a discussion of the way in which processes affect surfaces. In what follows, some of the unconventional methods and the geometry produced by them will be briefly considered. The word unconventional, though, is a misnomer
Figure 6.73 Mechanical, chemical, electrochemical, and electrothermal processes: ultrasonic machining (USM), water jet machining (WJM), abrasive jet machining (AJM), chemical jet machining (CHM), electrochemical machining (ECM), laser beam machining (LBM), electron beam machining (EBM), electrodischarge machining (EDM), ion beam machining (IBM), and plasma beam machining (PBM). (From Snoeys, R., Staelens, F., and Dekeyser, W., CIRP Annals, 35, 467–80, 1986.)
as many of the processes are now in common use. However, for clarity they will all be grouped together in this way. Within the term "unconventional" there are many divisions, some of which are shown schematically in Figure 6.73; the pictures speak for themselves [66]. It can be seen that the processes divide into mechanical, chemical, electrochemical, and thermal methods. In all cases the concept of a tool producing a chip is dispensed with: the surface is literally machined by contact with free bodies, whether mechanical, chemical, electromagnetic, or atomic. These methods have usually been developed to machine hard or otherwise unconventional materials. As a result, the texture has very much been a secondary consideration, and hence only a simple description of the techniques will be given here.
6.5.2 Ultrasonic Machining
A process that similarly uses abrasive grains in a slurry is ultrasonic machining. Here the surface generated is random, as in the case of polishing and grinding, but the significant difference is that there is no easy escape route for the abrasive (or the debris particles). The machining action is produced by the vibration of a shaped tool tip in an abrasive slurry, which forms a cavity of sorts—hopefully of the required shape—for a bore, or a shoulder in a bore, in the workpiece. A light static load of about 10 N is applied to hold the tool tip against the workpiece. Material is removed by the abrasive action of the grains in the slurry
Figure 6.74 Ultrasonic machining.
which is trapped in the gap between the tool and the workpiece. The tool tip imparts the necessary energy for metal removal by means of an ultrasonic vibrator which shakes the tool tip. It is arranged that the vibration mode of the tool ensures an antinode at the tip (Figure 6.74); the order m of the antinode is optional (m is an integer), the vibration fundamentally working in the cantilever mode. This has exactly the same effect as the wear mechanism that occurs in fretting, in which small vibrations trap debris between two mating surfaces. The frequencies used tend to be in the range 20–40 kHz, and the vibrating transducer can be piezoelectric, in which case the vibration is determined by knowing the velocity of
sound C in the piezoelectric material given by its density ρ and its elastic modulus E [69]. Thus
C = √(E/ρ).  (6.40)

Knowing C and given the exciting frequency f, the half-wavelength of the vibration can be found as

λ/2 = C/2f  (6.41)
to ensure that the maximum amplitude vibration ensues. Other forms of transducer include magnetostrictive devices which incorporate a piece of ferromagnetic material such as nickel.
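As a minimal numerical sketch of Equations 6.40 and 6.41 (the material constants below are typical textbook values for nickel, assumed for illustration rather than taken from this chapter):

```python
import math

def half_wavelength(E_pa, rho_kg_m3, f_hz):
    """Half-wavelength of longitudinal vibration (Equations 6.40 and 6.41):
    C = sqrt(E/rho), lambda/2 = C/(2f)."""
    c = math.sqrt(E_pa / rho_kg_m3)   # velocity of sound in the transducer material
    return c / (2.0 * f_hz)

# Nickel (magnetostrictive transducer): E ~ 200 GPa, rho ~ 8900 kg/m^3.
# At 20 kHz this gives a half-wavelength of roughly 0.12 m, which sets
# the resonant length needed to place an antinode at the tool tip.
print(half_wavelength(200e9, 8900.0, 20e3))
```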
6.5.3 Magnetic Float Polishing
Polishing can also be achieved using a similar slurry [65], but there is a difference, as seen in Figure 6.75. In this method the workpiece rotates (or reciprocates) against the abrasive material, usually SiC at up to about 40% by weight, contained in a magnetic fluid (a ferrocolloid). It has been verified that 0.04 µm Rt can be achieved using 4 µm SiC particles with a uniform force provided by the magnets. The polishing force on each grain is small because the abrasive floats. Also, the temperature at each polishing point is reduced, since the magnetic fluid is made thermally conductive. The potential use of this method is in making aspheric shapes as well as flat surfaces.
Figure 6.75 Magnetic float polishing.
6.5.4 Physical and Chemical Machining

6.5.4.1 Electrochemical Machining (ECM)
The components are immersed in the electrolyte as the anode, and burrs or surface skins are removed by means of an electric current [68]. The rate of removal depends on the electrochemical equivalent of the metal and the current density. In practice the workpiece, contained in a bath, is the anode and is connected to a dc supply. Also in the bath is the cathode, which is profiled as the negative of whatever shape is intended on the workpiece. As the cathode is lowered, the material of the workpiece is removed at a rate which depends on the cathode feed rate. One of the advantages of the technique is that, in theory at least, the tool does not wear due to electrolysis, although it can sometimes corrode. The electrolyte depends on the application, but often a dilute acid is used in preference to a salt. A typical configuration is shown in Figure 6.76.

Figure 6.76 Electrochemical machining.

The surface roughness generally is not very good. The factor which influences it most is polarization, that is, the flow and direction of flow of the electrolyte, which is usually present at quite a high velocity. The reason for this is set out in any production process book, but basically the requirement is that the gap between workpiece and tool should be as nearly as possible constant, and preferably small, so as to maintain dimensional accuracy. This equilibrium gap ge is determined from the relationship
ge = VK/J = EVK/(ρf),  (6.42)
where E is the electrochemical equivalent, K is the conductivity of the electrolyte, J = ρf/E is the equilibrium current density, V is the applied voltage, ρ is the work material density, and f is the tool feed rate. Equation 6.42 shows that the equilibrium gap is proportional to the applied voltage and inversely proportional to the feed rate, and that the equilibrium current density is proportional to the feed. In practice the value of K is not constant but varies as the temperature of the electrolyte changes; hence the gap increases as the electrolyte passes through it. So, to keep the gap constant, the temperature of the electrolyte needs to be constant, which implies that the flow should be as high as possible. This produces flow marks on the surface, particularly if there is any debris in the fluid. Furthermore, where small holes or gaps are being formed, a high electrolyte velocity is difficult to achieve. This can often lead to boiling of the electrolyte, which in turn detrimentally affects the surface roughness [69]. Usually the velocity of the electrolyte is about 50 m s⁻¹.

The form on the workpiece is very important, perhaps even more so than the surface roughness. In die sinking by ECM it is possible to produce a shape on the workpiece that is very nearly the negative of the tool. However, the relationship between tool shape and workpiece shape is not always straightforward [69].
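A minimal sketch of Equation 6.42 follows; the numerical values are illustrative assumptions only (a copper-like work material and a dilute electrolyte), not data from the text:

```python
def ecm_equilibrium_gap(E_equiv, K_cond, V_volts, rho, feed):
    """Equilibrium gap ge = EVK/(rho*f) from Equation 6.42.
    E_equiv: electrochemical equivalent (kg/C), K_cond: electrolyte
    conductivity (S/m), rho: work material density (kg/m^3),
    feed: tool feed rate (m/s)."""
    return (E_equiv * V_volts * K_cond) / (rho * feed)

# Illustrative values: E ~ 3.3e-7 kg/C (copper), K ~ 10 S/m, V = 12 V,
# rho = 8900 kg/m^3, feed = 0.02 mm/s.
ge = ecm_equilibrium_gap(3.3e-7, 10.0, 12.0, 8900.0, 2e-5)
print(f"equilibrium gap ~ {ge*1e6:.0f} um")   # of the order of 200 um
```

Doubling the feed rate halves the gap, which is why the feed normal to the surface matters so much for form, as discussed next.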
Figure 6.77 Relationship between workpiece and tool shape.
For example, if the tool is curved, as in Figure 6.77, the gap at point 1 is different from that at point 2, because the gap (Equation 6.42) is proportional to the feed normal to the surface, which is f cos α at 2 rather than f at 1. Hence unwanted form can easily be produced on surfaces that have been machined by ECM. As far as the roughness is concerned, flow marks very often show where the electrolyte has been. The surfaces are usually rough, of the order of 2–3 µm Ra.

6.5.4.2 Electrolytic Grinding
A variant on ECM is electrolytic grinding, which is used on very hard materials. In this the tool is a metal wheel which has abrasive grains in it. An electrolyte, which also plays the part of the cutting fluid, is circulated. When the wheel contacts the workpiece, grinding occurs in addition to ECM. The result is a very high rate of removal and a very fine surface roughness, in which the abrasion reduces the effect of polarization. Surfaces as fine as 0.02 µm Ra have been produced by this method. Also, because the electrochemical action is taking most of the metal away, the wheel does not wear as much as in conventional grinding. Apart from the flow-line effect often found, experimenters [70] have found that very high flow velocities considerably reduce the presence of short-wavelength components on the surface.

In some systems, especially in hole making rather than die sinking, the tool of copper or steel is insulated on the outside and just has a thin conductive tip. This tooling design affects the surface integrity and hence the fatigue and corrosion resistance of the material. Also, the relatively rough surface on the side walls is a result of the low current density in the side gap caused by the insulation of the tool. Under certain circumstances an improved tool having no insulation is used with an NaCl electrolyte. This can considerably improve the surface roughness as well as the overall reliability of the system [71].

6.5.4.3 Electrodischarge Machining (EDM)
This is a method used to machine very hard materials and uses the discharge of an arc from one conductor to another. Basically, when a voltage is applied between two conductors immersed in a dielectric fluid, the fluid will ionize if the potential difference is high enough. A spark is produced between the conductors, which will develop into an arc if the potential difference is maintained. The property of the arc that causes the removal of metal is its temperature. Typical values are between 5,000 and 10,000°C, well above the melting
Figure 6.78 Electrodischarge machining.
point of most metals. Local melting occurs, especially in the conductor connected to the positive terminal (Figure 6.78). The tool is lowered toward the workpiece in the liquid dielectric, usually paraffin or white spirit. When the gap is small the dielectric ionizes and the spark jumps across the gap; if the potential difference falls, the arc decays. A normal gap of about 25–50 µm is maintained by a servo motor whose demand signal is actuated by the difference between a reference voltage and the gap breakdown voltage. The sparking rate is limited by the pulse rate from the generator or, for very short pulse intervals, by the de-ionization rate of the fluid. The tool electrode is often made of copper or any other conducting material; however, for long life and high form accuracy, tungsten carbide is sometimes used.

In appearance the surface roughness looks like shot blasting because of the apparent cratering. This is because the spark does not always retain one position on the surface but wanders slightly, as is the nature of electrical discharges. Because of this action the surface is isotropic in character. The form obtained in EDM can be quite accurate (~5 µm). The surface roughness value is very dependent on the rate of machining: the two are conflicting requirements. If a good surface roughness is required then the metal removal rate should be small. Surface roughness values of 0.25 µm have been achieved, but only at low rates of material removal.

The use of water as opposed to oil as the dielectric in EDM [71] produces a poor surface roughness and poor subsurface conditions. These are mainly due to the fact that a thick thermally affected layer is produced; water is not efficient in getting rid of the molten metal on the surface. The removal of metal can be eased somewhat by the addition of organic compounds such as sugars, polyhydric alcohols, and their polymers to the water. However, with water the surface roughness is degraded: for example, a removal rate of 3 mm³ min⁻¹ with oil or paraffin produces a surface roughness of 15 µm Rt, whereas with water a figure of 40 µm results—a considerable worsening. Recent attempts to develop "dry" EDM are reported in Section 6.3.4.2.1.
6.6 FORMING PROCESSES

6.6.1 General
Forming as a means of improving the surface roughness seems an improbable method, yet it is possible and has been
used effectively in a number of different applications. One such method is called ballizing; another is swaging, as used, for example, in cleaning up the internal bore of a barrel in armaments.
6.6.2 Surface Texture and the Plastic Deformation Processes
Many of the processes used in engineering do not involve the removal or deposition of material. They rely on the redistribution of material from one shape to another. The first shape is easy to make and to distribute; the second shape is useful in function. An example is sheet steel rolls being transformed into car bodies by a forming process. The mechanisms involved are usually a compression together with some element of lateral flow. Sometimes the process is carried out under high pressure and temperature, sometimes under just one of these. The main point is that contact and flow are the essential mechanisms involved, and surface texture is a key feature of both. Some of these issues will be discussed in Chapter 7, and the areal possibilities (3D) have already been examined in Chapter 2. The matter of surface characterization has been highlighted by the need to deal with textured surfaces such as are now produced by lasers.

One recent attempt to explain the regimes in forming [72] is based on earlier work by Stout [73]. Basically, the mechanism of contact is broken up into three variants of the material ratio (Figure 6.79). During deep drawing, stretch forming, and similar processes the contact can be material/material and liquid/material. At any level the sum of the areas has to be equal to the nominal area. Areas in which lubricant is trapped are subject to hydrostatic pressure, in the same way that liquid is trapped in elasto-hydrodynamic and plasto-hydrodynamic lubrication in ball bearings and gears. In the figure, an area of void which touches a boundary of the nominal area can theoretically allow liquid to move out of the contact zone, so any pressure developed is hydrodynamic.
Figure 6.79 Cross section of contact zone.

Figure 6.80 Apportioning of areas. (From Stout, K. J. et al., The Development of Methods for the Characterization of Roughness in 3 Dimensions, Report EUR 15178 EN EC, Brussels.)
Surrounded voids that can entrap liquid are subject to hydrostatic pressure. The basis of the classification at any level is "material area ratio, open void area, and closed void area." Hence three curves describing this relationship can be plotted, as shown in Figure 6.80. This type of map is useful because it highlights the areal properties of real surfaces rather than relying on profiles. The feature which becomes revealed is the closed void, which is exactly the same type of feature as the "col" found when areal (3D) summits are being identified (see Chapter 2). The closed voids are usually referred to in terms of the volume enclosed. Areally patterned surfaces often have constant values of the contact and void areas as a function of height z.

The problem with this development is that it does not go far enough. Ideally the contacting surface should be included, because the map of areas can be misleading: areas of voids from both surfaces can interfere with each other. Another problem is that no thought is given to initial contact. This is determined by using elastic theory, within which the surfaces' asperities are important. The height properties of the surface under purely plastic deformation do not include the surface parameters usually specified. It should be remembered that, simple as the breakdown given above is in terms of the number of parameters, each parameter is a curve. To get numbers, some decision regarding height has to be made. In practice this should be determined by the actual application.

Values taken from the material ratio curve purporting to estimate the voids which carry lubricant have been reported [74]. At the same time, bearing parameters have been postulated to indicate load-carrying capacity. These have been applied to a number of surfaces. One, shown below in Figure 6.81, is the plateau honed surface used in the automotive industry. Some of these parameters are shown in Table 6.3; how they are defined is given in Table 6.4. They have been designated "functional parameters" because they are specific to an application.
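The open/closed void classification lends itself to a simple numerical treatment. The sketch below is an illustrative implementation only (the surface is synthetic and the function names are inventions of this sketch, not the report's): at a given height level it labels the void regions of a height map and calls a void "open" if it touches the boundary of the evaluation area, and "closed" otherwise.

```python
import numpy as np
from scipy import ndimage

def area_fractions(z_map, level):
    """Material, open-void, and closed-void area fractions at a height level.
    A void region is 'open' if it touches the edge of the map (lubricant can
    escape, so any pressure is hydrodynamic); otherwise it is 'closed'
    (trapped lubricant, hydrostatic pressure)."""
    void = z_map < level
    labels, n = ndimage.label(void)
    edge = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                     labels[:, 0], labels[:, -1]]))
    open_area = np.isin(labels, edge[edge > 0]).sum()
    total = z_map.size
    closed_area = void.sum() - open_area
    return (1 - void.sum() / total, open_area / total, closed_area / total)

rng = np.random.default_rng(1)
surface = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 8)
surface /= surface.std()
print(area_fractions(surface, level=0.0))  # (material, open, closed) fractions
```

Sweeping `level` through the height range generates the three curves of Figure 6.80.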
A large value of Svi indicates good fluid retention. For a good random surface—Gaussian, for example—this index is about 0.1, with

0 < Svi < 0.2 − (h0.8 − h0.05).

The levels 0.8 and 0.05 are arbitrary height values. A bearing-type parameter is the surface bearing index Sbi.

Figure 6.81 Multi-processed surface (e.g., plateau honed): the upper surface defines the run-in characteristics, the body of the surface defines the wear/life characteristics, and the valleys define the lubrication characteristics.
TABLE 6.3
Functional Parameters
• Surface material ratio
• Void volume ratio Svr
• Surface bearing index Sbi
• Core fluid retention index Sci
Of these, even a basic set has 14 parameters.
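A minimal sketch of how such height-based functional parameters might be computed from a measured height map is given below, assuming the definitions in Table 6.4 that follows (heights taken at the 5% and 80% material ratios, void volumes per unit area); the implementation details are this sketch's assumptions, not a standard.

```python
import numpy as np

def functional_parameters(z):
    """Illustrative Sbi, Sci, Svi from a height map, using heights at 5%
    and 80% material ratio and void volume per unit area below a level."""
    z = z - z.mean()
    sq = z.std()
    zs = np.sort(z.ravel())[::-1]           # heights, highest first
    n = zs.size
    z05, z80 = zs[int(0.05 * n)], zs[int(0.80 * n)]
    vv = lambda h: np.maximum(h - z, 0.0).mean()   # void volume per unit area
    sbi = sq / z05                           # surface bearing index
    sci = (vv(z05) - vv(z80)) / sq           # core fluid retention index
    svi = vv(z80) / sq                       # valley fluid retention index
    return sbi, sci, svi

rng = np.random.default_rng(0)
print(functional_parameters(rng.standard_normal((128, 128))))
```

For a Gaussian surface this returns Sbi ≈ 0.61, Sci ≈ 1.56, and Svi ≈ 0.11, the last consistent with the value of about 0.1 quoted above.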
TABLE 6.4
Some Functional Parameters
• Surface bearing index Sbi = Sq/Z0.05, where Z0.05 is the height of the surface at 5% material ratio.
• Core fluid retention index Sci = (Vv(0.05) − Vv(0.8))/Sq (per unit area), where Vv is the void (valley) volume. If Sci is large then there is good fluid retention.

Since the roughness factor r > 1, roughness enhances the intensity of the hydrophobic or hydrophilic behavior. On the basis of this model, Johnson and Dettre [158] point out that after a certain critical value of r = rc the contact angle continues to increase while the hysteresis starts to decrease, leading to a slippery rather than a sticky surface. Beyond this value the non-wetted fraction of the solid interface increases, leading to a decrease of the wetted fraction. This state is described by the Cassie–Baxter equation [159]:
cos θCB = rf f cos θ + f − 1.  (6.53)
where rf is the roughness factor of the wetted area only and f is the fraction of the projected area of the solid surface that is wetted. Therefore, in order to create super-hydrophobic (SH) surfaces, the solution points to the use of low-energy coatings on surfaces of high aspect ratio (HAR): for example, machined paraffin surfaces, or glass beads with fluorocarbon wax. The intention is to create a HAR topography so as to be well within the Cassie–Baxter regime, either in an already hydrophobic material or in a conventional material coated with a low-energy layer before or after the roughness formation.
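A short numerical sketch of Equation 6.53 follows, with the Wenzel relation cos θw = r cos θ included for comparison; the input values are arbitrary illustrations:

```python
import math

def wenzel(theta_deg, r):
    """Wenzel apparent contact angle: cos(theta_w) = r*cos(theta)."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_deg, rf, f):
    """Cassie-Baxter apparent angle (Equation 6.53):
    cos(theta_cb) = rf*f*cos(theta) + f - 1."""
    c = rf * f * math.cos(math.radians(theta_deg)) + f - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# A mildly hydrophobic flat material (110 deg) given a HAR texture:
print(wenzel(110, r=1.8))                  # roughness amplifies: ~128 deg
print(cassie_baxter(110, rf=1.1, f=0.1))   # air trapping: ~160 deg
```

The Cassie–Baxter state, with only 10% of the projected area wetted, drives the apparent angle toward the super-hydrophobic regime even for a modest intrinsic angle.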
This is not the only consideration. For example, in self-cleaning glass it is necessary for the glass to maintain its transparency while at the same time being strongly hydrophobic. There has to be a compromise in the surface roughness: it has to be rough enough (i.e., a big "r" factor) to put the surface well into Cassie–Baxter conditions, yet smooth enough not to cause optical diffusion at the surface, i.e., an Ra small compared with the wavelength of light.

If the index is greater than about 5 then most asperities will deform plastically. Other workers have approached the problem somewhat differently, using the spectral moments of a profile, m0, m2, m4, described earlier. They determine the probability density of peak curvature as a function of height. Then, assuming that a profile comprises randomly distributed asperities having spherical tips, they estimate the proportion of elastic to plastic contact of the profile on a flat as a function of separation. Thus, if Rp is the maximum peak height and h is the separation of the flat from the mean plane, the probability of elastic deformation becomes Pel, where

Pel = Prob[c < (H/E)²/(Rp − h)] given that z* ≥ h,  (7.137)

so that in this context the important parameter is that which describes how curvature c changes with height, that is, f(c | z* ≥ h). Unfortunately such height-dependent definitions are impractical because Rp, the peak height, is unbounded; the larger the sample, the bigger is Rp. One comprehensive index has been derived by Francis [92], who in effect defines two indices ψ1 and ψ2:
ψ1 = (σ′E*/Pm) k^(1/4)  (7.138)

ψ2 = (σE*/Pm) k^(1/2)  (7.139)

with

k = σ′²/(σσ″),  (7.140)
which is similar to that of Nayak in that σ″, the RMS curvature, is included (σ′ is the RMS slope and σ is the RMS of the gap between the surfaces). In general ψ1 has been found to be the more sensitive of the two indices; Francis finds a critical value of ψ2 of about 14. More recently, generalized indices have been proposed [93] which try to take isotropy into account. These authors arrive at a plasticity index in which the isotropy γ is included. When written in terms of the Greenwood index this becomes
ψ = (1/w)^(1/2) = [π(1 + γ²)/(2k K(e)H(e))]^(1/4) (E*/H)(σ*/Rx)^(1/2),  (7.141)
where K(e) and H(e) are elliptical integrals for the pressure distribution, γ = Rx/Ry is the ratio of the radii of curvature in the two (taken to be orthogonal) directions, and k is a function of the yield stress which approximates to unity over a range of γ. The interesting result from this work is that the index depends explicitly on the isotropy γ of the surface.
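The classical Greenwood–Williamson form of the index, ψ = (E*/H)√(σ*/R), underlies all of these variants. A minimal sketch follows (the material and topography values are illustrative assumptions only):

```python
import math

def gw_plasticity_index(E_star, H, sigma_star, R):
    """Greenwood-Williamson plasticity index psi = (E*/H)*sqrt(sigma*/R).
    Broadly, low psi (below about 0.6) implies elastic asperity contact,
    while high psi implies predominantly plastic asperity contact."""
    return (E_star / H) * math.sqrt(sigma_star / R)

# Illustrative steel-like contact: E* = 115 GPa, H = 2 GPa,
# asperity-height RMS 0.2 um, mean asperity tip radius 50 um.
print(gw_plasticity_index(115e9, 2e9, 0.2e-6, 50e-6))   # ~3.6, mostly plastic
```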
FIGURE 7.44 Nayak behavior of areal contour patches: above any height z, Den(patches) = Den(max) + Den(min) − Den(saddles) + Den(holes = 1), where "Den" is density and "No" means number. (From Nayak, P. R., Wear, 26, 305, 1973; Pullen, J. and Williamson, J. B., Proc. R. Soc., A327, 159–73, 1972.)
FIGURE 7.45 Shape of a sine wave at various degrees of flattening (surface height normalized by h0). (From Gao, Y. F., et al., Wear, 261, 145–54, 2006.)
7.2.3.3.4 Energy Considerations
There are other considerations to be taken into account in plastic deformation. One is the energy used in the deformation. This has been addressed by, for example, Almqvist et al. [103], who develop a model for the numerical simulation of the contact of elastic–perfectly plastic rough surfaces. Energy dissipation is taken into account, and spectral theory is used both to characterize the surfaces and to facilitate the numerical solution. Their approach is summarized thus. A contact mechanics problem can be governed by the theory of minimum potential energy. Pressures and displacements can be obtained by finding the minimum value of an integral equation, i.e., by using a variational approach:
min f = min [ (1/2) ∫S pu dS + ∫S pg dS ]  subject to p ≥ 0,  (7.153)
where ∫S p dS = W; p is the pressure, u the normal displacement, g the gap between the undeformed surfaces, S the contact region, and W the load. For two elastic half-spaces, assuming plane strain, the total normal surface displacement u(x) in profile for a given pressure distribution is
u(x) = −(2/πE*) ∫ from −∞ to ∞ of ln|x − s| p(s) ds + C,  (7.154)
In the case of a sinusoidal pressure profile p(x) containing only one sine-wave frequency, it is possible to find the exact closed-form solution for the normal surface displacement:
p(x) = α cos(ωx)  ⇔  u(x) = (2α/ωE*) cos(ωx) + C,  (7.155)
where ω = 2πk/L; this can be interpreted as the complete contact of two sine-wave surfaces. In spectral terms
which when coupled with the deformation of the asperities and the bulk gives the separation as a function of radius as d(r)
U ( ω ) = P (ω ) H ( ω ) ,
(7.156) d ( r ) = ω b (r ) − y (r ) = − y0 +
where H ( ω ) = (2 * / ωE * ) is the transfer function. They based their numerical method on that of Stanley and Kato [104] which accounts for the energy dissipation due to plastic deformation. The gradient of p has to be modified according to. ∇p = ue + u p + g,
(7.157)
ue and up are the elastic and plastic deformations. The procedure is to iterate until ue,up and p satisfy Equation 7.153. Under the constriction 0 ≤ p 0.6 H p0 < 0.6 H p0 = 0 .6 H
been somewhat neglected. He also goes on to criticize his and Bush’s earlier work for failing to take account of coalescing of the contact zones as the surfaces become pressed together more closely. They assumed that whenever a summit meets the opposing surface that this contact then grows independently of other contacts resulting in an increase in the number of contacts as the surfaces approach. The problem here is that in the limit, upon complete compression, there is only one contact, which corresponds for the profile case of the value of H in Equation (7.78) having the value of unity. The P values on the other hand become the total number of peaks in the profile. Nayak considers the areal form of the contacts which are in this case patches of contact. He showed that some patches have holes in them and that unfortunately random process theory cannot predict the number of contact patches but only the difference between the number of patches and the number of holes i.e.,
bulk deformation occurs
asperity deformation occurs
asperity and bulk deformation occur,
which is a nice breakdown of plastic conditions but is somewhat optimistic in its rigid demarcations. One relevant conclusion is that the roughness is the primary factor in controlling the deformation behavior of the contacting surfaces. In their careful experiments they used aluminium specimens with a surface roughness of Rq ~0.67 μm and a load of 4 N and brass with a surface roughness of Rq~o.13 μm and a load of 12 N. 7.2.3.3.5 Greenwood–Nayak Comment Greenwood [108] re examined the work of Nayak [109] on plastic deformation and comes to the conclusion that it has
dp(ζ) − dh(ζ) = DC ζ exp(−ζ²/2),  (7.166)

where DC = (1/(2π))^(3/2) (σm/σ) and ζ is the height z/σ.
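Equation 7.166 is easy to explore numerically; the sketch below simply evaluates and integrates the patch–hole density difference over heights above a level (purely illustrative, with DC set to unity):

```python
import numpy as np

def patch_minus_hole_density(zeta, DC=1.0):
    """Difference between patch and hole densities, Equation 7.166:
    d_p - d_h = DC * zeta * exp(-zeta**2 / 2)."""
    return DC * zeta * np.exp(-zeta**2 / 2.0)

# Integrating from a level zeta0 upward gives DC*exp(-zeta0**2/2), i.e.,
# the excess of patches over holes falls off sharply with height.
zeta = np.linspace(0.5, 4.0, 200)
excess = np.trapz(patch_minus_hole_density(zeta), zeta)
print(excess, np.exp(-0.5**2 / 2.0) - np.exp(-4.0**2 / 2.0))  # nearly equal
```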
Nayak attempted to estimate the number of holes by assuming that the surface power spectrum could be split into two uncorrelated roughnesses z1 and z2, where z2 is narrow band and is the main contribution to the roughness value. How this is meant to work is not clear. Greenwood [108] makes some simplifying assumptions to get a clearer picture. He assumes that the holes within any contact patch can have only one minimum, which corresponds in the profile form to having a single valley in a high spot above y′, rather than the larger number usually found in practice.

7.2.3.4 Size and Surface Effects on Mechanical and Material Properties
It is necessary to consider the direct influence of the surface on function, as in electrical conductivity, but it is also important to look at the properties of the surface itself, as in the mode of contact discussed at some length above. One aspect of these properties which is becoming more important is the effect of size [110]. The basic idea is to see if there is a relationship which relates the properties of the bulk material to those at small size and, indirectly, to the properties of the surface. The authors identify intrinsic length scales of several physical properties and show that for nano-structures whose characteristic sizes are much larger than these intrinsic scales the properties obey a simple scaling law. They attribute the simple law to the fact that the physics of the situation is essentially straightforward: the underlying cause of the size dependence of the properties is the competition between the surface and bulk energies. This law provides a yardstick for checking
the accuracy of experimentally measured or numerically computed properties of nano-structured materials over a broad size range, and should reduce the need for repeated and exhaustive testing. A common feature of this energy competition is that when the characteristic size of the object is very large (macro?) the physical property tends toward that of the bulk. By dimensional analysis, the ratio of a physical property at small size L, i.e., F(L), to that of the bulk, F(∞), can be expressed as a function of non-dimensional variables Xj (j = 1, …, M) and a non-dimensional parameter lin/L:
F(L)/F(∞) = F(Xj, lin/L),  (7.167)
where lin is an intrinsic length scale related to the property and L is the feature size of the object of interest. The assertion is that many physical properties can be expressed in the form

F(L)/F(∞) = 1 + α(lin/L) + O[(lin/L)²],  (7.168)

where O[(lin/L)²] denotes the remainder, which is of second order at most but usually small enough for the linear form to be adequate; thus

F(L)/F(∞) = 1 + α(lin/L),
which could indicate that the ratio is usually small and that the function is exponential. The premise is that this law is followed by many properties at the nano scale. They identify the intrinsic length scale and the law for a number of properties, as follows.
7.2.3.4.2 Elastic Deformation
The elastic deformation of nano-structures obeys the law; an example is the elastic deformation of nano-wires. The intrinsic size in this case is, as before, about 0.01–0.1 nm and is determined from the ratio of the elastic modulus of the surface "skin" τ to the bulk elastic modulus E, the ratio having the dimension of a length. For nano-beams, wires, etc., L ~ 1–100 nm, making the ratio of the intrinsic size to the feature size very small and comfortably satisfying the requirement for linearity. Figure 7.47 shows a simple plot of E(L)/E(∞) against scale of size for metals, which illustrates the fact that elastic properties tend to become stiffer as the size reduces and the surface properties become more dominant. The elastic constants of silver and lead nano-wires of diameter ~30 nm are nearly twice those of the bulk [111,112]. Many other examples, involving the Eshelby tensor in material science, solid state physics, and the mechanics of composites, also show this size effect.

7.2.3.4.3 Plastic Deformation of Small Volumes
It has long been controversial whether small volumes of material are, or should be, more resistant to plastic deformation than is implied by the yield strength of the bulk material. Dunstan and Bushby show that there is an increase in the initial yield strength of small volumes [113] wherever there is a strain gradient, due to geometrical rather than material reasons, which is a surprising result. The onset of yield is postponed in the presence of a strain gradient, especially in ductile materials. Strain gradient plasticity cannot explain the size effect in the yield strength of small volumes—that is, the initial deviation from elastic behavior at the initiation of plastic deformation—because at the onset of plasticity there is no plastic strain gradient and so no geometrically necessary
7.2.3.4.1 Melting Point of Nanoparticles
The melting temperature T(R) varies inversely as the radius of the particles R:
T(R)/T(∞) = 1 − 2lin/R.
This is the Gibbs–Thomson equation. As an example of the intrinsic length they cite a number of models of which the following is one,
lin = (1/ρS H)[γSV − γLV(ρS/ρL)^(2/3)],  (7.169)
where ρS and ρL are the densities of the solid and liquid, γSV and γLV are the energies of the solid–vapor and liquid–vapor interfaces, respectively, and H is the latent heat of fusion. For gold and silver nano-particles lin ~ 0.01–0.1 nm, for example, and R would be many nanometers.
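A minimal numerical sketch of the Gibbs–Thomson law above (the value of lin is taken from the illustrative range just quoted, not from measurement):

```python
def melting_ratio(R_nm, l_in_nm=0.05):
    """Gibbs-Thomson size effect: T(R)/T(inf) = 1 - 2*l_in/R."""
    return 1.0 - 2.0 * l_in_nm / R_nm

# The depression is negligible for large particles and grows as R shrinks:
for R in (50.0, 10.0, 2.0):
    print(R, melting_ratio(R))   # 0.998, 0.99, 0.95
```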
FIGURE 7.47 Change of elastic modulus E(L)/E(∞) with scale of size (1/N) for copper and tungsten wires of various crystallographic orientations. (From Wang, J., Duan, H. L., Huang, Z. P., and Karihaloo, B. L., Proc. R. Soc. A, 462, 1355–63, 2006. With permission.)
dislocations. This is a result of "critical thickness" theory. The increased yield strength is not due to an increase in the material property but to the reduced volume from which the elastic energy—to drive relaxation—is drawn. In strained layer growth, the layer thickness and the geometric strength are independent of the yield strength of the material, depending only on the fundamental parameters G and b (the Burgers vector), and so the geometric strength can be very large for small layer thicknesses, having values in the GPa range. In cases with a strain gradient, the layer thickness or equivalent term is a variable whose value is obtained by minimization of the yield stress; the geometric strength then depends strongly on the bulk yield stress. In ductile metals, low yield stresses in the MPa range give lower values of geometric strength with bigger thicknesses, indicating again that smaller dimensions are synonymous with higher strengths.

7.2.3.4.4 Brittle Fracture Cracks
In brittle fracture the critical stress σc required to extend a crack of length L varies inversely proportionally to L^(1/2). This strengthens both small objects and small stressed regions on large objects, such as tensile-strained epitaxial layers [114]. As indicated earlier, the reason for the change in the physical properties is a direct result of the size effect, in which surface properties become more important as the size reduces. A few examples of the competing energies are:

Particle melting: surface energy versus bulk latent heat of fusion.
Elastic deformation: surface energy versus strain energy in the bulk.
Brittle fracture: energy consumed in creating new surface versus strain energy released into the bulk.
7.2.3.4.5 Other Size–Strength Effects
Many materials exhibit a hierarchical structure which can cover a number of size ranges. Common among these are biological and composite materials, both taking on important roles in nanotechnology. The way in which their properties change with size is obviously different from that of traditional materials. In what follows, the strength properties of a matrix-type material will be briefly examined to give some idea of the current approach to the subject.

7.2.3.4.6 Hierarchical Effects
The problem of hierarchical material strength seems to be especially difficult in biological materials, such as bone, which can have up to seven scales of size. Carpinteri et al. [115] have made a special study of (usually) biological materials, and their treatment seems to be generally useful.
FIGURE 7.48 Hierarchical system. (From Carpinteri, A., Cornetti, P., Pugno, N., and Sapora, A., Microsyst. Technol., 15, 27–31, 2009. With permission.)
Also, because hierarchical systems in biology tend to be fractal in nature, their approach follows fractal lines. A typical simple system is shown in Figure 7.48. The nomenclature is: σ0, nominal stress of the whole bar of area A0; σi, stress in the hard phase; σM, stress in the soft phase; vi, volume fraction of inclusions at the ith level; vr, total volumetric fraction. In brief, from Carpinteri, thanks to the property of fractal self-similarity, for level N → ∞ the domain of the inclusions becomes a fractal domain of dimension D:
D = 2 ln(N)/(ln N − ln ν),  0 < D < 2.  (7.170)
The total volumetric content of inclusions νr can be modeled as

νr = ν^N = (RN/R)^(2−D).  (7.171)
If the ratios at each level are the same the nominal stress σ0 is given, assuming satisfactory bonding or adherence, by
σ0 = (∏ from i=1 to N of νi) σN + (1 − ∏ from i=1 to N of νi) σM.  (7.172)
Combining Equations 7.171 and 7.172 gives

σ0 = (RN/R)^(2−D) σN + [1 − (RN/R)^(2−D)] σM ≈ (RN/R)^(2−D) σN.  (7.173)
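A small numerical illustration of Equation 7.173 follows (all values are invented for the example):

```python
def nominal_strength(R, R_N, sigma_N, sigma_M, D):
    """Nominal strength of a hierarchical (fractal) material, Equation 7.173."""
    v = (R_N / R) ** (2.0 - D)
    return v * sigma_N + (1.0 - v) * sigma_M

# Hard phase 5 GPa, soft matrix 50 MPa, smallest level R_N = 10 nm, D = 1.5:
for R_um in (0.1, 1.0, 10.0):
    s = nominal_strength(R_um * 1e3, 10.0, 5e9, 50e6, 1.5)   # R in nm
    print(R_um, s / 1e6)   # nominal strength in MPa rises as R shrinks
```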
So, as R → ∞, σ0 → 0 (in the approximate form of Equation 7.173); consequently, as the size reduces, the nominal stress σ0 increases, provided that σN >> σM, which is the usual case. Similarly, the stiffness, represented by the elastic modulus, is given by
E0 = (RN/R)^(2−D) EN,  (7.174)
indicating that the stiffness increases as the wire or ligament diameter reduces. Notice that the increase is determined
primarily by the geometric considerations and not the material. This conclusion was reached earlier with other mechanical properties. Further analysis with variable properties at each level of the hierarchy is also possible. Thus
E0 = (RN/R)^(D*) EN,  (7.175)

where

D* = D ln(φ)/ln(nh),  (7.176)
in which there are "n" equal parts, of which nh are enriched by φ and the rest depleted by φ. In this case the nominal strength depends not only on the volumetric fraction of the inclusions but also on their nature.

7.2.3.4.7 Coated Contact
The problem of elastic, and sometimes also plastic, contact between non-conforming coated surfaces has received much attention in recent years. One good example of application is in wear protection. The coated analog of the Hertz equations can be solved using layered-body stress functions and the Fourier transform method [116]. A neat method for finding the contact dimension and the maximum pressure has been developed by Volver [117], but perhaps the most practical method is due to Sayles [118], who manages to include, albeit with a numerical method, the effect of roughness. When one or both surfaces in contact are coated, a somewhat different situation arises, because this is now a three-body situation rather than the conventional two-body one. So, instead of the transitions from elastic to plastic being straightforward, the introduction of the third body—the film—presents problems [119]. The basic result is that for light loads there is a considerable difference between coated and uncoated surfaces. For those asperities making contact only with the film it is the coating, being soft, which deforms plastically. The substrate has the effect of stiffening the coating.
7.2.4 Functional Properties of Normal Contact

7.2.4.1 General
Many functions devolve from contact mechanisms. Some involve static positioning and the transfer of one sort of energy normal to the surface, as in thermal and electrical conductivity. Some involve a small normal displacement, as in stiffness, where the transfer is of normal force; other cases involve small movements over a long period, as in creep, and others fast or slow transverse movement, as in friction, lubrication, and wear. In practically all cases of functional prediction the basic approach has been to split the problem into two. First the behavior of a typical contact is considered, and then the behavior of an ensemble of such contacts taken over the whole surface. The starting point has therefore usually been to define each surface in terms of some sort of asperity model, as shown in Figure 7.49, and then to assign values to its features, such as the radius of curvature, the presence or absence of surface films such as oxides, the elastic model, and the hardness. The behavior is then considered with respect to movement and load. Such contact behavior is complex and beyond the scope of this book; there are texts which deal with such matters in great detail (see, for example, the definitive book by K. L. Johnson [34]). It suffices to say that the properties of asperity contact are investigated classically in much the same way as in Section 7.2, which deals with perfectly smooth, simple geometric bodies. In the case of simple loading, the mode of deformation was considered paramount—whether the contact would behave elastically, plastically, or as a mixture of the two. When sliding or lateral movement is considered, other issues arise, for example whether or not stick–slip and other frictional properties exist.

This philosophy of using such asperity models has been useful on many occasions, but it does suffer from one drawback: real surfaces are not comprised of lots of small asperities scattered about, each independent of the other; the surface is a boundary and can be considered to be continuous. Random process methods (see, e.g., Ref. [120]) tend to allow the simplistic model to be replaced by a more meaningful picture—sometimes! Even then the philosophy has to be considered carefully, because surface contact phenomena are principally concerned with normal load behavior and not with time, as in electrical signals, so that top-down characteristics tend to be more important than the time or general space parameters most often used in random process analysis.

The first and most important assumption in using a simple asperity model is the nature of the geometry of the typical asperity. Having decided on the asperity model involved in the contact, the next step involves an investigation of how these contacts will be distributed in space. This involves giving the surfaces asperity distributions. From these, the density of contacts, the mean height of the contact, and the real area of contact are found, to mention just a few parameters. This brings in the second main assumption: what is this distribution? Various shapes have been tried, such as the linear, exponential, log normal, beta function, and so on. More often than
FIGURE 7.49 Contact of the asperities (each with a surface skin or film).
not, the Gaussian distribution is used because of its simple properties and likelihood of occurrence. It can be applied to the asperity distribution or to the surface height distribution; often it is applied to the gap distribution. In any event, the Gaussian distribution can be considered a useful starting point, especially where two surfaces are concerned: even if the two mating surfaces are badly non-Gaussian, their gap often is not.

Another technique is emerging which does not attempt to force asperity shapes or distributions into analytical treatments of contact. Instead, actual surface profiles are used (see [121]). With this technique, digital numerical records of real surfaces are obtained. The normal movements of the opposing bodies are made on a computer, which ultimately results in overlap between them. This overlap is then equated to the sum of the strains created by the compliances of a series of adjacent pressure "elements" within the regions of apparent contact. Usually the pressure elements are taken to be the digitized ordinates of the surfaces. This is in effect utilizing the discrete method of characterizing surfaces advocated by Whitehouse, rather than the continuous method. The sum of all these pressurized elements, none of which is allowed to be negative, is equated to the total load in an iterative way, until the accumulated displacements at each element equate to the rigid-body movement imposed and the sum of the corresponding pressure steps needed to achieve this displacement equals the nominal pressure. This method, usually called the numerical contact method (see Ref. [122]), does not rely on a model of the topography, and the contact geometry is fully defined across the whole interface. The technique does imply that the behavior of each of the "elements" is independent, which is questionable at the sample spacing of digitized surfaces; however, if the element width is taken as β*, the autocorrelation length, then this criticism can be relaxed. If the force–compliance characteristic for an "element" is given by W(z) and the probability density function of the ordinates is p(z), the effective load-carrying capacity L of a surface f(z) being pressed by a flat is given simply by Equation 7.177, where A is the height at first contact:
S = − dW /dh ⋅
(7.178)
It is here regarded as a purely elastic mode. This can be defined in terms of Hertzian contact theory and Greenwood and Williamson’s surface model (W is the load and h the compliance). Hence, if it is assumed that two rough surfaces can be approximated to one rough surface on a plane, the following relationships can be used: ∞
∫
W = C (s − t )
32
ϕ ( s ) ds,
(7.179)
t
where t = h/σs and φ(s)ds is the probability of finding an asperity tip at the dimensionless height s = z/σs from the mean plane of the surface. By differentiating and reorganizing slightly the stiffness becomes
A
L (z) =
∫ p ( z ′)W ( A − z ′) dz ′,
(7.177)
z
which can be expressed as the characteristic function of the height distribution multiplied by the characteristic function of the load-compliance curve! It may be that this sort of approach will become used more often because of its simplicity and freedom from ensnaring assumptions. To resort to this method does not say much for the theoretical development of the subject over the past few years! 7.2.4.2 Stiffness Early work concerned with the dynamic vibrational behavior of machine tools [122–124], realized the importance of the
S = (3/2σs)[F1/2(t)/F3/2(t)] W,  (7.180)
where, using Thomas and Sayles's terminology, s is a dummy height variable and
Fn(t) = ∫ from t to ∞ of (s − t)^n φ(s) ds.  (7.181)
In Equation 7.180 the ratio F1/2(t)/F3/2(t) changes very slowly with respect to the mean plane separation, and the mean plane separation changes little with load, it being assumed that the movement is small and centered on a much higher preload value.
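A minimal numerical sketch of Equations 7.179 through 7.181 for a Gaussian asperity-height distribution follows (all input values are illustrative assumptions):

```python
import numpy as np

def F_n(n, t, num=4000):
    """F_n(t) = integral from t to infinity of (s - t)**n * phi(s) ds,
    with phi the standard Gaussian density (Equation 7.181)."""
    s = np.linspace(t, t + 8.0, num)
    phi = np.exp(-s**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return np.trapz((s - t)**n * phi, s)

def stiffness(W, t, sigma_s):
    """S = (3/(2*sigma_s)) * (F_1/2 / F_3/2) * W (Equation 7.180)."""
    return 1.5 * (F_n(0.5, t) / F_n(1.5, t)) * W / sigma_s

# For a separation of one sigma and a 1 kN preload on sigma_s = 0.5 um:
print(stiffness(1000.0, 1.0, 0.5e-6))   # N/m; note S is proportional to W
```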
Hence, from Equation 7.180,

S ∝ W.  (7.182)
(7.182)
This implies that within a practical range around the preload value the stiffness is proportional to the load but not independent of it (see the first part of the graphs in Figure 7.50). Examination of Equation 7.181 shows that this formula is the same as that for the nth central moment of the distribution. It has already been shown, from the work of Longuet-Higgins and Nayak, that the moments of the power spectrum are themselves conveniently related to the curvatures of signals, the density of peaks, etc. (Chapter 2). Use of the moment relationship can enable some estimation of the stiffness as a function of surface parameters to be obtained. Although tractable, there are a number of assumptions which have to be made to get practical results. Thus, let the spectrum of the surface be P(ω) = K/(a² + ω²), where K and a are constants. This assumes that the autocorrelation function is exponential, which, as has already been shown, is rather questionable over a very wide range but likely over a limited range. If this frequency range is ωu, which is where plastic flow occurs, and letting Ω = ωu/a, then

D = (1/6π√3)(m4/m2)^(1/2) = (a/6π√3)[(Ω³/3 − Ω + tan⁻¹Ω)/(Ω − tan⁻¹Ω)]^(1/2).  (7.183)

The average radius of curvature is

R = (1/3a²σ)(Ω³/3 − Ω + tan⁻¹Ω)^(−1/2),
and D is the density. Putting Equation 7.185 in Equation 7.180 and integrating gives S in terms of R, D, and σ the average curvature, the density and the root mean square value of peaks. A simple way to do this from profile graphs has been explained by Sherif [125] using 2
σ = Rq21 + Rq2
Hsc1 + Hsc2 1 and D = L2 , 2
(making the wrong assumptions that the area of density is the square of the density of the profile). However, it is close enough for this approximation. He approximates the curvature 1/R to be R S m1 + pi ⋅ 32 Rpi 2
(7.186)
Assuming that this model is acceptable, the way in which R, D, and σ affect the stiffness can be readily obtained. (The graphs are shown in Figures 7.51 through 7.54). Approaching the stiffness from the Whitehouse and Archard theory [15], Onions and Archard [96] derived the 30
(7.184)
where m 4 and m3 are the moments of the spectrum which correspond to the even differentials of the autocorrelation function at the origin. The constant in Equation 7.179 is 4 C = DAE * R1/ 2 σ 3 / 2 3
1 1 − v12 1 − v22 = + , E* E1 E2
R = 2 µm S
20 R = 5 µm 10
(7.185) W
FIGURE 7.51â•… Stiffness as a function of load and R.
S (GN m–1)
30 σ = 2 µm
20
30
σ = 10 µm
D = 10 mm–2
10
S D = 40 mm–2 W (kN)
50
FIGURE 7.50â•… Stiffness as a function of load and σ. (From Thomas, T. R. and Sayles, R. Trans. ASME, J. Eng. Md., 99B, 250–6, 1977.)
W
FIGURE 7.52â•… Stiffness as a function of load and D.
50
681
Surface Geometry and Its Importance in Function
70
β* = 5 µm
giving
S (GN m–1)
∞
β* = 50 µm
50
W (kN)
FIGURE 7.53 Stiffness as a function of load and correlation length.
60 OA (WA) S
GW
50
W
FIGURE 7.54 Stiffness as a function of load for GW and OA models.
important surface parameters from a knowledge of the RMS of the ordinate heights, σ, and the correlation length, β*. Hence 4 DAE* (2.3β* )σF (h). 3
W=
∞
Where
F (h) =
∫
∞
( s − h) 3 / 2
∫ 0
h
p * (s , C ) dCds, NC1/ 2
(7.187)
where p*(s, C) is the probability density of an asperity having a height and curvature C given by Equation 7.188 where s and C are normalized with respect to the RMS value of surface heights p* (s,C )
p*(s, C) = (1/(2^(3/2) π)) exp(−s²/2) exp[−(s − C/2)²] erf(C/2).  (7.188)
For independence N = 1/3 for profiles and 1/5 for areal mapping, given the simplest discrete patterns. Hence
R=
2 π1/ 2
2
2.3β* 9σ
(7.189)
S=−
1 dW 6 AE ∗ = (s − h)1/ 2 σ dh 5 ( 2.3β∗ )
∫ h
∞
∫ h
p∗ ( s , C ) dCds. (7.190) C 1/22
It can be seen that σ, R and D affect the stiffness in Greenwood’s model and α and β* in Onions and Archard’s [96]. Furthermore, the stiffness for a given load is about 50% higher for the latter than the former presumably because Greenwood’s model underestimates the contact pressure because it uses asperities of constant curvature. The simple conclusion to be reached from this exercise is that it is possible to increase the normal stiffness at a joint by either having the fine surfaces or having a large preload for W. The former is expensive; the latter imposes a high state of stress on the components making up the structure. A word of caution here concerns the independence or otherwise of the parameters. Although β* and σ are given for the Onions and Archard theory, in fact the peak curvatures and densities are obtained from it using the discrete properties of the random surface explained earlier. Also, in the Greenwood model σ R and D are not independent. It has already been shown by Whitehouse [15] that the product of these is about 0.05 for any random surface:
σRD = 0.05
(7.191)
This gives the important basic result that normal stiffness relies on two independent geometrical parameters in this type of treatment. These two parameters have to include one in amplitude and one with horizontal importance. Furthermore it is the instantaneous effective curvature of the two surfaces at a contact which is important, not the peak asperities which rarely touch the other surface directly. What does happen when two surfaces are bolted together? The result depends on whether tangential or normal relative movement takes place in the contact zone. This determines whether “static” friction occurs which depends on the relative elastic modulus of the two surfaces. If they are the same, no slip takes place at the contact—the displacement has to be equal on both surfaces. If not there is a component of force tangentially which results in slip and a certain amount of tangential compliance, which fortunately is not very dependent on surface roughness [34]. In the event of tangential movement the effective contact area is changed considerably whereas the pressure distribution is not [126]. There have been attempts to take into account stiffness effects for both elastic and plastic regimes. For example, Nagaraj [126] defines an equivalent stiffness as
Se = m( Sc ) + (1 − m)S p ,
(7.192)
and (where D is equivalent to η the density of peaks)
1 1 D= 5 (2.3β*)2
where the plastic Sp corresponds to slip material either in the normal or tangential directions when the asperities are deforming plastically. Because plastic and elastic regimes are
682
Handbook of Surface and Nanometrology, Second Edition
usually involved at the same time both types of movement must take place. Ironically, the presence of friction inhibits tangential movement which has the same effect therefore as asperity interaction, yet the tangential compliance is to a large extent controlled by the elastic region immediately under the contact zone. The way in which the plastic stiffness term changes the apparent stiffness is shown in Figure 7.55. This graph from Nagaraj seems to suggest that stiffness is independent of load over a considerable range. Although normal and not tangential stiffness is under discussion, it should be mentioned that the latter is of fundamental importance in wear. Tangential movement of small magnitude due to mainly elastic compliance always precedes gross sliding and hence wear and friction. Friction can be used to indicate what is happening when two surfaces are brought together with a normal load. Static friction or non-returnable resistance to the imposition of load has an effective coefficient of friction. A typical coefficient of friction curve might look somewhat as shown in Figure 7.56. It is split into four distinct regions:
1. In this region there is sliding between the surface oxide films. The friction is usually small because the surface films have a lower coefficient of friction than metal–metal. 2. As contact pressure is increased surface films are progressively broken down, metal–metal contact occurs and the friction begins to rise. With plastic behavior
Normal load (kN )
5
Frictionless elastic case
3
1 1
2
(µm)
FIGURE 7.55 Deformation as a function of load—plastic and elastic. (From Nagaraj, H. S., ASME, J. Tribol., 106, 519, 1984.)
III II µeff
IV
I
Contact pressure
FIGURE 7.56 Different regimes in friction.
3. Here substantial metal–metal contact occurs, this time with no interceding films; hence the coefficient of friction is high. 4. Extensive plastic surface deformation occurs as the material fails in compression, eventually resorting to the steady condition under that load when μ = 0 and elastic conditions on the whole prevail and there is no plastic movement.
This sort of graph occurs when the load is applied uniformly in time. Should there be any vibration at the same time it has been noticed that the coefficient of friction drops considerably. This effect is great if the variation in load is tangential rather than normal. 7.2.4.3 Normal Contact Other Phenomena—Creep and Seals 7.2.4.3.1 Creep So far the emphasis has been on situations where the contact has been between rough surfaces but there are cases where the shape is of primary importance one such case is in creep of which an example is given below. The behavior of a polymer sphere in contact with a rigid flat is investigated using finite element analysis. A simplified form of the modified time hardening is used for the constitutive material model. A new dimensionless form of the creep is suggested, which enables a universal solution of the problem for any contact time based on a conveniently selected reference time. The model may be applicable to other materials which can be described by the modified time hardening model including biological materials. This can occur with almost any material under certain operating conditions e.g., metals at high temperature, polymers at room temperature, and any material under the effect of nuclear radiation. Contact creep is common in a variety of tribological applications e.g., rail and wheel, miniature electromagnetic systems (MEMS), magnetic tapes, and various natural and artificial joints in the human body. This later application which involves the creep of polymers or cartilage under compression especially in the case of osteoarthritis. An interest of the above in the creep problem stems from the phenomenon of increasing friction as a result of longer contact time under a constant normal load and the potential effect of creep on wear. Note that the creep effect is non-elastic and hence irreversible upon removal of the load under normal operating conditions. Yet some creeping materials may restore their original shape after some time under certain conditions such as temperature or the presence of certain chemical solutions. Two basic approaches to the creep problem can be used according to Brot. The first is the mathematical approach, where the models describe either a visco-elastic fluid with constant flow rate under a constant stress or a solid that completely stops creeping after some time. The other approach is to treat the creep utilizing some empirical formulas to relate the various creep parameters based on creep experimental results. Exhaustive tests using ANSYS version 10, finite element package and experimental verification showed that the
683
Surface Geometry and Its Importance in Function
behavior of the creep contact was mostly dependent on the creep parameter C3 and that the contact area is always linear to the creep displacement and that its behavior over time depends largely on the parameter C2 and the sphere radius. 7.2.4.3.2â•… Mechanical Seals A subject which is close to contact and stiffness is that of mechanical seals. The parameter of importance is related to the maximum gap which exists between the two surfaces. Treatment of this subject is fragmentary. Much emphasis is put on profiles, from which estimates of gap can be made, but in reality it is the lay of the surface which dominates. Early attempts found that the peak curvature was very important. This was due to the fact that the curvatures determine the elastic compliance of the contacting points which in turn determines the mean gap and hence the leakage. The nearest to providing a suitable model for this were Tsukizoe and Hisakado [127], although Mitchell and Rowe [128,129] provide a simple idea in terms of the ratio of the mean separation of both surfaces divided by their mutual surface texture, that is leakage = d / σ12 + σ 22 ⋅
(7.193)
A particularly simple way to determine the leakage capability was provided by George [130] who showed that the amount of leakage could be estimated by the mismatch of the power spectra of the metal surface and the polymer ring seal pressed on it (Figure€7.57). To measure the polymer it had to be frozen in situ so as to get a profilometer reading from it representing the as-sealed state. 7.2.4.3.3╅ Elastic Mechanical Seals Rubber like seals have an advantage over steel and similar material seals because they do not suffer or impart plasticity to the mating component. Nevetheless, the mechanism of sealing is still not understood properly, as reported by Persson [131]. He adapts his theory of contact mechanics to that of percolation theory [132] to develop expressions for the leakage. The basic system for a seal is shown in the figure which basically shows a high pressure system at pressure Pa separated from a low pressure system at pressure Pb by a rubber
seal having a squeezing pressure of Po resting on a rough hard substrate which is usually a metal. The argument give by Persson follows his usual theme of looking at different magnifications to reveal different scales of size: a practice which the engineer finds to be an unnecessary image, he or she usually working directly with instrument resolutions which relate directly to spatial dimensions. However, following his terminology and train of thought there will be a certain magnification ςC at which there will be a percolation channel through which fluid will flow. What Persson means here is that larger scales of the rubber surface will be following exactly the profile of the metal substrate at scales of size larger than this and so will not allow any channels to exist i.e., this magnification is the first one where the surface of the rubber does not follow the profile exactly. It is assumed that as a first step that this channel will be the largest one and that any channels at smaller scales will allow negligible leakage. Obviously the narrowest constriction in the channel will determine the flow, if this has width λC at height u1 (ςC). For a Newtonian fluid the rate of flow Q u3 Q = M∆P = M ( Pa − Pb ) = α 1 , 12 η
assuming λC >> uC The term α is concerned with the shape of the constriction but it is assumed to be about unity. If the total area is L X by LY the leak rate is LY M∆P, LX
A (ς ) P = erf 01 , 2G 2 A0
(7.196)
where 4 E G (ς ) = π 1 − ν2
Polymer ring seal In P (ω)
(7.195)
and as can be seen from Q the key factor is the average height u1 . This is evaluated in reference [133] Needed is the area ratio, Thus
Metal face
(7.194)
2
ςq0
∫ q C (q) dq, 3
q0
and C (q) is the power spectrum Finally q1
u (ς) = π
∫ q C(q)ω(q)dq 2
ςq 0
Log (f)
FIGURE 7.57â•… Difference between metal and polymer. (From George, A., CEGS Rep., November, 1976.)
∞
∫
ς q0
2 1 p′ [ γ + 3(1 − γ )P 2 (q, p′, ς)].exp − ω(q, ς) dp′ p′ E∗
684
Handbook of Surface and Nanometrology, Second Edition
where
–9
(∫
A0 , ω ( q, ς ) = π q ′ 3C ( q ′ ) dq ′ A (ς )
)
−1
γ ≈ 0 . 4, P ( q, p, ς ) = erf ( s ( q, ς ) p ) , s ( q, ς ) =
2
(7.197)
ω ( q, ς ) E∗
ms = 6 µm
–13
, Log Q (m3/s)
p ( ς ) = P0
4 µm –17
2 µm
–21
Figure 7.58 shows the model and Figures 7.59 and 7.60 show the leakage as a function of pressure and separation of the surfaces as a function of pressure. They use molecular dynamics to verify the results and find that the leak rate for both rubber and steel decrease very fast, perhaps even exponentially, with force. For example the leak rate decreased by six orders of magnitude for rubber as the load increased by 10 orders. The whole basis of the Persson approach depends on the validity of the “magnification” concept in which the scales of size can be separated instead of being jumbled up in most engineering surfaces.
–25
1 µm
0
0.2
0.4 0.6 Squeezing pressure (MPa)
0.8
1
FIGURE 7.60 The leakage Q as a function of the applied squeezing pressure P0 for surfaces with roughness Rq 1,2,4, and 6 μm. (From Persson, B. N. J. and Yang, C., J. Phys: Condens Matter, 20(315011), 11p, 2008. With permission.) Force
Repulsion (compression) P0 Pa
Z0 Pb
Rubber
Fluid
Attraction (tension)
FIGURE 7.58 Mechanical seal system. (From Persson, B. N. J. and Yang, C., J. Phys: Condens Matter, 20(315011), 11p, 2008. With permission.)
Log uc (m)
–6 0.6
–7 0.7
–8 H = 0.9
–9 –10
0
0.2
0.8
0.4 0.6 Squeezing pressure (MPa)
0.8
1
FIGURE 7.59 The interfacial separation uC as a function of the squeezing pressure P0. Areal fractal dimension for the Hurst exponent H = 0.8 is 2.2. (From Persson, B. N. J. and Yang, C., J. Phys: Condens Matter, 20(315011), 11p, 2008. With permission.)
z 2γ
FIGURE 7.61 Type of force as a function of separation. (From Johnson, K. L., Contact Mechanics, Cambridge University Press, 1985.)
7.2.4.4 Adhesion So far the discussion has been about forces, either elastic or plastic or both, which oppose an imposed load between two surfaces, these are not the only type of force which is operative. Another kind of force is due to the atomic nature of the surface. A direct result of the competing forces due to attraction and repulsion between atoms or molecules in the two surfaces is that there should exist a separation between them z0 where these molecular forces are equal; the surfaces are in equilibrium. Distances less than z0 produce a repulsion while separations greater than z0 produce attraction. This is shown in Figure 7.61 as the force-separation curve and surface energy for ideal surfaces [34]. In the figure the tensile force—that of adhesion—has to be exerted in order to separate the surfaces. The work done to separate the surfaces is called the surface energy and is the area under the tension curve from z = z0 to infinity. This is 2λ, where λ is the surface energy of each of the two new surfaces created. If the surfaces are from dissimilar solids the work done in order to separate the surfaces is λ1 + λ2 –2
685
Surface Geometry and Its Importance in Function
λ12, where λ1 and λ2 are the surface energies of the two solids respectively and λ12 is the energy of the interface. The actual formula for the force between the surfaces F(z), is given by
F ( z ) = Az − n + Bz − m
m > n,
(7.198)
where A, B, n, and m depend on the solids. Sometimes it is argued that this adhesive force does not exist since it has been found to be difficult to observe. The only real evidence for such a force, apart from the purely theoretical one, was provided experimentally by Johnson et al. [49] who investigated the contact between a smooth glass sphere and flats of rubber and gelatine and found a good agreement between the theory which predicted a force of 3πRγ/2 for a radius R of the sphere and surface energy γ out. That is, the pull-off force is given by F0 = 1.5πRγ ⋅
(7.199)
The reason why this is so is a classical example of why surface texture is important functionally. Fuller and Tabor [134] resolved the paradox of the non-existence of the adhesive force by attributing it to surface roughness. Since the surface energies do not differ greatly for different surfaces and the relationship is apparently independent of any other physical property (and in particular the elastic modulus), it seems incredible that adhesion is not commonly seen. Figure 7.62 shows the pressure distribution for a sphere on a flat surface for the elastic Hertzian case and the case where adhesion is present. Fuller and Tabor used the Greenwood and Williamson model of surface roughness to show that, as the two surfaces are separated, the contacts at the lower peaks will be in tension (attraction) and broken while the higher ones are still in compression, with the result that there will be negligible adhesion unless the adhesion index
θ=
E*σ3/ 2 < 10, R1/ 2 ∆γ
(7.200)
P P(r)
Adhesion pressure distribution
Hertzian pressure distribution
aH a
aH a
FIGURE 7.62 Pressure distribution for sphere on flat, for elastic and adhesion cases. (From Johnson, K. L., Contact Mechanics, Cambridge University Press, 1985.)
where R is the characteristic radius of curvature of the asperity and σ is the standard deviation of the asperities. To test this, very elastic substances were used. Gelatine and rubber have elastic modulus orders of magnitude less than other solids so that the adhesion index-equation (Equation 7.200) may be small for moderately rough surfaces (~1μm). Unfortunately, as will be seen, this model only seems to work for low-modulus solids. Before looking at other theories, note that the very important parameters involved are curvature and RMS height. In this application they appear to be the direct relevant parameters. This does not mean to say that some correlation could not be found between other parameters. For surfaces having uniform statistics parameters are often interdependent as has been seen. It is perhaps relevant to note that this property of adhesion between rough surfaces is not the same property as that of “wringing,” first mentioned by Whitworth in 1840 [135] and Tyndall in 1857 [136]. In wringing a thin film of liquid has to be present between the two surfaces before the very substantial normal attraction can appear. Although not directly adhesion, this wringing property, which does not depend on atmospheric pressure, has been made great use of in stacking precision-made stainless-steel blocks to make convenient metrology length standards. (This use has been attributed to C E Johansson in Sweden at about 1900.) Some thinking [12] suggests that the force is largely due to the surface tension of the liquid, i.e., capillary adhesion. Reverting back to adhesion, the reason why it is important is because it influences the friction between surfaces in relative motion. Consequently much attention has been given to its theoretical basis, especially with regard to engineering surfaces, a second theory has been proposed by Derjaguin et al. (DMT) [137] and Muller et al. [138] which differs in approach from that of the Johnson et al. (JKR) [49] model. Whereas the JKR model is based on the assumption that attractive intermolecular surface forces result in elastic deformation of the sphere and thus increase the area of contact beyond the Hertzian prediction and only allow attractive forces inside the contact zone; the DMT model assumes that the surface forces do not change the deformed profile from the expected Hertz theory. It assumes that the attractive forces all lie outside the contact area and are balanced by the compression in the contact region, which it takes as having the Hertzian stress distribution. This alternative idea produces a different pull-off force from the JKR model (Equation 7.199). Instead of the constant 1.5 the constraint becomes 2.0 for the DMT model. Muller et al. in 1980 [138] eventually reconciled the two by producing a parameter λ given by
Rγ 2 λ = 2 2. E ε
(7.201)
Here E is the elastic modulus and ε the interatomic spacing. In this revised model for λ > 1 the JKR theory applies. Extra
686
Handbook of Surface and Nanometrology, Second Edition
p =
9
d ϕ 8 ∆γ ε ε = − , dz 3 ε ζ ζ
(7.202)
where φ is the inter-atomic potential, Δγ is the change in surface energy specified by Δγ = γ1 + γ2 – 2γ12, ζ is the separation of the two surfaces outside the contact and ε is the intermolecular distance. The adhesive force therefore is given by
γ = 5 Jm–2
Pull off force Fp/AN ε
γ = 0.5 Jm–2
∇
3
10–4
∇
attempts have been made to refine the theories for the elastic deformation case [10], but effects which include plasticity have also been tried [139–141]. It has been shown that the surface forces alone could be sufficient to produce local plastic deformation, so it appears that theories exist for both very elastic and totally inelastic materials. What about in between? One idea has been presented by Chang et al. [142, 143] who retain the basic concept of the DMT model, which is that it is where the gap is formed between the two surfaces (just outside the contact zone) that supplies the attractive force. Attraction and repulsion seemingly cancel out for reasons already suggested within the contact. This “attractive pressure” p~ according to the “Lennard–Jones interaction potential” [153], is given by
10–11 700
Adhesion index θ
FIGURE 7.63â•… Adhesive force as a function of adhesive index. (From Chang, W. R., Etsian, I., and Bogy, D. B., ASME, J. Tribol., 109, 257–63, 1987.)
(a)
(b)
R
∫
(7.203)
(c) Contact radius
a
where r is the distance from the center of the contact and a is the contact radius. When the sphere and flat are just not in contact Equation 7.202 becomes z2
8
ε 8 1 ε Fs = πR∆γ − , 3 4 ζ0 ζ0
(7.204)
where ζ0 is the closest distance. When there is a point contact, ζ0 = ε and the adhesion at the point contact becomes
F0 = 2 πR∆γ ⋅
T
T 2a0
Fs = 2π p( z )dr
2
h
c
∞
R
(7.205)
Compare this equation with (Equation 7.99) for the JKR model which is 1.5πRΔγ. Using the basic idea of the Greenwood and Williamson model together with the plasticity index, Chang et al. [143] were able to get a value for the adhesion index θ for rough surfaces. They found that if θ > 100 the pull-off force due to adhesion becomes negligible for hard steel as opposed to 10 for rubber. They also showed that the adhesion force is negligible compared with the contact load when the plasticity index ψ ≥ 2.5 or when the surface energy Δγ ≥ 0.5 Jm-2. One interesting point to emerge was that for smooth, clean surfaces the adhesion can be well over 20% of the contact load and therefore it is not negligible as was often thought (Figure€7.63).
z1
JKR DMT
Unstable region Applied load
FIGURE 7.64â•… Contact between elastic sphere and rigid flat in the presence of surface force.
What is strange and disconcerting about this aspect of tribology is that two attempts to explain a phenomenon, one within the contact zone and the other outside, produced practically identical results. When taken with convincing practical evidence this indicates that there is an element of truth in both. In fact this is probably what happens; there is no doubt that the JKR model works for light loads. The main thing to remember is that the parameters of functional importance of the surface roughness are the radius and the standard deviation and the degree to which these affect the adhesion! The predictions of adhesion theory are such that what can definitely be said is that adhesion is most important for light loads and, taking this to its conclusion, there must be a finite contact area even for zero load! Figure€7.64 shows the contact between an elastic sphere and a rigid flat in the presence of surface forces: (a) the DMT model; (b) the JKR model; and (c) the change in radius of contact as a function of applied load. The pull-off force
687
Surface Geometry and Its Importance in Function
required is z1 = 1.5πRΔγ for the JKR and z2 = 2πRΔγ for the DMT model [144]. For ease of picturing what is going on in the contact region the adhesion index given in Equation 7.200 can be rewritten as −
E*σ 3/ 2 = R1/ 2 ∆γ
Hertzian elastic forces of pushingg apart the higher asperities ⋅ pull together (addhesion) forces of the lower asperities (7.206) Recent work [145] on the adhesion of elastic spheres confirms the applicability of the pull off force being 2πRγ for the Bradley solution [146] and 1.5 πRγ for the Johnson result in terms of the Tabor parameter (√RΔγ / E*) ⋅ E2/3 = μT. For μT > 3 the Johnson formula is true. For μT 1. The load approach curves become S shaped leading to jumps in and out of contact! This phenomenon presumably would lead to energy dissipation and could be interpreted as elastic hysteresis. The practical significance of this interesting result depends on the value of μT for real surface asperities. 7.2.4.4.1 Capillary Adhesion Capillary bridges form at the interface between two surfaces which are rough and in close proximity with some moisture present. The liquid forming the capillary bridges can be water condensation or fluid layers. For wetting liquids strong negative pressure results providing a strong attractive force which can under certain circumstances be opposed by elastic forces caused by asperity contact. The capillary forces can sometimes be much larger than the forces between the solids i.e., van der Waals (vdW) forces. Capillary forces can be useful e.g., for insects for walking on walls or they can be detrimental e.g., as in the case of MEMS where they may cause permanent sticking [147]. A well known phenomenon called “wringing” which causes gauge blocks to stick together sometimes inconveniently could be due to the effect of capillary bridges. Persson [148,150] has developed much useful work in contact and has also been working on many aspects of adhesion including new theories on capillary adhesion which will be loosely described in what follows. One interesting aspect of the work has been concerned with the ability of insects and small reptiles to climb up walls. He reports some fascinating detail about the adhesion used by geckos, spiders, and flies. According to his work and others a fly relies on capillary action between the hairs on its feet and a rough surface in order to be able to stick. Apparently its hairs are too thick and too blunt to take advantage of vdW forces on the surface: they cannot penetrate into the roughness to get sufficiently close to ensure that the atomic forces are strong enough to
support it so it has to rely on the facility of inserting liquid into the hair gaps to form bridges between the hairs and the surface which are strong enough to hold it. On the other hand spiders and surprisingly geckos have hairs, of the order of nanometers which can bottom the surface roughness and so do not need the capillary action making geckos easily the most efficient climbers. Capillary bridges are formed at the interface between the surfaces where the roughness is below the Kelvin distance dK
dK ≈
1.06 nm , ln ( PV PSat )
where PV and PSat are the vapor pressure and the saturated vapor pressure of the liquid respectively. He applies light loads and using his theory for contact [149] comes to the conclusion that a pressure p produces an average u , where (See equation 7.210)
α u p ≈ exp − , α ≈ 2, Rq
and depends on the roughness characteristics. As the theory is based on solid and continuum fluid mechanics, it cannot take on quantum mechanics so is not valid for cases where the calculated Kelvin distance is of nanometer values or less where solutions have to rely on molecular dynamic methods. Theory (abridged): This is considered for two surfaces which are rough and hydrophilic. It is assumed that this situation can be replaced by one in which there is a flat contacting a surface which has a roughness equal to the sum of the two, not necessarily valid incidentally, whose roughness is equal to z1(x) + z2(x). The flat is assumed to be smooth. Figure 7.65 shows the configuration. If there are humid conditions when the two surfaces touch then capillary bridges can form around contacting asperities as shown in the figure, providing that the liquid wets the surfaces. Under these circumstances the meniscus radius becomes rK where it is given by
rK = −
γ υ0 , k B T ln ( Pv PSat )
(7.207)
d
c
FIGURE 7.65 Capillary bridges between surfaces showing the critical dimension dc often referred to as dk (the Kelvin distance). (From Persson, B. N. J. J. Phys. Condens. Matter, 20(315007), 11p, 2008. With permission.)
688
Handbook of Surface and Nanometrology, Second Edition
related to the Kelvin distance by
off—which requires a slowly propagating de-bonding crack being the mechanism of bond breaking. The liquid will condense or evaporate at the interface in such a way that the Kelvin distance dK/2 is preserved for the radius of curvature of the meniscus. The situation is different for fast pull off because negligible condensation or evaporation can occur and it is therefore the fluid volume which is the constant parameter and not the Kelvin radius. If A(u) is the area covered by the liquid when the average separation is “u” then the conservation of volume requires that A0 u 0 = A(u)u so,
d K = rK ( cos(θ1 ) + cos(θ2 ) ) ,
(7.208)
and where θ1 and θ2 are the contact angles between the liquid and the solids. For the case when the liquid wets the surface the angles of contact will be zero and the the Kelvin distance will be as that given by dk = 2rk. In the liquid bridge is a negative (attractive) pressure p p = − pK where pK =
2γ . dK
∞
The work of adhesion per unit area ω
ω=
1 A0
∞
∫
U0
U0
∞
FPull (u ) du =
∫ ( p − p (u)) du, a
(7.209)
U0
where p (u) is the repulsive pressure from the hard surface at a mean separation u between the flat and the mean line of the rough (composite) surface. Equation 7.210 it has to be assumed that there is just contact so that the real area of contact is minimal. u0 is the equilibrium separation when FPull = 0 i.e., p(u 0) = pa. When the space between the flat and the rough surface is completely filled with water the attractive pressure pa must be balanced by the repulsive elastic asperity contact pressure which is given by
p (u ) = β E ∗ exp −
αu
, Rq
(7.210)
where α and β depend on the roughness. Hence
2γ αu = β E ∗ exp − 0 , Rq dK
(7.211)
and as pa = 2γ/dK for u > Rq, where dk is the Kelvin dimension.
in the sense that the macro-distortion often complicates the transfer of heat through the microcontact regions. Radiation can occur across the gap as a further means of heat transfer but is not usually significant within the temperature range encountered under normal conditions. The biggest problem in heat transfer is concerned with the conductance through the points of contact and this is why the surface texture of the surfaces is so important. Of principal importance is the number of contacts and their size. These two factors dominate thermal conductivity, although the quality of the contact also has to be taken into account. Following Sayles and Thomas [153], the conductance of an isotropic surface in elastic contact with a smooth plane can be evaluated using expressions already given for the real area of contact, so
7.2.4.5 Thermal Conductivity Heat transfer between two solids in contact can take the form of conduction, radiation and convection, usually all taking place in parallel as shown in Figure 7.67. Conduction can be via the actual points of contact of the two surfaces or through the interstitial gap if a conductor is present. Convection can also be the mechanism of transfer if there is a fluid present. In the case when there is a fluid present it is usual to consider that the area involved is the nominal area of the contact rather than the area related to the size of the spots. This is because the gap does not vary appreciably with load [152]. Heat transfer does, however, have its difficulties
Ar / A = ϕ(t )/2,
(7.215)
and the force per unit nominal area is
F / A = [ E * m2 /(2π)1/ 2 ] ϕ (t )/t ,
(7.216)
where t is the normalized separation of mean planes to the roughness m01/2 using Nayak’s terminology—valid here for t > 2, the highest spots—E* is the composite modulus and φ(t) is the Gaussian probability density function. It is obvious that the total conductance of the surface is intimately related to the number and nature of the contact points. Following for the present the argument using elastic deformation.
n = (2 π)−3/ 2 (m2 /m0 ) ϕ (t ) ⋅
(7.217)
The total conductance under this regime is therefore n
∑ 2a k ,
C0 = A
i
(7.218)
i =1
where k is the harmonic mean thermal conductivity or FIGURE 7.66 Wide range elastic effects. (a) Greenwood, (b) Persson, and (c) Whitehouse.
n
FIGURE 7.67 Transfer of heat between two surfaces.
∑ πa
Ai = A
2 i
⋅
(7.220)
i =1
Radiation
Convection
(7.219)
where a– is the mean contact radius. However,
Conduction
C = 2 knaA
Thomas and Probert [154] here consider an upper limit solution because of the lack of precise knowledge of the distribution of areas. This results in the inequality
a < (2 π )1/ 4 (m0 /m2 )1/ 2 t −1/ 2 ,
(7.221)
690
Handbook of Surface and Nanometrology, Second Edition
5
t
1.5
10–1
n*
C* –3 10
10–4n*
a*
10–4
10–4 F*
FIGURE 7.68â•… Estimate of conductance given elastic conditions. (From Sayles, R. S. and Thomas, T. R., Appl. Energy, 1249–67, 1976.)
from which C < (2 π 5 )−1/ 4 kA(m2 /m0 )1/ 2 t 1/ 2 ϕ (t ),
(7.222)
which is an estimate of conductance given elastic conditions. As t cannot be isolated from the above equations it is possible to plot the parameters in non-dimensional form (Figure€7.68) C*0 = C0 (m0 /m2 )1/ 2 (2 π 5 )1/ 4 /kA
F * = F (2 π )1/ 2 /E*m2 A n* = n(m0 /m2 )(2 π)3 / 2
(7.223)
a * = a (m2 /m0 )/(2 π)1/ 4 ⋅ Equation 7.114 according to Sayles and Thomas [153] is the upper bound conductance for an isotropic surface on a smooth one. This brings out a few points. The first is that the conductance is nearly proportional to the load, which is also true of the number of contacts in accordance with the general elastic theory and backed by Tsukizoe and Hisakado [154], Thomas and Probert [155] and Kraghelsky and Demkin [156]. Relative to the load, the relationships for C 0, n, and a can be obtained [153]: d( ln C )/d(ln F ) = (1 + 2t )/2(1 + t ) 2
C = 1.5( Ar / A)0.95
10–7
These results for the elastic contacts are obtained in terms of bulk parameters of the material and identifiable surface parameters a and n. Also the average spot size is almost independent of the load. This means that C 0 αn. For plastic conditions the conductance is different (from the work of Yovanovich and co-workers [157,158]): (7.226)
reported by Babus’ Haq and co-workers [164]. It is also fair to say that contact conductance has not yet been ideally predicted because of the messy problems that go with it and result directly from the passage of heat. Similar effects do not surround electrical conductance to the same extent. One such problem which is often found is that as the interface and structure heat up there is usually an associated thermal distortion if two dissimilar metals are used in the interface. This can obviously affect the pressures and even the nominal contact area. The direction of heat flow is also another variable. The thermal distortion problem is shown in Figure€7.69. The curvature of a surface as shown is 1/ρ = qα/k where ρ is the radius. Thus for the two surfaces the difference in curvature is q(α1 /k1 − α 2 /k 2 ).
(7.227)
This factor (Equation 7.227) indicates whether or not the macroscopic contact region so formed is a disc or an annulus. When the heat flows from material 1 to material 2 and α1/ k1 2 where this treatment is valid
C ∝ F 0. 9
n ∝ F 0. 8
a ∝ F 0 .1 .
(7.225)
As t╯→ ∞ the exponents of F for C and n→1 and for€a→0 which,€judging from Equation 7.225, makes sense physically.
FIGURE 7.69â•… Effect of distortion on heat flow in two solids. (From Bush, A. N., Gibson, R. D., and Keogh, G. P., Mech. Res. Commun., 3, 169–74, 1980.)
691
Surface Geometry and Its Importance in Function
constrictions—the former corresponding to what is in effect a form error dependent on temperature. Waviness, here meaning the long wavelength component of the spectrum, does have an effect in that it can determine the number of high spots which come into contact. It is sometimes said that the conductance of waviness and roughness is additive [23]. It is true that the waviness and roughness are both important but it is more likely that the waviness modulates the roughness (i.e., there is a multiplication rather than an addition when simple contact is being considered). Waviness has a slightly different effect for thermal conductivity as for electrical contact because of the possibility of heat convection or radiation across the gap between the two contacting surfaces. Waviness determines the property of the gap as well as the number of local contacts. The gap as such is not so critical in electrical conductivity [159–162]. In plastic terms the ratio of real to apparent contact is given simply by the ratio of the applied pressure to the yield flow. The relative contact pressure [163] also influences somewhat the effective thickness of the layers of gas in the interstitial voids, which can affect the rate of conduction. Usually the rate of radiation exchange is ignored [164]. It is small compared with the heat transferred by conduction. The microcontact conductance is proportional to a– and n. Contacts involving surfaces having lower asperity flank slopes in general present a higher resistance to interfacial heat flows. This is attributed to the fact that geometrically this would mean fewer true contact spots, which in effect acts as a sort of constriction when compared with high macroslopes, that is
C = (Q σ ) (∆τAmk ),
(7.228)
where Δτ is the temperature difference between the sur. faces, σ is the joint surface roughness and Q is the power dissipation. It seems from both these approaches that the number of contacts is very important for light loading (elastic) and heavier loading (plastic). If there is a fluid then it may transmit heat, but it is true to say that natural convection is suppressed and heat transfer occurs through the fluid by conduction if the gap is small (that is less than about 0.1 mm). The effective conduction across the fluid is given by
C f = k f A/δ,
(7.229)
where Cf is the fluid conductance [164] and δ is the mean interfacial gap produced by thermal distortion, although the conductance through the fluid (which is usually comparatively small) may become significant when the joint distorts and could under extreme conditions be comparable with the macroscopic conductance. The total thermal conductance is computed as the reciprocal of the harmonic sum of the macro- and micro-conductances
for each of the two contacting members. If a fluid or filler is present the conductance of the medium in the gap due to distortion must be taken into account [163–166]. Using the same nomenclature as for the elastic case the microscopic thermal contact resistance is
C = 2ank /g(a/b),
(7.230)
where a is the mean radius of microcontacts; g(a/b) is called a “constriction alleviation factor” and accounts for the interference between heat flows passing through neighboring microcontact bridges. The macroscopic thermal constriction resistance R is obtained by equipotential flow theory. If the contact plane is isothermal
R = tan −1 (r /c − 1)πck ,
(7.231)
where r is the radial position on the interface from the center and c is the total radius always greater than or equal to r [167]. This R corresponds to the g(a/b) of Equation 7.230 with a small variation, that is
g(a/b) = (2/π) tan −1 (b/a − 1) ⋅
(7.232)
Heat conductance is usually encouraged to be as high as possible so as to increase heat dissipation away from sensitive structures such as in propulsion units, space vehicles, [168,169], nuclear reactors, and so on. Sometimes, however, there has been an attempt to maximize thermal contact in order to improve the thermal insulation of a variety of systems [170,171]. In general the heat conductance is considered to be the summation of the macro- and micro-effects and also within each the contributions due to convection and radiation, that is, the conductance is given by
C0 = Csolid + Cfluid + Cgap(radiation) ,
(7.233)
where the radiation transfer per unit area between two infinite plates is approximately
Cgap = σ ′
ε1ε 2 T 14 − T 42 ⋅ ε1 + ε 2 − ε1ε 2 T1 − T2
(7.234)
From this equation it can be seen that the radiation exchange becomes more important the higher the temperature, as in Wien’s law where ε is surface emissivity. This radiation contribution is negligible at room temperature. The transfer across contacts for temperatures less than 1000°C rarely exceeds 2% of the overall conductance. Equation 7.233 will be considered again with respect to electrical contact.
692
Handbook of Surface and Nanometrology, Second Edition
7.2.4.5.1â•…Relationship between Electrical and Thermal Conductivity The prediction and measurement of electrical contact resistance are easier than for their thermal counterparts. For example, the relaxation times required for the contact to obtain the steady-state is much shorter for an electrical contact. The contact system is much easier to insulate electrically than thermally because of the fewer modes of transfer. Electrical conduction is more tractable [259] than thermal conductivity because the properties of the gap between the surface is not so critical. There is only one mode of transfer, i.e., at contact points. This means that the distribution of contact points spatially is more significant than for thermal conductivity. However, what happens at a contact point is more complicated than the thermal case because of the effect of thin films on the surface. Electrical contact can be used as an analog to thermal contact (with certain reservations) since both are functions of the same mechanical and geometrical parameters. However, because of the presence of surface films considerable departures from the Weideman–Franz–Lorenz law for electrical conductors have been found for contacts between metals according to O’Callaghan and Probert [165]. Therefore the analogy should only be used for perfectly clean contacts in high vacuum and when radiation can be neglected (C > . 4asµ ρ ρ 1+ F (1 − ν)
(7.253)
The treatment can be used for any shape of punch. The radius of the flat is “a.” It is a different way to look at the functional effect of shape and roughness and although interesting is less than practical, although the author does say that the characterization of roughness could be improved.
7.3 TWO-BODY INTERACTIONS— DYNAMIC BEHAVIOR 7.3.1 General In Section 7.2 of this chapter contact has been considered. Although not embodied in the formal definition of tribology, contact phenomena are a key to tribological performance. Historically the study of the subject of lubrication and an understanding of its importance preceded that of wear [179]
695
Surface Geometry and Its Importance in Function
because of its effect on friction. It will be considered after friction and wear in this section because it is less directly influenced by contact where the surface roughness is a critical factor. In the dynamic effects one surface moves relative to the other. The movement considered is usually tangential rather than normal although squeeze films and normal impact will also be discussed. There are three ways in which lateral movement can happen: sliding, rolling or spinning, or a mixture of them all. In this section sliding will be considered first because rolling is just a special case when the slide is zero. This leads directly to the subject of friction. Friction is at the very heart of tribology; wear results from high values of it and lubrication aims to reduce it.
7.3.2 Friction 7.3.2.1 Friction Mechanisms—General An elegant introduction to friction has been given by Archard (1975) in a Royal Institution Lecture [185] who quotes Dowson [186] that the first recorded tribologist is the man in the carving of a statue of Ti at Saqqara pouring lubricant in front of a sledge. This action is dated about 2400 BC, so it can be assumed that a great deal of thought has been given in the past to the reduction of friction. The study of friction initially was no more than the application of empirical knowledge. At a latter stage more general questions were asked and attempts were made to deduce general laws of behavior. The laws of friction as set down as a result of this rationalization of knowledge are called Amonton’s laws, although they may be equally claimed for Coulomb or Leonardo da Vinci. The laws are straightforward:
1. The tangential force is proportional to the vertical load between the surfaces 2. The frictional force is independent of the nominal area of the body in contact
Coulomb thought that friction was involved with surface texture. He postulated that the effort was needed to make the sliding body lift over the hills of the surface being contacted. This required that the asperities interlock. Obviously this idea cannot work because any work done is retrieved when the body slides into the valleys. What was missing was a mechanism for losing energy. This energy loss in practice appears as heat and has important consequences for the theory of tribology. As is now known the force of friction for solid contact is the force required to shear the actual junctions at points of contacts between the two surfaces. The actual area of contact is proportional to the load. Friction is also proportional to the load because this real area of contact is proportional to the load. This relationship has always been difficult to verify because of the difficulty of measuring the real area of contact. In the earliest experiments Amonton came to the conclusion that the force required to overcome friction was one-third of the load—for all cases! Even Euler (1750) [187] agreed.
This view expounded by great men took a lot of overturning. In fact it was much later, in the early 1900s, that rigorous experiments were made. Coulomb’s contention [188] that the friction was caused by the raising of the asperities of one surface over those of another was believed wholeheartedly, yet no one explained where the dissipation came from. It had to be inelastic behavior in the bulk of the material or in some way dependent on the shear of the surface layers. Hardy’s [189] and Bowden and Tabor’s [152] work gave a more realistic basis to the property. They made the whole approach fundamentally simple to start with by considering what Archard calls the unit event of friction, which is the deformation and rubbing of a single asperity. Because of this simple approach it was relatively easy to do experiments with the result that a mass of data was soon produced and some simple relationships derived, namely normal load W = Pm A
friction force F = SA
(7.254)
coefficient of friction = F /W = S /Pm where Pm is the mean contact pressure and S is the shear strength of the contact. Also, initially Pm was taken to be the indentation hardness of the softer of the two contacting materials. It was only when Archard developed certain aspects of his multiple contact theory that this assumption was seriously questioned. Incidentally, Archard extended the so-called adhesion theory of friction to wear, in which similar very simple expressions to Equation 7.144 were produced, namely
V / L = KA ⋅
(7.255)
The volume worn per unit distance of sliding will be proportional to the real area of contact. Here K is a wear coefficient which reflects the probability that an asperity clash will produce a removal or movement of metal. The important parameters here are the density of asperities and the probability that a contact produces a wear particle. Thus again the count of asperities (and associated contacts) is the important parameter, as it is in elastic and thermal contact and in friction, because each contact has an associated shear, and all contacts contribute to the friction. This is fundamentally different from wear in which a further random or probabilistic element enters into the equation. This is the probability of a summit breaking off in contact. The wear is much more of a fatigue problem than is friction. This latter statement refers to the mild wear regime rather than severe wear. Returning to friction, this is affected by all contacts at the point where the contact occurs and it is the properties of the asperities which initiate the friction [190]. At the contact of the asperities junctions appear. These can stick or adhere, in which case they have to be sheared
696
Handbook of Surface and Nanometrology, Second Edition
or broken when relative movement between the surfaces commences. Alternatively the asperities ride over each other as Coulomb suggested. A third possibility is that the asperities of the harder material simply plow through those of the softer material. There are therefore at least two and possibly three mechanisms associated with friction: adhesion, plowing and work done, which involves energy dissipation in some way, perhaps in the form of sound waves or compression waves. Archard’s demonstration that multiple asperities could obey Amonton’s laws is elegant and simple. It is not meant to be rigorous because some assumptions have to be made which are not completely true. However, his argument predates the concept and usefulness of the fractal idea of surfaces mentioned often in this book. For this reason it is worth recapitulation. Take elastic contact between a smooth sphere of radius R1 and a smooth flat under load W. There is a circular contact area A1 of radius b where
So with one smaller scale of spheres added to the original sphere the power of A against W has changed from 2/3 to 8/9. This process can be repeated adding farther sets of smaller and smaller spheres giving the power 26/27 etc. 7.3.2.2 Friction Modeling in Simulation* As the trend is for carrying out simulations rather than to do practical work it makes sense to make sure that the simulation as near as possible are accurate to the subject. With this in mind some investigators in Sweden at the Royal Institute of Technology and Lulea Technical University as well as numerous industrial companies [180] have carried out a useful exercise in reviewing the various models that are used in friction and making suggestions for better usage. Others in Sweden [181] consider the models which are likely to be used in predicting the performance of high precision activities such as the design of instruments and machines used, for example, in surface metrology (reported in Chapter 4). The list of subject areas covered include;
(7.256)
Using the Hertz theory [191] the load δW supported by an annulus of radius r width dr is
A1 = πb2 = K1 W 2/3 .
δW =
2 W 1/ 3 r2 1− 2 3 K1 b
1/ 2
2 πr dr .
(7.257)
If now the sphere is covered with small spherical caps of radius of curvature R2 0,
1/ 2
, (7.258)
F = FApp
where FC = µN. Here μ is the coefficient of friction and N is the normal force. 7.3.2.2.2 Viscous
12
m 2 πrrdr
F = K V × v,
where K V is the velocity coefficient of friction. The coulomb and the viscous can be combined to give (7.259)
2 /3
( K1m)1/ 3 ⋅
(7.260)
v = 0 and FApp < FC ,
F = FC tanh ( k tanh v ) ,
or if the simulation is using Matlab
A2 = K 2W 8 9 3 2 where K 2 = K 2 4 3
1 − r 2 b2
These subject areas involve the following friction regimes
(v = velocity) where sgn is the sign of the function
where K2 for R2 corresponds to K1 for R1. The area a2 of each contact is r −b
.
The area a2 of each contact is 2/3
1. Transient sliding under dry, boundary and mixed lubrication conditions 2. Micro-displacements of engineering surfaces subjected to transient sliding 3. Simulation and control of systems 4. Combined models to represent physical behavior for use in simulation 5. Stochastic situations involving surfaces and noise.
F = FC sat (ksat v),
* See also atomistic simulations.
(7.261)
697
Surface Geometry and Its Importance in Function
where the function “sat” is sat ( ksat v ) = min ( ksat v,1) , v ≥ 0, = max ( k sat v , −1) , v < 0. Either of these combined versions have the disadvantage that the assumption is that F = 0 at v = 0. But for small oscillatory movement they are acceptable except for the fact that there is an inaccurate final position. 7.3.2.2.3 Stribeck This attempts to take into account regimes from the stationary to the dynamic covering boundary and full film. i abs ( v ) Thus, F = FC + ( FS − FC ) exp − sg n ( v ) + K V × v , vS (7.262)
7.3.2.2.5 Dahl This model given in Ref. [180] is,
where FC is Coulomb sliding friction force FS is the maximum static force vS is the sliding speed coefficient K V is the viscous friction coefficient i is an exponent usually one or two.
The Stribeck model is good for sliding surfaces but not good when the sliding direction changes and when the friction direction is dependent on the applied force and the contacting surfaces adhere. Then a better alternative is suggested by Andersson et al. [180] as being i abs ( v ) F = FC + ( FS − FC ) exp − tanh ( k tanh v ) + KV × v, v
F is the friction force, z is the help function, δ is the micro displacement before gross movement and FMAX is the maximum force which can be considered to qualify as Coulomb. To reduce the number of simulations the equation of motion and the friction model are usually made non-dimensional. The reason for this is understandable but it does tend to lose impact and the physics! The Dankewicz model is often acceptable but at very small movements the model can behave as a linear spring because it does not involve a dissipation term. This can introduce undamped high frequency oscillations in the frequency response, which implies that there is no sliding in the contact region but simply elastic deformation which is sometimes considered to be unrealistic in practical cases especially in machine tool and possibly coordinate measuring machine applications where micro slip is likely to be greater than micrometers.
(7.263)
z z = x 1 − sgn ( x ) δ F F = MAX × z . δ
(7.264)
dF dF dx = × dt dx dt
As
for i = 1
dF F = σ 0 1 − sgn ( x ) x . dt FC
(7.266)
In its simplest form this is equivalent to that of Dankowicz and as such also acts as a linear spring for small displacements. 7.3.2.2.6 Canudas de Wit This model has similarities with those above [184]. In its simplest form it is
where the tanh or the “sat” functions replace the sgn (sign) function. 7.3.2.2.4 Dankowicz Very small movements are often involved in surface instruments so extra conditions are sometimes needed. At the start of the imposition of the force to initiate movement there is a very small imperceptable movement designated micro slip which is not the gross movement sought but is sub-micron and may be due to elastic compliance in which case the Coulomb model is of possible use at the start. An alternative is due to Dankowicz [182]. This is a first order differential equation with an added “help” function z.
dF F i = σ 0 1 − F FC . sgn ( x ) sgn 1 − sgn ( x ) , (7.265) dx FC
z = x −
g(v) =
1 σ0
abs ( x ) ×z g ( x ) 2 abs ( v ) ( F + F − F ) exp − (7.267) C S C v S
F = σ 0 z + σ1 z + KV × x where σ1, σ2 are coefficients. This takes the Stribeck term into account through the g(v) function and the damping term σ1 (dz / dt ) in the friction force prevents the model from behaving as a linear spring. After taking into account the above models for use in simulations the authors recommend a final form, namely,
z z = x 1 − sgn ( x ) δ i F z abs ( v ) F = 1 + S − 1 exp − FC × + KV × v, vS δ FC
(7.268)
698
Handbook of Surface and Nanometrology, Second Edition
This is able to include the different friction conditions of micro-slip, Coulomb friction, Stribeck and viscous friction. Values of δ can be estimated or determined from micro displacement tests, alternatively it should be possible to use the values of the correlation lengths of the surfaces to get a realistic idea of the value. Usually FC, FS, vS, i, and K V can be Â�determined from dynamic friction tests using oscillating tests.
“weld growth” which is comparable with or greater than the contact width. It is suggested that the protective action of films is of some significance in non-conforming mechanisms running under conditions of near rolling contact. As the slide/ roll ratio is increased the distance of sliding experienced by any given asperity as it passes through the contact region will increase and this could be a significant factor in the damage and welding friction. Quin [192–194] and Childs [195] have studied the effect of oxide films in sliding contact. They find that the wear rate is very dependent on temperature, which points to a correlation between the growth of oxide film and the temperature. Although the study of oxide films and their behavior is not really surface geometry, the nature of the geometry can decide how the films behave under load. Also the study of the oxide debris produced in a wear experiment can give clues as to how and in what mode the contact is formed. For example, an analysis of debris from iron and steel in friction and wear situations is significantly different from the analysis of debris obtained in static experiments. For instance, there is under certain conditions a much higher proportion of Fe3O4 and FeO than Fe2O3 which is the normally expected film material. It has been suggested that the presence of Fe3O4 coincides with low wear and high protection against metal–metal contact. In brief the oxides are significant. Quite how they work in practice is not known for certain. Archard said that “an analytical approach which may provide a theory for the mechanism and magnitude of frictional energy dissipation in the presence of oxide films is sorely needed” [196]. Mechanistic models for friction have been attempted (e.g., Ref. [197]). The idea is that contact can be considered to be between two bodies of cantilever-like asperities whose stiffness is less than the bulk. This stiffness is in the tangential rather than the normal direction (Figures 7.72 and╯7.73). The asperities take the form of springs shown in Figures 7.72 and 7.73. This model considers friction from the cold junction and asperity climbing modes. There are four stages in the development of the sliding characteristic of the model:
It is suggested that the protective action of films is of some significance in non-conforming mechanisms running under conditions of near-rolling contact. As the slide/roll ratio is increased, the distance of sliding experienced by any given asperity as it passes through the contact region will increase, and this could be a significant factor in the damage and welding friction. Quin [192–194] and Childs [195] have studied the effect of oxide films in sliding contact. They find that the wear rate is very dependent on temperature, which points to a correlation between the growth of oxide film and the temperature. Although the study of oxide films and their behavior is not really surface geometry, the nature of the geometry can decide how the films behave under load. Also, the study of the oxide debris produced in a wear experiment can give clues as to how and in what mode the contact is formed. For example, an analysis of debris from iron and steel in friction and wear situations is significantly different from the analysis of debris obtained in static experiments. For instance, there is under certain conditions a much higher proportion of Fe3O4 and FeO than Fe2O3, which is the normally expected film material. It has been suggested that the presence of Fe3O4 coincides with low wear and high protection against metal–metal contact. In brief, the oxides are significant. Quite how they work in practice is not known for certain. Archard said that "an analytical approach which may provide a theory for the mechanism and magnitude of frictional energy dissipation in the presence of oxide films is sorely needed" [196]. Mechanistic models for friction have been attempted (e.g., Ref. [197]). The idea is that contact can be considered to be between two bodies of cantilever-like asperities whose stiffness is less than that of the bulk. This stiffness is in the tangential rather than the normal direction (Figures 7.72 and 7.73). The asperities take the form of springs shown in Figures 7.72 and 7.73. This model considers friction from the cold-junction and asperity-climbing modes. There are four stages in the development of the sliding characteristic of the model:
1. The stable junction (cold junction), together with the gradual increase in slope, helps the asperities to carry the total applied normal load just at the down-slip threshold.
2. An increase in the tangential force brings the full normal load from the down-slip threshold to the up-slip threshold.
FIGURE 7.71 (a) Apparent contact metal to metal, (b) oxide layers at surface boundary.
FIGURE 7.72 Cantilever model for asperities in friction. (Shahinpoor, M., and Mohamed, M. A. S., Wear, 112, 89–101, 1986.)
FIGURE 7.73 Cantilever spring model for friction. (Shahinpoor, M., and Mohamed, M. A. S., Wear, 112, 89–101, 1986.)
3. The cold junctions fail, allowing the upper asperity to slip upwards; this ends when the asperities meet at their tips.
4. Gross sliding occurs.
This model and others like it cannot be expected to mirror the truly practical case, but it does enable discussion to take place of the possible mechanisms producing the various known phases of surface contact behavior. Such phases are elastic displacement in the tangential direction (quasi-static behavior), the micro-slips along asperities, and gross sliding. To be more relevant, areal behavior encompassing summit distributions and a general random-field characterization of these modeled asperities would have to be investigated. Some examples of friction and surfaces follow, illustrating the complexity. Often the very surface geometry effects which influence the frictional characteristics are those by which the frictional forces change the surface geometry itself. Other models of friction have been put forward. An unusual approach has been made by Ganghoffer and Schultz [198]. They start by artificially inserting a third material in between the contacting surfaces. The intermediary has to be thin. They, in effect, exchange the two-body situation for a three-body one, which apparently is more tractable and which enables very general laws of friction and contact to be produced. In some respects this type of approach seems more useful for describing the properties of the thin layer between the surfaces rather than the contact and friction between the
two principal surfaces. They also get into some mathematical problems as the thickness of the layer is taken to the limit of zero.

7.3.2.2.9 Atomic Friction Model
The question asked is: what is the nature of the transition from macroscopic to atomic friction? The answer, basically, is a move from an analog, i.e., continuous, condition to a discrete one. The explanation usually starts with the traditional laws involving practical rough surfaces, then moves on to consider the effect of single asperities rubbing against smooth surfaces, and finally to the situation where atoms move against atoms. It is not always clear why it is necessary to include the second step, but it seems to help in the transition of understanding [199]. To a first order the starting place is the well-known Amontons' law
F = µW.
Because of the roughness the actual contact area is very small, being made up of a large number of small contacts. As the load increases more asperities come into contact, and therefore F increases. If there is only one asperity it obeys Hertz's rule for elastic contact, giving the area πa² ≈ KW^(2/3), where K is dependent on the shear force per unit area, the elastic modulus, and the radius of the contact, illustrating the simple result that F depends on the load differently for multiple contacts than for a single contact. For smooth surfaces F = As, where A is the actual area of contact and s is the shear force per unit area. It is through the properties of A and s that it is necessary to relate the mechanisms of macro-scale friction to those of atomic-scale friction. The mechanical forces causing deformation, plowing, noise, etc. are replaced by molecular forces, inter-atomic forces, chemical bonding, and so on: the discrete replacing the continuous. A simplified example of why the forces on the atomic scale produce discrete effects is seen in Figure 7.74, which shows that in the atomic equivalent of stick–slip the jerky movement has to occur in atomic intervals.
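Returning to the load dependence noted above, the short sketch below contrasts the two scalings: a single elastic (Hertzian) contact gives F ∝ W^(2/3), while a multi-asperity contact, where the real area grows roughly in proportion to the load, recovers the Amontons-like linear law. The constants are arbitrary illustrative values, not material data.

```python
# Friction F = s * A for a constant interfacial shear strength s.
s = 1.0                 # shear strength per unit area (arbitrary units)
K = 0.1                 # Hertzian constant lumping modulus and tip radius

def F_single(W):
    # one elastic asperity: real area ~ K * W**(2/3) (Hertz)
    return s * K * W ** (2.0 / 3.0)

def F_multi(W, area_per_load=0.05):
    # many asperities: total real area grows ~ linearly with load
    return s * area_per_load * W

for W in (1.0, 10.0, 100.0):
    print(f"W = {W:6.1f}   single: {F_single(W):8.3f}   multi: {F_multi(W):8.3f}")
```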
FIGURE 7.74 Atomic spacing and discrete stick–slip.
FIGURE 7.75 Asperity and debris interaction. (From Moslehy, F. A., Wear, 206, 136, 1997.)
Another example is in lubrication, where the presence of an atomic film allows friction to be controlled by the formation and breaking of bonds within the film rather than between the sliding surfaces.
7.3.2.2.10 Force Variations due to Asperity and Debris Interaction
Although much work has been involved in evaluating the significance of the relationship between the normal force and the lateral force, little has been said about the variations in these forces. Some attempts have been made, however, for example by Rice and Moslehy [200]. Their work was principally engaged in modeling all aspects of the pin-and-disc machine for friction and wear, and in particular the variations in the measured values of the normal and lateral forces between the pin and the disc. Not only were the variations due to the contacting members examined, but also the variations caused by the pin-and-disc machine itself. In fact the modeling was concerned largely with the various compliances and dampings of the machine and how these affected the interpretation of the measured values. They did also list some of the effects of having variations in the forces. In effect they were trying to simulate an interface between the sliding surfaces in which there were two sources of excitation, namely the interaction between the asperities (or, strictly, the points which make contact) and the interaction between the debris produced by the movement and the surfaces; see Figure 7.75. Not only does the force variation cause the formation of debris, it also assists in the migration of the debris and, with the surface interaction, contributes significantly to the chemical–mechanical effects which affect the subsurface. Early on it was suspected that intermittent motion had an effect on the long-term time dependence of friction [201]. Comparing data between run-in and steady-state conditions indicates that the mean value of friction increases with time and the fluctuations become more pronounced. This indicates that wear debris is being progressively generated. Also, when comparing the steady-state data with that at higher sliding speeds, the mean value of the normal force increases whereas the friction force decreases, but the fluctuations in both the normal and the frictional forces increase. This can be attributed to an increase in the frequency of asperity and debris interaction.

7.3.2.2.11 Micromechanics of Friction—Effects of Nanoscale Roughness
This [202] was a study of gold on mica; there were two investigations.
1. A single nano-asperity of gold was used to measure the molecular level of friction. The adhesive friction was found to be 264 MPa, with a molecular friction factor of 0.0108, for a direct gold–mica contact. The nano-asperity was flattened although its hardness at this length scale was 3.68 GPa. It was found that such high pressure could be reached with the help of condensed-water capillary forces.
2. A micrometer-scale asperity with nanometer roughness exhibited a single-asperity response of friction. However, the apparent frictional stress of 40 MPa fell well below the Hurtado and Kim model prediction of 208–245 MPa. In addition, the multiple nano-asperities were flattened during the friction process, exhibiting load- and slip-history-dependent behavior.
There could be some confusion because some small-height bumps are allowed near to the nano-asperity, and their behavior under load is commented on with regard to the capillary forces.

7.3.2.2.12 General
Asperity contact models usually considered are shown in Figure 7.76. From the molecular to the macroscopic level there are, according to Li and Kim [202], frictional responses which exhibit distinctly different characteristics at different length scales. Three typical configurations are depicted in Figure 7.76a as types A, B, and C: a molecular contact, a single asperity, and a multiple-asperity contact.
FIGURE 7.76 Different asperity contact models. (Modified from Li, Q., Kim, K. S., Proc. R. Soc. A, 464, 1319–43, 2008. With permission.)
Correspondingly, characteristic responses of the frictional forces as functions of the normal loads are shown in Figure 7.76b as curves A, B, and C, taken from Li and Kim's Figure 1. Consider briefly Figure 7.76a.
Type A
There is a point raised about the so-called frictional law

τ = τ0 + αp, \quad (7.269)

where τ0 is the adhesive stress at the interface, α is the molecular friction factor, and p is the normal pressure. According to this, the frictional force F increases linearly with the normal load P for a fixed true area A. In general τ0 depends on the adhesive interaction as well as the surface registry of the contacting surfaces; α is sensitive to lattice configurations, e.g., the coefficient was 0.52 for Ref. [112] and only 0.02 for Ref. [110]. A point made is that in the past the estimates obtained of these two parameters have been quite inaccurate because of their very small values; for this reason, in these experiments, care was taken to ensure that reliable measures were obtained.

Type B
A lot of work has been done in this area, particularly in the field of adhesion and hence friction. For example, the work of K. L. Johnson on the JKR model and of Derjaguin on the DMT model have made important contributions to the understanding of the behavior of large soft spheres and small hard spheres, respectively.

Type C
Concerning experiments of the second sort, at larger length scales the surface roughness becomes an inevitable factor because the inter-surface interaction can be substantially reduced due to roughness, except at the nanoscale where the effect of the actual roughness geometry gives way to effects like adhesion. In many models it has been assumed that macroscopic plastic effects had to be used for the deformation; however, it has been realized that the mechanical properties of materials are strongly size dependent: classical plasticity is probably not applicable to nano-asperity contacts.
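Returning to the Type A law, Equation 7.269 translates directly into a force law F = τ0A + αP for a fixed true contact area A. The short sketch below evaluates it using the gold-on-mica values quoted earlier (τ0 = 264 MPa, α = 0.0108); the contact area is a guessed nanoscale spot and is purely illustrative.

```python
import math

# Type A (molecular) friction: F = tau0 * A + alpha * P, Equation 7.269.
tau0 = 264e6                # adhesive stress (Pa), from the text above
alpha = 0.0108              # molecular friction factor, from the text above
A = math.pi * (5e-9) ** 2   # assumed true contact area of a ~5 nm spot (m^2)

for P in (1e-9, 10e-9, 100e-9):   # normal loads (N)
    F = tau0 * A + alpha * P
    print(f"P = {P:.0e} N  ->  F = {F:.2e} N")
```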
Discussion
In this paper there is a considerable body of results which lie outside the brief of this handbook and will not be reported on; however, most of the relevant results of this work are concerned with the single nano-asperity and its associated bumps rather than with the micro-asperity experiment. The authors highlight two effects as being particularly significant. One is the flattening of the nano-asperities by loading; the other is the important role, at the nanoscale, of water condensation. They point out that asperities at the nanoscale have enhanced
strength relative to their macro-scale behavior, yet despite this they appear to be flattened by external loads which are not excessive. The reason for this they attribute to the considerable forces exerted by capillary action of water on the surface of the mica: in fact of the same order of magnitude. This is a very interesting observation. The question arises, however, as to how significant this result is in practice. Is this water condensation likely to be an important factor in nanoscale contact or not? Obviously it depends on the application and its associated materials, but it would have been useful to have had a comment on this from the authors.
7.3.2.2.13 Influence of Texture Lay on Friction
Menezes et al. [203] found that the coefficient of friction is at a maximum when the pin is moving across the lay and reduces noticeably as the pattern becomes more random. The actual value of the roughness is not as significant as expected. Transfer of material and the onset of stick–slip are also more affected by the direction and pattern of the lay, which the authors simply call texture. Figure 7.77 shows the influence of texture on the different components of friction, and Figure 7.78 shows that a small amount of lubricant can make a difference.
FIGURE 7.77 Influence of texture on components of friction. (From Menezes, P. L., Kailas, K., and Kailas, S. V., Wear, 261(5–6), 578–91, 2006. With permission.)
FIGURE 7.78 Influence of small amount of lubricant with texture. (From Menezes, P. L., Kailas, K., and Kailas, S. V., Wear, 261(5–6), 578–91, 2006. With permission.)
7.3.2.3 Dry Friction
According to Santer et al. [204], a systematic experimental investigation of the correlation between surface roughness and friction, especially in dry conditions, is very difficult because of the many other influences which can mask the relationship. Two of the important types of effect are the friction-induced production of tribo-reaction layers, which alter the friction without changing the topography, and the generation of wear particles in the sliding contact zone.

7.3.2.3.1 Tribo-Reactive Layers
The experiment was carried out using various steel and ceramic balls of 10 mm diameter sliding on DLC-coated 100Cr6 substrates, for example Al2O3 balls at a load of 10 N over a stroke length of 200 μm at 20 Hz. The friction changed
from 0.2 to 0.02 after 2000 cycles due to the transfer of a carbon film from the coating to the ball. Also, SiC and Si3N4 were tried, with the result that the friction alternated between short low-friction periods and long high-friction periods, which could only be due to the reaction of Si with carbon, as it did not happen with steel or Al2O3.

7.3.2.3.2 Friction on Artificial Surfaces
A special set-up measuring topography, position, and friction was made using an AFM on some specially prepared surfaces. This step was deemed necessary because of the authors' difficulty with real topographies, although it was not really clear why this was so, and it only introduces problems of lack of validity into the results. One experiment used an Al2O3 ball of 10 mm diameter on a rigid surface of titanium having trenches 0.4 μm deep and 285 μm wide, separated by 300 μm, with a normal force of 0.25 N. The first stroke showed an increase in friction over the stroke despite constant external conditions. They comment on the fact that dips in friction correspond to the grooves in the surface. Unfortunately, scratches produced by the ball caused many unpredictable effects. In another set of experiments the grooves were 130 μm deep, 1000 μm wide, and separated by lands of 1000 μm. These were etched onto a silicon surface. Why the dimensions of the grooves had these values is not clear. The ball was 5 mm diameter, made of silicon nitride, and operated at 0.15 N. On the whole the results from this investigation were inconclusive, apart from the fact that transfer and the presence of debris completely spoilt the experiment. The fact that the choice of parameters was somewhat arbitrary did not make any difference to the conclusion that dry and partially dry friction is still not well understood.

7.3.2.3.3 High Friction Surfaces
This is a subject which is not discussed as much as measures for reducing friction because it does not impinge directly on dynamic applications [205]. However, this does not mean that it is not important. The authors point out that high static
"friction," the reluctance to start movement, is of vital importance in all kinds of frictional joints: flange joints, shaft couplings, fastener systems, and bolted joints, all intended to keep the relative position or to transfer mechanical power. By increasing the coefficient of friction between, for example, the two surfaces of a flange joint, it would be possible to reduce the number or the size of the bolts without losing the grip. Hence increasing the coefficient of friction can make a construction cheaper, smaller, and lighter. One proven way to increase the friction is to increase the plowing component, i.e., the term due to the asperities pushing material aside during movement, rather than the adhesive component. Any small amount of lubricant can change the adhesive term dramatically, but not so much the plowing term, so the emphasis has to be placed on ways of ensuring that if any movement occurs then plowing has to take place. This is where the design of the "gripper" surface has an effect. An effective way has recently been introduced by Pettersson and Jacobson [206]. They have introduced sharp pyramidal asperities on the one surface by means of MEMS technology. By photolithography and anisotropic etching, well-defined pyramid-shaped pits are introduced into a silicon wafer. This is then coated with diamond, which in turn is electroplated with nickel, and the silicon is etched away. This results in an entirely flat surface except for the now positive diamond pyramids or ridges. A typical surface has 22% of its area covered with pyramids of height 85 μm; see Figure 7.79. A simple approach used by the investigators assumes that the "counter surface," the surface which the diamond tips contact, is idealized, having no work hardening. Then

μp = plowing force/normal force = H·AP/(H·AN) = AP/AN. \quad (7.270)

H is the hardness of the counter surface; AP and AN are the projected plowing area and the projected load-carrying area. This gives a plowing coefficient of friction of 0.71–1.0 for a face-on pyramid and a value of 1.0–1.25 for an edge-on diamond, taking some account of the flow effect around the diamond or any
FIGURE 7.79 Surface grip with high friction showing alternative contact situations between the microtextured diamond surface and the metallic counter surface (a) idealized where load is carried by the pyramids, (b) shows more load, (c) situation with pile up where the ridges begin to take up the load (d) imperfect alignment, and (e) situation with rough metallic counter surface. (From Pettersson, U. and Jacobson, S., Tribol. Lett. 2008–2009.)
pile-up in the indentations. It is obvious that this value of friction is a really useful contribution to the static friction. This assumes that only the diamonds are contacting and that there is no flat-on-flat contact. In practice there will be pile-up in front of the diamond which will eventually foul the body of the surface (see Figure 7.80) and tend to force the surfaces apart. The authors estimate that this will happen when the pyramids are indented to about 0.74% of their height. So here is an example where there are two surface effects, one on the primary surface and another on the secondary or counter surface. The deterministic nature of the primary allows a degree of control of the indentation, whereas roughness on the counter surface is usually a problem in control; but in practice the friction level is robust: it is only weakly influenced by initial overloading and by the surface roughness of the metal surface or the surface lay.

FIGURE 7.80 Pile-up limitation: the pile-up volume is the same as the indent volume. (From Pettersson, U. and Jacobson, S., Tribol. Lett. 2008–2009.)
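Equation 7.270 is purely geometric, so it is easy to explore numerically. The sketch below evaluates AP/AN for a cone pushed to depth d, for which the ratio collapses to the classical (2/π)cot θ of the next section; the pyramid ranges quoted above (0.71–1.0 face-on, 1.0–1.25 edge-on) follow from the corresponding projected areas in the same way. The geometry here is a consistency check, not the authors' own calculation.

```python
import math

# Equation 7.270: mu_p = A_P / A_N (the hardness H cancels).
# For a cone of semi-apex angle theta indented to depth d:
#   contact radius a = d tan(theta)
#   frontal plowing area  A_P = (1/2)(2a)(d) = a*d  (triangular cross section)
#   load-bearing area     A_N = pi*a^2/2            (leading half of the circle)
def mu_plow_cone(theta_deg, d=1.0):
    theta = math.radians(theta_deg)
    a = d * math.tan(theta)
    A_P = a * d
    A_N = 0.5 * math.pi * a * a
    return A_P / A_N

for th in (45.0, 60.0, 70.0):
    mu = mu_plow_cone(th)
    bt = (2.0 / math.pi) / math.tan(math.radians(th))
    print(f"theta = {th:4.1f} deg   A_P/A_N = {mu:.3f}   (2/pi)cot(theta) = {bt:.3f}")
```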
7.3.2.3.4 Plowing
The first attempt to estimate the plowing friction was due to Bowden and Tabor [12]. They came up with the formula

\mu_P = \frac{2}{\pi}\cot\theta, \quad (7.271)

assuming no local friction and that the indenter was a cone. They were interested for a number of reasons. One was that the scratch test for the hardness of materials used a sharp stylus as a tool; it was therefore important to understand the mechanism underlying the test. Another reason was the behavior of surfaces rubbing together, especially when one was considerably harder than the other. In this case each of the asperities of the harder surface in effect plowed into the softer surface, giving rise to one aspect of the frictional force. It was the use of formulas such as the above that enabled them to realize the magnitude and the existence of the adhesion between the surfaces, which was such a landmark in understanding friction and the role of surface roughness. They made, however, simplifying assumptions which are now having to be addressed. Perhaps the person who has been sorting out most of the problems in this area is Lafaye, who has been looking into indenter shape, elastic effects, pile-up of material, and so on [207]. Lafaye concluded that for plastic materials Bowden's formula holds, but that the apparent coefficient of friction increases with depth of penetration for soft materials while being independent of it for hard materials [208]. The basic problem of determining μP resolves itself into working out the cross-sectional area of the indentation in the soft material. Contrary to expectations it is not a cone but a hyperbola, being the intersection of a cone with a plane parallel to the axis of the cone, although this does presuppose that the intersection does not occur at the axis. For a triangular shape, taken as an approximation of a cone, the value of μP is

\mu_P = \frac{2}{\pi}\cot\theta\, f(\omega), \quad (7.272)

where ω corresponds to the back angle of the cone,

\mu_P = \frac{2\pi\cos\omega\,(1-\sin\omega)}{\pi+2\omega+\sin 2\omega}\,\frac{\cot\theta}{\pi}, \quad (7.273)

or, for a truer representation,

\mu_P = \frac{2\pi\sin^2\omega}{\pi+2\omega+\sin 2\omega}\,\frac{\cot\theta}{\pi}\left[\frac{1}{4\tan^2\omega}-\frac{1}{2}\tan^2\frac{\omega}{2}+\frac{1}{2}\ln\left|\tan\frac{\omega}{2}\right|\right]. \quad (7.274)

For a spherical tip

\mu_P = \frac{2}{a^2}\,\frac{\rho^2\sin^{-1}\!\left(\frac{a\cos\omega}{\rho}\right)-a\cos\omega\sqrt{\rho^2-a^2\cos^2\omega}}{\pi+2\omega+\sin 2\omega}, \quad (7.275)

where

\rho = \sqrt{R^2-(a\cos\omega)^2\tan^2\omega}. \quad (7.276)

For the "real" tip, which is assumed to be the combination of a cone truncated at the apex into a sphere, the result is Equation 7.275 multiplied by the factor

S = \frac{1}{8}\left[A^2-\frac{1}{A^2}-4\ln|A|\right]+\left[\frac{1}{\tan^2\omega}-\tan^2\frac{\omega}{2}+4\ln\left|\tan\frac{\omega}{2}\right|\right], \quad\text{where}\quad A=\frac{\sqrt{a_0^2-a_r^2}-a_0}{a_r}. \quad (7.277)
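A short numerical comparison of these forms is given below. Equation 7.273 is used as reconstructed above; note that it reduces to the Bowden–Tabor value (2/π)cot θ as the back angle ω goes to zero, which is a useful consistency check on the reconstruction.

```python
import math

# Triangular (Bowden-Tabor) approximation, Equation 7.271.
def mu_triangular(theta):
    return (2.0 / math.pi) / math.tan(theta)

# Conical form with back angle omega, Equation 7.273 as reconstructed above.
def mu_cone(theta, omega):
    num = 2.0 * math.pi * math.cos(omega) * (1.0 - math.sin(omega))
    den = math.pi + 2.0 * omega + math.sin(2.0 * omega)
    return (num / den) * (1.0 / math.pi) / math.tan(theta)

theta = math.radians(60.0)
for omega_deg in (0.0, 10.0, 30.0, 57.3):   # ~1 rad is the limit quoted below
    om = math.radians(omega_deg)
    print(f"omega = {omega_deg:5.1f} deg:"
          f"  triangular = {mu_triangular(theta):.3f}"
          f"  cone = {mu_cone(theta, om):.3f}")
```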
Lafaye concluded that his treatment, taking into account the factors outlined above, gives frictional coefficients about 20% higher than those obtained just using the triangular approximation to the conical stylus. What he did not mention was that very often the practical tip takes the form of a conical shape with a flat on the tip, rounded slightly at the edges, which is easier to model than the spherical-tip version. He does, however, recommend that if the rear angle of the stylus is greater than one radian the triangular approximation is probably acceptable for most purposes. See also Ref. [209].

7.3.2.3.5 Friction in Metal Processing
Tribology is very important in all aspects of manufacture, but especially in material processing. This includes cutting mechanisms such as in turning and milling, but also the abrasive processes such as grinding. Basically the friction mode determines the way in which the chip is formed and consequently the efficiency of the process. It is also important in forming, where the friction between the material and the die determines the conditions of flow. The reader is recommended to consult specialist books on this subject, for example Ref. [210].

7.3.2.4 Wet Friction—Clutch Application
In heavily loaded wet clutches sintered friction materials are sometimes used due to their resilience at high loads and high temperatures and their relative cheapness. During the lifetime of the clutch, changes occur to the topography. These influence the friction characteristics of the clutch and thereby affect the anti-shudder performance of the transmission system. This problem has been investigated by Nyman et al. [211] for surface operating conditions for wet clutches at low sliding velocities and high clutch pressure, typically
0.5 m/sec and 10 MPa. Under these conditions it is common for stick–slip and shudder to occur. Nyman says that the general opinion is that the friction–velocity curve should be that of a low static coefficient of friction μS and a dynamic coefficient that increases with sliding velocity. The aim of the investigation was to find out whether the change in the frictional behavior could be explained by changes in the topography and, if so, what the significant parameters are. The experiment was set up in the usual way by running a set of clutches under controlled conditions, then cleaning them and measuring a number of surface parameters to see if there was any correlation with the test results. There was a factor due to the use of sintered materials which has to be reported: they are so much softer than the opposing separator plates, which were made of hardened steel, that the characteristics of the plates were ignored in the presence of the much greater changes in the sintered friction discs. Tests were carried out before the roughness experiments to determine the importance of the lubricating oil used. It turned out that this effect could be ignored. The topography investigation was limited to the micron scale, which meant that the oil grooves were not included in the study. The part of the surface deemed important in the frictional performance was measured: this was considered to be the fraction of the surface within about 8 μm of the highest summits. Figures 7.81 and 7.82 show some of the test results. The figures on the left show tests on older friction material, while those on the right show results taken on newer material. It is immediately obvious that the bearing index shows nothing, because it does not vary throughout the life cycle. The skew results are disappointing because skew is usually good for characterizing wear processes; although the skew is sensitive to spurious effects, each of the values is an average of 10 readings, which goes some way to smoothing out odd values.
FIGURE 7.81 Results of bearing index and skew variation. (From Nyman, P., Maki, R., Olsson, R., and Ganemi, B., Wear, 41(1), 46–52, 2006. With permission.)
FIGURE 7.82 Results of peak curvature and rms slope. (From Nyman, P., Maki, R., Olsson, R., and Ganemi, B., Wear, 41(1), 46–52, 2006. With permission.)
It indicates that there is more than one process happening. The peak curvature shows a significant trend upward with time, indicating a sharpening, which seems strange: this is the opposite of what would be expected, for example, with a non-porous material used in, say, plateau honing. Also there is a noticeable shape which could be due to two different wear processes acting on the newer material, which could be something to do with a complicated running-in. Similarly for slopes. What can be said is that these two parameters definitely change, indicating that there is a possibility of using them for the prediction of the life left in a clutch.

7.3.2.4.1 Note on Computer Simulation
This whole concept of carrying out speculative experiments on a computer is still in its infancy, not so much in the ideas as in the application and, more importantly still, in the verification of results. This point has been somewhat neglected, but in the field of surface metrology and the behavior of surfaces it is critical. At one time very simple parameters such as Ra were used to give an idea of the surface, whether or not it was likely to be useful in a functional situation. Later on wavelength parameters were added, and later still the more elaborate random process analysis. The idea was to get from a control position, as in the control of manufacture, to a predictive situation in which the functional capability could be guaranteed. In order to do this, rather than do the experiments fully, the link between performance and specification is being increasingly filled by the computer simulation of the surfaces in their working environment. However, the rapid move to more and more simulation should not slow down the practical verification of results. It may be cheaper and quicker to do simulations, but this does not necessarily produce more valid results. See the comments on "simpirical" verification below.
7.3.2.5 Atomistic Considerations—Simulations
The word atomistic implies that atoms are involved [178]. This does not just mean that the scale of size is atomic; it also has another meaning, associated with the method of processing, in which individual atoms are involved. In the mid 1980s atomic simulations for the analysis and description of the "collective behavior" of the liquid–solid state and the gas–solid state began to be introduced. Such behavior included the investigation of equilibrium and non-equilibrium systems where the evolving surface or interface may be subject to redistribution or the flow of particles, such as in growth mechanisms or etching. The behavior can broadly be broken down into two:

Microscopic mechanics. This includes the formation of defects and other surface features, where a wide range of microscopic mechanisms might be involved and need to be taken into account.

Macroscopic propagating surfaces. The surface or interface between surfaces may exhibit a complex evolution, e.g., anisotropic etching, which can lead to complex multi-valued surface fronts propagating at different rates and displaying different morphologies depending on the exposed surface orientation; this is particularly relevant for constructions used in MEMS.

There are three main approaches: Molecular Dynamics (MD), Monte Carlo (MC), and Cellular Automata (CA). Whereas the MD method tends to deal with individual atoms as they are, the MC and CA methods simplify the nature of the interactions between the atoms, replacing them by simple rules which may be tailored to retain the essential behavior of the system. This approach enables the simulation of large system activity involving tens of millions of atoms or cells. In Refs. [212–214] above the differences between the MC and the CA
are considered with respect to data storage. This leads to the adoption of the "octree" (octal tree) in both of the simulation methods because it offers substantial reductions in memory as well as allowing fast addressing and searching procedures. To illustrate the methods, an example in anisotropic etching is used. The main difference between the MC and CA methods is that the MC, as its name suggests, takes a random approach whereas the CA is deterministic. When simulating using the MC technique the propagation, i.e., the development of the process, is stochastic. In a variant, KMC (kinetic Monte Carlo), the times during the simulation process are faithfully measured and monitored so that if the total evolved time or some other time factor is needed it is available. This facility could well be useful if times taken to converge to the steady state, from the transient state, are needed. In CA, however, the system evolves in a deterministic way and time is an integral feature. Just because the method is deterministic does not mean that random-like patterns cannot sometimes emerge. In addition to the random/deterministic difference there is another fundamental one, which is that MC is sequential in terms of dealing with atoms whereas the CA is completely parallel and deals in averages. In both methods the atoms of the surface or interface are visited one by one and their neighborhoods are inspected in order to determine the removal rate (or probability), deciding whether the atom is to be removed or to remain attached. The essential difference is that in the KMC method the atom is removed as soon as this has been decided, whereas the CA method first decides which atoms are to be removed and then removes them all together. So the CA takes two loops to do a time step and the KMC takes one. This and other processing reasons make KMC more suited to the microscopic collective processes and CA better suited to surface propagation. The Movable Cellular Automata (MCA) method has been used by Popov and colleagues, an example of which is given below in the friction problems associated with wheel and rail.

7.3.2.5.1 MCA Dry Friction Rolling
The behavior of the friction mechanism between the wheel and the track of rolling stock is still not properly understood. For this reason there have been some attempts at developing new theories about the friction mechanisms involved, for example by Bucher et al. [212]. They develop a model based on "MCA." The influence of pressure and sliding velocity can be investigated using this technique, which apparently has been devised in Russia just for complicated processes like this [213]. The reader is advised to consult this paper by Popov and Psakhie.

MCA
In the MCA method a medium is represented as an ensemble of discrete elements, each characterized by variables such as the center-of-mass position, the value of the plastic deformation, the rotation vector, and
discrete variables characterizing the "connectivity" between neighboring automata. The evolution of the modeled ensemble of automata was defined on the basis of the numerical solution of the Newton–Euler equations of motion by a so-called "integration scheme of the second order of accuracy." This probably means that the model is a second-order approximation to the true situation. To give an example, friction to a first order is simply a number, i.e., the coefficient of friction, whereas the relationship to the sliding velocity would have to be included to make it a second-order approximation to the actual, very complex frictional behavior [214]. The MCA model seems to be a good tool to investigate the formation of a "quasi-fluid" layer which is supposed to be generated during the rubbing process between the two solids. In it physical as well as chemical processes take place, such as fracture, deformation, and welding, all at this very small scale of size. Figure 7.83a shows the structure at Fy = 255 MPa and Vx = 3 m/s, and Figure 7.83b the net structure illustrating the thickness of the quasi-fluid layer. Figures 7.83c and d give the structure and net for Fy = 383 MPa and Vx = 5 m/s, illustrating the thicker layer for the larger normal force. The equations for a single automaton are given below in polar form, where m is mass, R the position (radius), θ the rotation angle, J the moment of inertia, FN and FT the normal and tangential forces, and KT the moment of the tangential force. To get the full picture in the micro situation these equations have to apply to all of the automata in the surface system, so "m" becomes "mi," etc., and the forces apply from all the neighbors. This is basically the integration referred to above:
m\,\frac{d^2R}{dt^2} = F_N + F_T, \qquad J\,\frac{d^2\theta}{dt^2} = K_T. \quad (7.278)
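A minimal sketch of how Equation 7.278 might be advanced in time for an ensemble of automata is shown below. The pairwise force law is a toy stand-in (a real MCA code derives FN, FT, and the moment KT from the bond state of each neighbor pair), and all numerical values are assumptions.

```python
import numpy as np

# Toy integration of the Newton-Euler pair of Equation 7.278 for N automata.
N = 64
m, J = 1.0, 0.1                        # mass and moment of inertia (assumed)
rng = np.random.default_rng(0)
pos = rng.random((N, 2)) * 1e-4        # positions (m)
vel = np.zeros((N, 2))
theta, omega = np.zeros(N), np.zeros(N)

def forces(pos, cutoff=5e-6, k=1e3):
    """Placeholder neighbor interaction: linear repulsion inside a cutoff."""
    F, K = np.zeros_like(pos), np.zeros(len(pos))
    for i in range(len(pos)):
        d = pos - pos[i]
        r = np.linalg.norm(d, axis=1)
        near = (r > 0) & (r < cutoff)
        F[i] = -k * d[near].sum(axis=0)    # net of normal + tangential forces
        K[i] = 1e-9 * near.sum()           # placeholder tangential moment
    return F, K

dt = 1e-7
for step in range(100):                    # simple second-order style update
    F, K = forces(pos)
    vel += (F / m) * dt
    pos += vel * dt
    omega += (K / J) * dt
    theta += omega * dt
```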
Each movable automaton is further characterized by a set of mechanical parameters corresponding to the mechanical properties of the simulated material. Automata can have a size chosen for a specific application, but in the case of the rubbing of solid surfaces it is usually taken as the size of a typical asperity of the surfaces in contact. In the case of this investigation the modeled object consisted of four layers:

An upper layer of automata moved horizontally at velocities between 1 and 10 m/s in the different numerical simulations.
Two intermediate layers with initial roughness in the nm range, representing the upper and lower surface regions making contact.
A lower layer arranged to represent an immovable object.

The problem is that although surface roughness affects performance in a dynamic system, just those physical phenomena which cause the effect on performance can also be instrumental in changing the surface itself. A constant normal pressure was applied to all the automata of the upper layer, and the size of the automata was kept fixed at 2.5 μm in all the numerical simulations.
FIGURE 7.83 (a and b) Structure and net for Fy = 255 MPa and Vx = 3 m/s; (c and d) structure and net for Fy = 383 MPa and Vx = 5 m/s, showing the quasi-fluid layer in (b) and (d) at different normal forces and lateral velocities. (Popov, V. L. and Psakhie, S. G., Phys. Mesomech., 4, 73–83, 2001; Dmitriev, A. I., Popov, V. L., and Psakhie, S. G., Tribol. Int., 29, 444–49, 2006.)
FIGURE 7.84 Profile through the bonds of the lower constructed surface for Fy = 255 MPa and Vx = 3m/s. (Popov, V. L. and Psakhie, S. G., Phys. Mesomech., 4, 73–83, 2001; Dmitriev, A. I., Popov, V. L., and Psakhie, S. G., Tribol., Int., 29, 444–49, 2006.)
It was assumed that the surfaces obeyed fractal characteristics, which, as usual, caused problems in the matter of determining the real area of contact and the pressures at the very small sizes. The authors proceeded to get around the fractal problems by resorting to material properties, such as the hardness of the asperities being finite. The MCA technique allows various activities between bonds and couplings to be performed, which enables a simulation to take place of the destruction and regeneration of the surface topographies of both of the contacting surfaces. Figure 7.84 shows a path which has been drawn connecting all the bonds of one of the contacting surfaces, in this case the lower surface. Having done this it is then possible to analyze this simulated profile; Figure 7.85 shows the power spectrum.
FIGURE 7.85 Power spectrum of the profile generated in Figure 7.84. (Popov, V. L. and Psakhie, S. G., Phys. Mesomech., 4, 73–83, 2001; Dmitriev, A. I., Popov, V. L., and Psakhie, S. G., Tribol., Int., 29, 444–49, 2006.)
This simulation method imposes a digital increment corresponding to half the size of the automaton, designated δx in the formulae below. The profile length is l. The Fourier coefficients a and b are given by the standard formulae
a(K_N) = \frac{2}{l}\int_0^l f(x)\sin(K_N x)\,dx = \frac{2}{l}\sum_{m=1}^{M} f_m\int_{(m-1)\delta x}^{m\delta x}\sin(K_N x)\,dx,

b(K_N) = \frac{2}{l}\int_0^l f(x)\cos(K_N x)\,dx = \frac{2}{l}\sum_{m=1}^{M} f_m\int_{(m-1)\delta x}^{m\delta x}\cos(K_N x)\,dx, \quad (7.279)

where K_N is the wave number corresponding to the frequency.
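Because the profile is piecewise constant over each increment δx, the inner integrals in Equation 7.279 have closed forms, which is how the sketch below evaluates the coefficients; the profile and increment are synthetic stand-ins.

```python
import numpy as np

# Equation 7.279 for a piecewise-constant profile f_m on steps of width dx:
# the integral of sin(Kx) over [x0, x1] is (cos(K x0) - cos(K x1)) / K, etc.
def fourier_coeffs(f, dx, K):
    l = len(f) * dx
    m = np.arange(1, len(f) + 1)
    x1, x0 = m * dx, (m - 1) * dx
    a = (2.0 / l) * np.sum(f * (np.cos(K * x0) - np.cos(K * x1))) / K
    b = (2.0 / l) * np.sum(f * (np.sin(K * x1) - np.sin(K * x0))) / K
    return a, b

rng = np.random.default_rng(0)
f = rng.standard_normal(512)     # synthetic heights (arbitrary units)
dx = 1.25e-6                     # half of a 2.5 um automaton (assumption)
l = len(f) * dx
for n in (1, 2, 4):              # wave numbers K_N = 2 pi n / l
    K = 2.0 * np.pi * n / l
    print(n, fourier_coeffs(f, dx, K))
```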
The spectrum decays, which the authors claim is typical of a fractal surface. It is statements like this which reduce the credibility of the conclusions reached in this and similar work. A considerable amount of thought has gone into the MCA method of simulation and how it can describe and explain the behavior of mass and energy in the quasi-fluid zone. In fact, in other work, Popov and Schargott [215] liken the process of material movement and deposition to a diffusion process, which seems to be a plausible mechanism. Their argument goes as follows. Let the height of the lower surface, piled up in this intermediate region from particle-like elements, be h(x, t). The flux density of such particles, j(x, t), is given by
j(x, t) = -D\,\frac{\partial}{\partial x}h(x, t), \quad (7.280)
and the equation describing the transportation of these particles as a function of time and position is
\frac{\partial}{\partial t}h(x, t) + \frac{\partial}{\partial x}j(x, t) = 0. \quad (7.281)
The combination of the two equations describes a diffusion equation, namely
\frac{\partial}{\partial t}h(x, t) = D\,\frac{\partial^2}{\partial x^2}h(x, t). \quad (7.282)
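Before turning to the stochastic criticism, Equation 7.282 itself is easy to exercise: the minimal explicit finite-difference sketch below spreads an initial pile-up spike into a smooth mound, which is the smoothing action the diffusion picture attributes to material transport. The grid, time step, and diffusivity are illustrative values chosen to satisfy the explicit-scheme stability condition D·dt/dx² ≤ 1/2.

```python
import numpy as np

# Explicit Euler scheme for dh/dt = D * d2h/dx2 (Equation 7.282).
D, dx, dt = 1e-12, 1e-7, 1e-3          # diffusivity, grid step, time step (assumed)
assert D * dt / dx**2 <= 0.5           # stability of the explicit scheme

h = np.zeros(256)
h[128] = 1e-9                          # initial pile-up spike (m)
for step in range(1000):
    lap = np.roll(h, 1) - 2.0 * h + np.roll(h, -1)   # periodic boundaries
    h += (D * dt / dx**2) * lap
# h now approximates a Gaussian-like mound centered on the original spike.
```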
Equation 7.282 describes the average behavior in a stochastic sense. At this point they introduce a stationary, Gaussian-distributed noise in space and time, and this is where the problem arises, because the probability distribution of impacts, deformation, and subsequently wear in space surely has to be uniform. This automatically imposes an autocorrelation envelope which is exponential, due to the Poissonian and hence Markov statistics. The power spectrum of the surface must have,
therefore, a decay at large wave numbers of K^−2 (i.e., 1/ω²), which is not dependent on fractal mechanisms, and so this conclusion should not be reached unless there is evidence that this MCA technique produces a decay which is significantly different from the value of −2 quoted. The other non-fractal indication is that all the spectra shown have a flattening at low values of K! Unfortunately, what has been described is only a taste of the very interesting technique proposed by Popov and others. It is not possible to follow the arguments properly without a knowledge of the MCA method they propose in more detail than that given here, but it seems that the pressures they evaluate in the simulations are very large at the outset, with the result that practically all the main assumptions are concerned with plastic flow effects almost at the start of contact, thereby removing the initial surface finish aspects: the original surfaces appear to have little impact on the surfaces subsequently generated. Also, the assertion that the surfaces produced are fractal is not sustainable without more evidence. Their main conclusion is that the dynamic processes of plastic deformation and fracture at the nano-level are likely to be of most importance.

7.3.2.6 Rubber Friction and Surface Effects
This subject is extremely important, covering such issues as tire–road behavior and windscreen wiper blade performance, and has been investigated in great depth by Persson in his seminal paper [22] and by others. Rubber friction differs in many ways from the frictional properties of most solids because of the very low modulus of elasticity and the high internal friction over a wide frequency band. Much of the earlier work is due to Grosch [216], who showed that the internal friction of rubber actually determines the external frictional characteristics of rubber on hard surfaces. The temperature dependence of the friction of rubber on solids is the same as that of the complex elastic modulus of rubber itself, i.e., it is mainly a bulk property. Rubber friction exhibits two components, namely adhesion and hysteresis. Adhesion is relevant only for friction against smooth surfaces, whereas the hysteretic component is relevant for friction against rough surfaces. According to Persson, the asperities of the rough surface exert variable forces on the rubber during relative motion, which cause fluctuating deformations and consequently, via the bulk properties, dissipation of energy and hence friction, so that the roughness is the source of the friction. The question is: by how much? Consider the rubber to be pressed very hard against the rough solid so as to make complete contact. Persson maintains that the magnitude of the hysteretic contribution depends only on the ratio of "h," some measure of the surface profile height, to λ, which is a measure of the roughness spacings: in other words, on the roughness slope. In his words, "The surface roughness of different length scales contribute equally to the frictional force if the ratio of h/λ is a constant." The other
factor is velocity dependent, namely the contribution to the coefficient of friction is at a maximum when v/λ = 1/τ, where τ ∼ exp(ΔE/kBT) determines the viscoelastic properties of the rubber. ΔE is the energy change in the molecular "flips" which accompany the movement, T is the temperature, and kB is Boltzmann's constant. The roughness is therefore doubly important, being involved in both factors influencing friction. Thus
\mu = f\left(\frac{h}{\lambda}, \frac{\tau v}{\lambda}\right).
Taking the first term and letting the power spectrum be p(ω), the slope spectrum will therefore be ω²p(ω), which is the contribution to the friction. If the surface is fractal, say with a profile dimension of 1.5 (the center value), p(ω) will be ∼ω^−2, making the slope spectrum a constant and consequently the contribution to friction constant for all frequencies. For fractal dimensions different from 1.5 the contribution to friction will not be constant. Machined surfaces having a spectrum of the form p(ω) ≈ (ω² + α²)^−2 will have a rising contribution until ω ∼ α, when it will also become constant. This makes sense because this frequency and higher correspond to spacings about the size of the abrasive grit impression and smaller. The total contribution to friction will not, in practice, for either type of spectrum, be infinite, because the rubber will not be able to reach the bottom of the smaller wavelengths, especially in the valleys, where the pressures are very much less than on the peaks, so they will not contribute fully.

7.3.2.7 Other Applications and Considerations
Some examples of the importance of the roughness of surfaces and friction in everyday life have been reported by Thomas. Most are concerned with slipping on floors. That floors and shoes can be ranked by roughness measurement is not disputed, but quantitative values are elusive [217]. The most commonly reported studies on friction are concerned with roads [218], O-ring seals, and various modes of lubrication [219]. Recent work on friction in engineering seems to have been concentrated in a few areas. These are:
1. Frictional effects of manufacturing processes
2. Temperature effects of friction
3. Mechanisms of plowing
4. Stick–slip

In the hot rolling of aluminium, the detail about the surface contact is the most difficult thing to quantify [220]. It is now possible to show the dependence of the coefficient of friction on the reduction and on the speed. As the reduction is increased, the coefficient first drops, reaches a minimum, then increases. At the highest speeds it is fairly constant at about 0.2 ± 0.1. The magnitudes and variations have been attributed to the adhesion theory of friction. It is suggested that, as there is a mixed regime of lubrication, it is entirely possible that bonds form between the asperities of the roll and the rolled strip. The level of reduction and the roll speed vary the number of asperities in the contact zone and the time during which they are under pressure. It is somewhat strange that attempts to correlate the surface finish with friction in steel rolling [221] conclude that the simple parameters Ra and Sa do not have the expected correlation. This is to some extent understandable. What is more worrying is that some of the "areal" peak parameters such as Spk were also ineffective. Attempts have been made to relate the surface texture directly to the frictional characteristics of aluminium sheet and formed steel. Work with aluminium is becoming more important since car bodies and engines are being made with it. There has been an increase in experiments to control friction, drawability, and paintability. Consistent results [222] indicate a friction model which depends on plastic deformation and the real area of contact under pressure. Coefficients of friction across the rolling direction were about 40% lower than along the rolling direction. As plastic flow seemed to be the dominant mechanism, the actual size and shape parameters of the roughness turned out to be less important than is usually the case when controlling the process. Some very careful experimentation has been carried out with sheet steel by Sharpelos and Morris [223]. They quite convincingly tie down frictional coefficients to Ra and the average wavelength. In particular they determined that friction (and hence formability) became lower as Ra was increased, with the proviso that the wavelengths were short. They suspected that the trapping of lubricant was helping to keep the surfaces separated. What they did not appear to realize was that their results pointed directly to the use of a single slope parameter for control. Some advances have been made in this area, e.g., by Kang et al. [224].
7.3.2.7.1 Dry-Friction Rolling, Sheet Material
The friction behavior in rolling mills is not always related to the measured transverse roughness of the rolls. The evolution of a surface during a rolling campaign changes both the transverse roughness and the roughness in the rolling direction. Some work has shown that the pattern of the roughness on the rolls could also be important [224]. The authors show that the mill friction changes occurring during a sheet rolling campaign are reflected in the sheet surface topography. They define a surface texture anisotropy ratio (STAR) which is obtained from the standard 3D (areal) parameters measured routinely. An empirical relationship is developed between measured STAR values and mill friction on aluminium sheet produced in two different rolling campaigns. Thus, two parameters are extracted from the power spectral density of the surface in the transverse (x) direction and from the longitudinal (roll) direction (y), assuming a fractal type of spectrum with parameters a and b:
\text{Transverse PSD}(x, f) = \frac{a(x)}{f^{\,b(x)}}, \qquad \text{longitudinal PSD}(y, f) = \frac{a(y)}{f^{\,b(y)}}. \quad (7.283)
\mathrm{RMS}(x, f_{n-p}) = \int_{f_n}^{f_p}\mathrm{PSD}(x, f)\,df, \qquad \mathrm{RMS}(y, f_{n-p}) = \int_{f_n}^{f_p}\mathrm{PSD}(y, f)\,df. \quad (7.284)
The parameter proposed to augment the roughness values is defined as

\mathrm{STAR} = \left[\frac{\mathrm{RMS}(y, f)}{\mathrm{RMS}(x, f)}\right]^{2}. \quad (7.285)
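In practice the two rms values in Equation 7.285 are obtained by integrating the measured PSD between the instrument cut-offs in each direction. The sketch below does this for a synthetic height map; the map, the cut-offs, and the simple Parseval-style normalization are stand-ins for a real areal measurement, and the squaring follows Equation 7.285 as reconstructed.

```python
import numpy as np

def band_rms(profiles, dx, f_lo, f_hi):
    """rms content between two spatial-frequency cut-offs, averaged
    over a stack of parallel profiles (rows of the array)."""
    n = profiles.shape[1]
    f = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(np.fft.rfft(profiles, axis=1)) ** 2).mean(axis=0)
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(2.0 * psd[band].sum() / n**2)   # approximate Parseval scaling

rng = np.random.default_rng(1)
z = rng.standard_normal((128, 128)) * 1e-8   # synthetic height map (m)
dx = 1e-6                                    # sample spacing (m)
rms_x = band_rms(z, dx, 1e3, 1e5)            # rows: transverse direction
rms_y = band_rms(z.T, dx, 1e3, 1e5)          # columns: rolling direction
STAR = (rms_y / rms_x) ** 2                  # Equation 7.285
print(STAR)
```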
This is one way of specifying anisotropy in terms of purely spatial characteristics. The best way would have been via the autocorrelation function, but the way suggested makes indirect use of the high-pass cut-off filters in the instrument. As the basic idea is to be able to check the condition of the rolls from surface measurements on the sheet, there has to be a link between the surface texture and some parameter of the mill. This is achieved by means of the coefficient of friction of the rolling process. It is known that the coefficient of friction of the process can be calculated according to the formula by Roberts [225]:
T , ν (1 − r ) FR 1 − 2 r
(7.286)
where the rolling force is F, the torque T, the reduction r, the forward slip ν, and the deformed roll radius R. Also, it is assumed that there is a constant coefficient of friction over the arc of contact between the roll and the strip. It is this coefficient which can be directly related to roll condition and in particular to roll wear. This value of the friction coefficient can be related to the strip finish by the empirical formula developed here, which is shown in Figure 7.86; plotting just the roughness does not work, as seen in Figure 7.87.
FIGURE 7.86 Mill friction factor as function of the product of 3D rms roughness and STAR parameter. (From Kang, K., Pelow, C., and Witham, L., Wear, 264(5–6), 434–8, 2008. With permission.)
FIGURE 7.87 Mill friction factor as function of 3D surface rms of rolled sheet surface. (From Kang, K., Pelow, C., and Witham, L., Wear, 264(5–6), 434–8, 2008. With permission.)
The empirical relation is
\mu = 0.11 \times (\text{3D RMS}) \times \mathrm{STAR}. \quad (7.287)
The friction coefficient decreases with roll wear, as the amplitude power of the longitudinal roughness decreases more than that of the transverse roughness. It is this change which is seen by the STAR parameter. Grit lines become smoother in the rolling direction with roll wear as the shorter waves are reduced in amplitude, and the resistance to metal flow in the rolling direction decreases. In their words, "a measurement of the 3d (areal) roughness gives a good indication of the state of the work rolls. Hence, the changing surface texture parameter of the sheet samples and the mill parameters can be used for process and quality control of sheet product." Presumably the forward slip is the parameter directly related to wear, and this can be found once the friction coefficient has been worked out from the surface roughness of the sheet; then, given the known or easily accessible parameters in Roberts' formula (the torque being found from the pinch rolls' armature current), the slip and hence the wear are obtained. This is a good example of using the roughness to control manufacture, in this case via friction. The only concern is the fact that, as it stands, the coefficient of friction worked out from the roughness has the dimension of length rather than just being a number.

7.3.2.7.2 Frictional Sound and Roughness
Most of the emphasis in research into the noise of friction has been aimed at the squeal of brake pads, which is the noise emitted when flat-on-flat solids are rubbed together in what is usually a dry condition [226,227]. They found that there was a correlation between the squeal and the number of contact plateaux: a large number of small plateaux gives a higher likelihood of squeal than a small number of large plateaux. They introduced a "squeal index" to
warn of the onset. This was in the form of a threshold size of plateau: if the size is below about 0.01 mm², then there is a tendency to squeal. There has been more general work carried out on frictional sound and topography, such as Ref. [228]. The investigators decided to exclude squeal from their work and concentrated on flat-on-flat friction rather than the more concentrated tests involving styli on flats, which are less representative of practical situations. They used flat stainless steel plates, 80 × 20 × 3 mm, which they roughened by means of sandpaper of grit sizes 40, 100, and 400, giving Rz roughness values of 10.9, 6.2, and 3.4 μm, respectively. Unusually, the specimens were rubbed together by hand. This method was deliberately adopted to ensure that as many sources of extraneous vibration and noise as possible were removed from the scene, such as problems with the specimen holders and motors. It was also claimed that this method got rid of the possibility of stick–slip in the movement. This good idea has to be offset against the lack of control of the speed of rubbing and the pressure exerted on the contact. The specimens were rubbed in a crossed-action mode similar to that of a crossed-cylinder machine. In this way one specimen was rubbed across the lay and the other along it. The experiments were carried out in an anechoic chamber, and noise below 450 Hz was removed as irrelevant. The high-frequency limit was set at 20 kHz. Resonance modes of the specimens were found by the ball impact test, i.e., dropping a ball onto the plate and analyzing the noise impulse response. This was a necessary move because five prominent peaks were thereby identified as specimen related. The variables were a high speed of 170 mm/sec and a low speed of 80 mm/sec, together with two different loads estimated as 25 N and 0.6 N, respectively. As can be expected there was high scatter in the results, but not an unworkable amount. It was found that the noise peaks corresponding to the specimen did not change with speed or pressure, only broadened. It was found that following the behavior of the first friction-related peak gave the most sensitive results. This changed from 2.4 to 5.4 kHz as the roughness changed from 12.4 to 0.8 μm, in the presence of the specimen frequency of 2.3 kHz (which seems too close for comfort). Nevertheless, there seems to be a consistent correlation in the results, namely that as the roughness reduced, the pitch of the generated noise signal increased. This trend is to be expected because the "unit machining event" for small grit is narrower than for large grit; but they reported no change in the peak position at different speeds, only an increase in intensity with speed, so the shift, they conclude, is due to the different roughnesses. They measured noise intensity rather than frequency and came to the conclusion that the following expression held:
$$ \mathrm{SPL} = \left( \frac{R_a}{b} \right)^{c} \ \mathrm{dB}, $$
where b and c are experimental constants. Other investigators have reported changes of the noise with roughness [229]:
$$ \Delta \mathrm{SPL} = 20 \log \left( \frac{R_{z1}}{R_{z2}} \right)^{M}, \qquad (7.288) $$
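To get a feel for the size of this effect, Equation 7.288 can be evaluated directly. A minimal Python sketch, assuming the logarithm is to base 10 (as is usual for decibel quantities), taking M ≈ 0.8 as noted below, and using the Rz values of the roughened plates described above:

```python
import math

# Eq. (7.288): change in sound pressure level between two roughnesses,
# dSPL = 20*log10((Rz1/Rz2)**M); M ~ 0.8 as reported in the text.
M = 0.8
Rz1, Rz2 = 10.9, 3.4   # um, roughest and smoothest plates above

dSPL = 20.0 * math.log10((Rz1 / Rz2) ** M)
print(f"predicted SPL change: {dSPL:.1f} dB")   # about 8 dB
```

On these numbers the roughest and smoothest plates would differ by roughly 8 dB, which is at least of an order that should be detectable above the scatter reported.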
The index M seems to take a value of about 0.8; SPL is the sound pressure level. It is not clear what the uncertainty is, but the measurement of intensity is likely to be much more prone to extraneous influences than the measurement of frequency. In either case the characterization of frictional noise and its relationship to roughness is difficult.

7.3.2.8 Thermo-Mechanical Effects of Friction Caused by Surface Interaction

In dry sliding the friction generates a considerable amount of energy in the form of heat, sound, and vibration. Various investigators have attempted to take into account the asperity collisions and the different elasto-plastic regimes that are affected by the temperature rises that result. Because of the complexity there has been considerable variability in the results that have been reported. The earliest work in this area is due to Blok [230] and Jaeger [231]. More recently, amongst the many worthy contributions, those of Tian and Kennedy [232] and the more general work of Ling [233] are of particular interest because of the role played by the surface roughness. Unfortunately, the physical property most difficult to measure in practice in surface contact situations is temperature. Not only is there the difficulty of accessing the gap between the surfaces, there is also the transient and rapidly decaying nature of the temperature itself. For these reasons experimental evidence of what is actually going on is rare if not negligible. At best the effects of the temperatures can be examined, but the judgment of such evidence is only possible by practical scientists. As a result computer simulations dominate the scene and there is the risk of exchanging the physics for packages. The computer simulations are usually based upon finite element methods and there have been some attempts made to simplify the approach. One such example is due to Liu et al. [234]. The exercise claims to take into account steady-state heat transfer, asperity distortion due to thermal and elastic effects, and material yield by means of elastic-perfectly plastic assumptions. Finite element, discrete convolution, FFT, and conjugate gradient methods are all employed to get solutions. The model is applied to analyze a large number of numerically generated surfaces covering a “wide variety of statistics.” The result of all this activity is “semi-empirical” formulae for maximum asperity flash temperatures, contact pressure, real contact area, and normal approach, amongst other things. The objective is laudable but the disconcerting factor is that the word “empirical” is being used, probably unintentionally, to validate the formulae! “Empirical” means judgments based on observations made from experiments made without theory (Oxford English Dictionary). This does not mean without insight into designing the experiments, but it does mean a completely independent source of information. There is even a tendency of some investigators to refer back to earlier computational data to justify their newer
computational data. Whitehouse has coined the term “simpirical” for this, i.e., trying to get empirical data by simulation! The acid test, of course, is whether complicated, completely numerical predictions of extremely inaccessible but highly significant physical properties actually work. One example of recent work is concerned with a practical problem [235]. A two-dimensional finite element model of a rigid rough surface, characterized by fractal geometry, sliding over a semi-infinite elasto-plastic medium was used to simulate a magnetic read head moving over a hard computer disc. A particular feature that they highlight is that their analysis stresses the need for a simultaneous, i.e., coupled, approach to the development of the thermal and mechanical fields rather than the separate approach usually adopted, probably because it is easier. They cite the earlier work in this initiative by Gong and Komvopoulos [236]. The evolution of deformation in the semi-infinite medium due to thermomechanical surface loading is interpreted in terms of temperature, von Mises equivalent stress, and equivalent plane strain. In addition, the effects of friction coefficient, sliding and interference distance on deformation behavior were also analyzed. Their procedure was as follows:
1. To develop the surface geometry of the head by using the fractal approach (a sketch of this step follows the list)
2. To develop a 2D, coupled thermo-mechanical model for the head disc interface (HDI)
3. To incorporate an equivalent topography of the HDI in a finite element model to analyze the stress and strain in the vicinity of the contact area
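As a sketch of step 1, a fractal profile with D = 1.5 can be generated from the Weierstrass–Mandelbrot function in its Majumdar–Bhushan form. This is a standard construction rather than necessarily the one used in the paper; the values of G, γ, the frequency band, and the random phases below are illustrative assumptions.

```python
import numpy as np

# Weierstrass-Mandelbrot fractal profile (Majumdar-Bhushan form):
#   z(x) = G**(D-1) * sum_n cos(2*pi*gamma**n*x + phi_n) / gamma**((2-D)*n)
# with profile fractal dimension 1 < D < 2. D = 1.5 as in the paper;
# G, gamma, the index range, and the phases phi_n are assumptions.
rng = np.random.default_rng(0)
D, G, gamma = 1.5, 1e-9, 1.5
n_min, n_max = 0, 40          # spatial-frequency indices, assumed band

x = np.linspace(0.0, 1e-4, 2000)   # a 100 um trace [m]
z = np.zeros_like(x)
for n in range(n_min, n_max + 1):
    phi = rng.uniform(0.0, 2.0 * np.pi)  # random phase avoids a cusp at x = 0
    z += np.cos(2.0 * np.pi * gamma**n * x + phi) / gamma**((2.0 - D) * n)
z *= G**(D - 1)

print(f"peak-to-valley of the simulated profile: {np.ptp(z):.3e}")
```

The G (roughness magnitude) and D (dimension) terms play against each other here: D sets how slowly the amplitudes of successive scales decay, while G scales the whole profile, which is one reason why, as noted later in this section, G is so often left uninterpreted.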
Finite element simulations were then performed to obtain the temperature rise, stress/strain fields, and contact pressure for an elastic–plastic homogeneous medium. The effect of the coefficient of friction and of surface interference on deformation was also investigated. The results were checked for accuracy with previous studies! They may have been consistent, but were they accurate? What does accuracy mean here? The choice of surface is interesting. They use a fractal dimension D of 1.5 to get a high spectral content; they could equally well have described the surface as Markov. To get a flavor of the sort of relationship which emerges from this kind of exercise, the following are taken from the paper. The load at a micro-contact is
$$ P = \frac{4\sqrt{2}}{3\sqrt{2\pi}}\, E^{*} G^{D-1} a^{(3-D)/2}, \qquad (7.289) $$

and the average pressure

$$ P_a = \frac{8\sqrt{2}}{3\sqrt{2\pi}}\, E^{*} G^{D-1} a^{(1-D)/2}. \qquad (7.290) $$
The heat into a micro-contact area is

$$ q = \xi \mu P_a V = \frac{8\sqrt{2}}{3\sqrt{2\pi}}\, \xi \mu V E^{*} G^{D-1} a^{(1-D)/2}. \qquad (7.291) $$
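Equation 7.291 is straightforward to exercise once the partition factor ξ = k1/(k1 + k2), defined below Equation 7.292, is in hand. A minimal sketch, with all values assumed purely for illustration:

```python
# Eq. (7.291): frictional heat flux into one body, q = xi*mu*Pa*V,
# with xi = k1/(k1 + k2). All values assumed for illustration.
k1, k2 = 15.0, 40.0   # thermal conductivities [W/(m K)] of the two bodies
xi = k1 / (k1 + k2)   # heat partition factor for body 1
mu, V = 0.3, 1.0      # friction coefficient, sliding speed [m/s]
Pa = 5.2e9            # mean micro-contact pressure [Pa], e.g., from (7.290)

q = xi * mu * Pa * V  # heat flux into the micro-contact [W/m^2]
print(f"xi = {xi:.2f}, q = {q:.2e} W/m^2")
```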
The maximum temperature rise at a micro-contact is

$$ T_{SM} = \frac{4\sqrt{2}}{3\pi}\, \frac{\xi}{k_1}\, \mu V E^{*} G^{D-1} a^{(2-D)/2}, \qquad (7.292) $$
where ξ is the heat partition factor, ξ = k1/(k1 + k2), and k1, k2 are the thermal conductivities. Since the frictional heat is dissipated through the real contact area, the maximum temperature is given by
$$ T_{SM} = \theta_{max} T_C, \qquad (7.293) $$

where

$$ \theta_{max} = \frac{4\sqrt{2}}{3\pi}\, \psi^{-(2-D)/2} \left( \frac{2-D}{D} \right)^{(2-D)/2} (G^{*})^{D-1} (A_r^{*})^{(2-D)/2}, \qquad (7.294) $$
where the temperature rise is $T_C = \mu E^{*} V A_{AF}^{1/2} / ( \pi (k_1 + k_2) )$, $G^{*} = G / A_{AF}^{1/2}$, and $A_{AF}$ is the apparent contact area. Conclusions reached include the following. The frictional heating increases the contact area and pressure. There is a greater temperature at the trailing-edge asperities than at the leading asperities because of the cumulative build-up of temperature. The rise in temperature of the asperities increases with the coefficient of friction and the degree of interference between the surfaces. The likelihood of cracking in the elasto-plastic medium is reduced because of frictional heating. Notice, in the numerous reported investigations which resort to fractal analysis to characterize the surfaces, how few authors can interpret the G (roughness) term, especially when it carries D indices.

7.3.2.8.1 Friction Heating and General Comment about Surface Texture

One of the interesting factors brought out in the work reported above is the asymmetric nature of the friction temperature distribution across the contact area in sliding. Although not explicitly stated in the paper, at the initiation of the asperity contacts the temperatures tend to be impulsive, and as the contact develops the cumulative content develops. In other words, “differential behavior develops into integral behavior as the contact progresses spatially and temporally.” This concept follows on from the comments made earlier in this section that initial contact could be treated more impulsively than at present: the integration would derive more and more from the clumping of heat impulses of progressively shorter interval and lower “strength,” as seen in Figure 7.88. The instantaneous temperature will have as its source a large number of impulses until, at its maximum, the number will be about the same as the number of asperities on both surfaces within the contact length divided by a factor of about two.
FIGURE 7.88 Schematic model of differential and integral development of contact temperature. (a) Two parts making contact, (b) impulsive temperature flashes, and (c) integration.
One key factor in friction is what happens to the energy generated. Obviously, at least in macroengineering, it emerges as heat which is either dissipated or which raises the temperature in the immediate surroundings. The key to the thermal effects is the time scale involved in a dynamic situation where the lateral velocities are significant. The impact time between contacting asperities is very short: at 1 m/sec, an impact between asperities of 50 μm wavelength takes about 10⁻⁴ seconds, which is very large when compared with about 10⁻⁵ seconds for the apparent time of contact when considering the nominal dimensions of bearings, for example. The role of texture temporally is the same as it was spatially: the apparent area of contact was replaced by the real area of contact based on the texture, and it could be that the real time of contact is similarly dependent on texture. The complication of this is that solving heat equations near to the surface should be tackled impulsively rather than by using the Fourier transform. In fact, to cater for texture, the Laplace transform, with its ability to work with Dirac impulses, is the best choice, because asperity contacts can best be formulated as trains of Dirac impulses, sometimes randomly spaced and in parallel, for each and every asperity contact. The conduction equation has to be solved:
$$ \frac{1}{K} \frac{\partial T}{\partial t} = \nabla^{2} T, \qquad (7.295) $$
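To illustrate the impulsive treatment being advocated, the classical response of a semi-infinite solid to an instantaneous plane heat pulse at its surface (a standard Carslaw-and-Jaeger solution of Equation 7.295) can be superposed over a Dirac train of asperity contacts. A minimal Python sketch, with the material values and pulse train chosen purely for illustration:

```python
import numpy as np

# Surface temperature rise of a semi-infinite solid due to an
# instantaneous plane heat pulse Q [J/m^2] released at time t0:
#   dT(t) = Q / (rho*c * sqrt(pi*K*(t - t0))),  t > t0,
# where K is the thermal diffusivity. Asperity contacts are then
# summed as a train of Dirac impulses (linear superposition).
rho_c = 3.6e6    # volumetric heat capacity rho*c [J/(m^3 K)], steel-like
K = 1.2e-5       # thermal diffusivity [m^2/s], steel-like
Q = 2.0e3        # heat per unit area per asperity impulse [J/m^2], assumed

def impulse_rise(t, t0):
    dt = t - t0
    out = np.zeros_like(t)
    m = dt > 0.0
    out[m] = Q / (rho_c * np.sqrt(np.pi * K * dt[m]))
    return out

t = np.linspace(1e-7, 2e-4, 4000)        # time axis [s]
pulses = np.arange(0.0, 1.5e-4, 1e-5)    # a contact every 10 us, assumed
T_rise = sum(impulse_rise(t, t0) for t0 in pulses)

# The ideal Dirac response diverges just after each pulse, so any peak
# value depends on the time grid; the accumulating mean level is the point.
print(f"mean temperature rise over the train: {T_rise.mean():.1f} K")
```

The divergence just after each impulse is exactly the “flash” character referred to above, and it is why the Laplace route, which handles Dirac inputs and initial conditions naturally, is preferable to steady-state or Fourier treatments for textured contacts.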
The two factors which are dominated by the texture are the initial and boundary conditions. Fortunately, by changing to Laplace transforms the initial time conditions are inherently included in the transformed equation: the left-hand part of the equation in Laplace form allows an almost infinitesimal initial time to be taken into account. The right-hand side
allows the small dimensions of the texture to be incorporated into the solution. Higher temperatures than expected are the outcome of typical calculations involving texture, for the simple reason that the conductivity is not large enough to allow the heat to dissipate from the restricted size and shape of the asperity. The implication is that the plastic mode of deformation of asperities is more likely to occur than the elastic, and therefore the working point of the function map moves to the right. The temperature at single asperities has been investigated by Yevtushenko and Ivanyk [237] in the presence of a fluid film. They make some assumptions about the asperity height distribution and asperity shape and work out the “steady-state” maximum asperity temperature! For a practical surface it would be interesting for them to employ the Laplace approach. Even without the impulsive approach exemplified by Laplace, very high temperatures are predicted, of the order of 1200 K, which is very near to important phase-change temperatures in steel and titanium alloys. The point is that failure to consider the asperities and their distribution as the initiator of such temperatures leads to artificially low temperature estimates. Also, it is necessary to take into account the shear properties of any lubricant between the surfaces [238]. As the “unit event” of function in this case is closely related to the unit contact, which in turn is influenced by the texture, it is important to consider all possible unit or basic mechanisms, especially if friction prediction is required. One such is the plowing effect of an asperity on a soft surface. This may sound irrelevant but unfortunately it is needed. The ultimate requirement is to predict the frictional forces that one surface experiences when rubbing against another, and in particular the plowing component of that force. Even with today’s computing power this aim has not been achieved, so the problem is being approached step by step, the first of these steps being to consider the action of one asperity. Azarkhin [240] developed an upper bound for an indentor plowing a soft surface. In this and other papers he assumes perfect plastic deformation and partially uses finite element analysis to check some of the results. Torrance [241] calculates the frictional force of a wedge moving over an elasto-plastic solid. Conditions for boundary frictional coefficients have been inferred from the results. He concludes that, for typical engineering surfaces, friction should be relatively insensitive to roughness but that it could rise dramatically with slopes > 5° and